Test Report: KVM_Linux_crio 19084

7ef7da66050fbee35d8f820fabec0ee963fd337e:2024-06-17:34930

Failed tests (32/314)

Order  Failed test  Duration (s)
30 TestAddons/parallel/Ingress 153.33
32 TestAddons/parallel/MetricsServer 332.77
45 TestAddons/StoppedEnableDisable 154.37
122 TestFunctional/parallel/ImageCommands/ImageBuild 4.89
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 10.81
164 TestMultiControlPlane/serial/StopSecondaryNode 141.93
166 TestMultiControlPlane/serial/RestartSecondaryNode 61.84
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 361.37
171 TestMultiControlPlane/serial/StopCluster 141.61
231 TestMultiNode/serial/RestartKeepsNodes 304.8
233 TestMultiNode/serial/StopMultiNode 141.35
240 TestPreload 168.47
248 TestKubernetesUpgrade 371.15
275 TestPause/serial/SecondStartNoReconfiguration 64.26
284 TestStartStop/group/old-k8s-version/serial/FirstStart 281.37
293 TestStartStop/group/no-preload/serial/Stop 139.05
295 TestStartStop/group/embed-certs/serial/Stop 139.18
298 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.39
299 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
301 TestStartStop/group/old-k8s-version/serial/DeployApp 0.5
302 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 95.72
306 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.1
309 TestStartStop/group/old-k8s-version/serial/SecondStart 701.9
310 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
312 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.1
313 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.11
314 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.15
315 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.32
316 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 462.64
317 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 542.81
318 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 322.93
319 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 179.1
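
To re-run one of these failed tests locally, the corresponding integration test can be invoked directly with go test. The sketch below is an outline under stated assumptions, not the exact CI invocation: the test/integration package path and the --minikube-start-args flag are assumed from the usual minikube repository layout, the driver and runtime values (--driver=kvm2 --container-runtime=crio) are taken from the Audit log further down, and a pre-built out/minikube-linux-amd64 is assumed to exist.

	# Re-run a single failed test (here the Ingress addon test) with the same driver/runtime as this job.
	# -run and -timeout are standard `go test` flags; --minikube-start-args is assumed to be the harness
	# flag for passing start arguments through to minikube.
	go test ./test/integration -run "TestAddons/parallel/Ingress" -timeout 90m \
	  -args --minikube-start-args="--driver=kvm2 --container-runtime=crio"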
TestAddons/parallel/Ingress (153.33s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-465706 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-465706 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-465706 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [83bd573f-7cbc-4b39-a885-d2024b2fb1f1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [83bd573f-7cbc-4b39-a885-d2024b2fb1f1] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.004241832s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-465706 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-465706 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m8.661449705s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-465706 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-465706 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.165
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-465706 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-465706 addons disable ingress-dns --alsologtostderr -v=1: (2.04955112s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-465706 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-465706 addons disable ingress --alsologtostderr -v=1: (7.697372803s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-465706 -n addons-465706
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-465706 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-465706 logs -n 25: (1.195466168s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-999061 | jenkins | v1.33.1 | 17 Jun 24 10:44 UTC |                     |
	|         | -p download-only-999061                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 17 Jun 24 10:44 UTC | 17 Jun 24 10:44 UTC |
	| delete  | -p download-only-999061                                                                     | download-only-999061 | jenkins | v1.33.1 | 17 Jun 24 10:44 UTC | 17 Jun 24 10:44 UTC |
	| delete  | -p download-only-033984                                                                     | download-only-033984 | jenkins | v1.33.1 | 17 Jun 24 10:44 UTC | 17 Jun 24 10:44 UTC |
	| delete  | -p download-only-999061                                                                     | download-only-999061 | jenkins | v1.33.1 | 17 Jun 24 10:44 UTC | 17 Jun 24 10:44 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-716953 | jenkins | v1.33.1 | 17 Jun 24 10:44 UTC |                     |
	|         | binary-mirror-716953                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:44727                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-716953                                                                     | binary-mirror-716953 | jenkins | v1.33.1 | 17 Jun 24 10:44 UTC | 17 Jun 24 10:44 UTC |
	| addons  | enable dashboard -p                                                                         | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:44 UTC |                     |
	|         | addons-465706                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:44 UTC |                     |
	|         | addons-465706                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-465706 --wait=true                                                                | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:44 UTC | 17 Jun 24 10:46 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:46 UTC | 17 Jun 24 10:46 UTC |
	|         | -p addons-465706                                                                            |                      |         |         |                     |                     |
	| addons  | addons-465706 addons disable                                                                | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:47 UTC | 17 Jun 24 10:47 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:47 UTC | 17 Jun 24 10:47 UTC |
	|         | -p addons-465706                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:47 UTC | 17 Jun 24 10:47 UTC |
	|         | addons-465706                                                                               |                      |         |         |                     |                     |
	| ip      | addons-465706 ip                                                                            | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:47 UTC | 17 Jun 24 10:47 UTC |
	| addons  | addons-465706 addons disable                                                                | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:47 UTC | 17 Jun 24 10:47 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-465706 ssh cat                                                                       | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:47 UTC | 17 Jun 24 10:47 UTC |
	|         | /opt/local-path-provisioner/pvc-f296beee-9e3b-4086-a049-00efb1334af0_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-465706 addons disable                                                                | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:47 UTC | 17 Jun 24 10:47 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-465706 ssh curl -s                                                                   | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:47 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:48 UTC | 17 Jun 24 10:48 UTC |
	|         | addons-465706                                                                               |                      |         |         |                     |                     |
	| addons  | addons-465706 addons                                                                        | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:48 UTC | 17 Jun 24 10:48 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-465706 addons                                                                        | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:48 UTC | 17 Jun 24 10:48 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-465706 ip                                                                            | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:49 UTC | 17 Jun 24 10:49 UTC |
	| addons  | addons-465706 addons disable                                                                | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:49 UTC | 17 Jun 24 10:49 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-465706 addons disable                                                                | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:49 UTC | 17 Jun 24 10:49 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/17 10:44:27
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0617 10:44:27.955434  120744 out.go:291] Setting OutFile to fd 1 ...
	I0617 10:44:27.955608  120744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 10:44:27.955618  120744 out.go:304] Setting ErrFile to fd 2...
	I0617 10:44:27.955623  120744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 10:44:27.955818  120744 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 10:44:27.956449  120744 out.go:298] Setting JSON to false
	I0617 10:44:27.957418  120744 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":1615,"bootTime":1718619453,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0617 10:44:27.957481  120744 start.go:139] virtualization: kvm guest
	I0617 10:44:27.959489  120744 out.go:177] * [addons-465706] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0617 10:44:27.960639  120744 notify.go:220] Checking for updates...
	I0617 10:44:27.960647  120744 out.go:177]   - MINIKUBE_LOCATION=19084
	I0617 10:44:27.962147  120744 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 10:44:27.963411  120744 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 10:44:27.964894  120744 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 10:44:27.966317  120744 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0617 10:44:27.967418  120744 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 10:44:27.968881  120744 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 10:44:28.000585  120744 out.go:177] * Using the kvm2 driver based on user configuration
	I0617 10:44:28.001772  120744 start.go:297] selected driver: kvm2
	I0617 10:44:28.001787  120744 start.go:901] validating driver "kvm2" against <nil>
	I0617 10:44:28.001803  120744 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 10:44:28.002465  120744 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 10:44:28.002525  120744 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19084-112967/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0617 10:44:28.017207  120744 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0617 10:44:28.017258  120744 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 10:44:28.017507  120744 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 10:44:28.017536  120744 cni.go:84] Creating CNI manager for ""
	I0617 10:44:28.017543  120744 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 10:44:28.017549  120744 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0617 10:44:28.017604  120744 start.go:340] cluster config:
	{Name:addons-465706 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-465706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 10:44:28.017711  120744 iso.go:125] acquiring lock: {Name:mk4a199ad46ed9ee04de7b54caf7cc64218fe80c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 10:44:28.019300  120744 out.go:177] * Starting "addons-465706" primary control-plane node in "addons-465706" cluster
	I0617 10:44:28.020368  120744 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 10:44:28.020400  120744 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0617 10:44:28.020408  120744 cache.go:56] Caching tarball of preloaded images
	I0617 10:44:28.020482  120744 preload.go:173] Found /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0617 10:44:28.020492  120744 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0617 10:44:28.020826  120744 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/config.json ...
	I0617 10:44:28.020848  120744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/config.json: {Name:mkffc5f87639ab857d7a39c36743c03a7f1d71d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 10:44:28.020969  120744 start.go:360] acquireMachinesLock for addons-465706: {Name:mk519b8956d160a9d2b042f25b899a5ee0efa72e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 10:44:28.021010  120744 start.go:364] duration metric: took 28.888µs to acquireMachinesLock for "addons-465706"
	I0617 10:44:28.021026  120744 start.go:93] Provisioning new machine with config: &{Name:addons-465706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-465706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 10:44:28.021073  120744 start.go:125] createHost starting for "" (driver="kvm2")
	I0617 10:44:28.022439  120744 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0617 10:44:28.022562  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:44:28.022611  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:44:28.036677  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44957
	I0617 10:44:28.037174  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:44:28.037751  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:44:28.037772  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:44:28.038172  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:44:28.038397  120744 main.go:141] libmachine: (addons-465706) Calling .GetMachineName
	I0617 10:44:28.038557  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:44:28.038774  120744 start.go:159] libmachine.API.Create for "addons-465706" (driver="kvm2")
	I0617 10:44:28.038801  120744 client.go:168] LocalClient.Create starting
	I0617 10:44:28.038840  120744 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem
	I0617 10:44:28.381448  120744 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem
	I0617 10:44:28.499704  120744 main.go:141] libmachine: Running pre-create checks...
	I0617 10:44:28.499733  120744 main.go:141] libmachine: (addons-465706) Calling .PreCreateCheck
	I0617 10:44:28.500260  120744 main.go:141] libmachine: (addons-465706) Calling .GetConfigRaw
	I0617 10:44:28.500885  120744 main.go:141] libmachine: Creating machine...
	I0617 10:44:28.500899  120744 main.go:141] libmachine: (addons-465706) Calling .Create
	I0617 10:44:28.501059  120744 main.go:141] libmachine: (addons-465706) Creating KVM machine...
	I0617 10:44:28.502437  120744 main.go:141] libmachine: (addons-465706) DBG | found existing default KVM network
	I0617 10:44:28.503224  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:28.503082  120766 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015c10}
	I0617 10:44:28.503282  120744 main.go:141] libmachine: (addons-465706) DBG | created network xml: 
	I0617 10:44:28.503309  120744 main.go:141] libmachine: (addons-465706) DBG | <network>
	I0617 10:44:28.503353  120744 main.go:141] libmachine: (addons-465706) DBG |   <name>mk-addons-465706</name>
	I0617 10:44:28.503373  120744 main.go:141] libmachine: (addons-465706) DBG |   <dns enable='no'/>
	I0617 10:44:28.503380  120744 main.go:141] libmachine: (addons-465706) DBG |   
	I0617 10:44:28.503386  120744 main.go:141] libmachine: (addons-465706) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0617 10:44:28.503393  120744 main.go:141] libmachine: (addons-465706) DBG |     <dhcp>
	I0617 10:44:28.503398  120744 main.go:141] libmachine: (addons-465706) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0617 10:44:28.503404  120744 main.go:141] libmachine: (addons-465706) DBG |     </dhcp>
	I0617 10:44:28.503410  120744 main.go:141] libmachine: (addons-465706) DBG |   </ip>
	I0617 10:44:28.503414  120744 main.go:141] libmachine: (addons-465706) DBG |   
	I0617 10:44:28.503419  120744 main.go:141] libmachine: (addons-465706) DBG | </network>
	I0617 10:44:28.503427  120744 main.go:141] libmachine: (addons-465706) DBG | 
	I0617 10:44:28.508778  120744 main.go:141] libmachine: (addons-465706) DBG | trying to create private KVM network mk-addons-465706 192.168.39.0/24...
	I0617 10:44:28.575241  120744 main.go:141] libmachine: (addons-465706) DBG | private KVM network mk-addons-465706 192.168.39.0/24 created
	I0617 10:44:28.575274  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:28.575193  120766 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 10:44:28.575308  120744 main.go:141] libmachine: (addons-465706) Setting up store path in /home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706 ...
	I0617 10:44:28.575331  120744 main.go:141] libmachine: (addons-465706) Building disk image from file:///home/jenkins/minikube-integration/19084-112967/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso
	I0617 10:44:28.575359  120744 main.go:141] libmachine: (addons-465706) Downloading /home/jenkins/minikube-integration/19084-112967/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19084-112967/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso...
	I0617 10:44:28.813936  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:28.813743  120766 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa...
	I0617 10:44:28.942638  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:28.942504  120766 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/addons-465706.rawdisk...
	I0617 10:44:28.942666  120744 main.go:141] libmachine: (addons-465706) DBG | Writing magic tar header
	I0617 10:44:28.942745  120744 main.go:141] libmachine: (addons-465706) DBG | Writing SSH key tar header
	I0617 10:44:28.942793  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:28.942642  120766 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706 ...
	I0617 10:44:28.942823  120744 main.go:141] libmachine: (addons-465706) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706
	I0617 10:44:28.942844  120744 main.go:141] libmachine: (addons-465706) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706 (perms=drwx------)
	I0617 10:44:28.942876  120744 main.go:141] libmachine: (addons-465706) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube/machines
	I0617 10:44:28.942890  120744 main.go:141] libmachine: (addons-465706) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube/machines (perms=drwxr-xr-x)
	I0617 10:44:28.942903  120744 main.go:141] libmachine: (addons-465706) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube (perms=drwxr-xr-x)
	I0617 10:44:28.942912  120744 main.go:141] libmachine: (addons-465706) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967 (perms=drwxrwxr-x)
	I0617 10:44:28.942920  120744 main.go:141] libmachine: (addons-465706) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0617 10:44:28.942926  120744 main.go:141] libmachine: (addons-465706) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0617 10:44:28.942938  120744 main.go:141] libmachine: (addons-465706) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 10:44:28.942947  120744 main.go:141] libmachine: (addons-465706) Creating domain...
	I0617 10:44:28.942961  120744 main.go:141] libmachine: (addons-465706) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967
	I0617 10:44:28.942972  120744 main.go:141] libmachine: (addons-465706) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0617 10:44:28.942984  120744 main.go:141] libmachine: (addons-465706) DBG | Checking permissions on dir: /home/jenkins
	I0617 10:44:28.942995  120744 main.go:141] libmachine: (addons-465706) DBG | Checking permissions on dir: /home
	I0617 10:44:28.943005  120744 main.go:141] libmachine: (addons-465706) DBG | Skipping /home - not owner
	I0617 10:44:28.944173  120744 main.go:141] libmachine: (addons-465706) define libvirt domain using xml: 
	I0617 10:44:28.944216  120744 main.go:141] libmachine: (addons-465706) <domain type='kvm'>
	I0617 10:44:28.944226  120744 main.go:141] libmachine: (addons-465706)   <name>addons-465706</name>
	I0617 10:44:28.944235  120744 main.go:141] libmachine: (addons-465706)   <memory unit='MiB'>4000</memory>
	I0617 10:44:28.944241  120744 main.go:141] libmachine: (addons-465706)   <vcpu>2</vcpu>
	I0617 10:44:28.944245  120744 main.go:141] libmachine: (addons-465706)   <features>
	I0617 10:44:28.944251  120744 main.go:141] libmachine: (addons-465706)     <acpi/>
	I0617 10:44:28.944258  120744 main.go:141] libmachine: (addons-465706)     <apic/>
	I0617 10:44:28.944263  120744 main.go:141] libmachine: (addons-465706)     <pae/>
	I0617 10:44:28.944270  120744 main.go:141] libmachine: (addons-465706)     
	I0617 10:44:28.944275  120744 main.go:141] libmachine: (addons-465706)   </features>
	I0617 10:44:28.944281  120744 main.go:141] libmachine: (addons-465706)   <cpu mode='host-passthrough'>
	I0617 10:44:28.944286  120744 main.go:141] libmachine: (addons-465706)   
	I0617 10:44:28.944304  120744 main.go:141] libmachine: (addons-465706)   </cpu>
	I0617 10:44:28.944311  120744 main.go:141] libmachine: (addons-465706)   <os>
	I0617 10:44:28.944317  120744 main.go:141] libmachine: (addons-465706)     <type>hvm</type>
	I0617 10:44:28.944324  120744 main.go:141] libmachine: (addons-465706)     <boot dev='cdrom'/>
	I0617 10:44:28.944329  120744 main.go:141] libmachine: (addons-465706)     <boot dev='hd'/>
	I0617 10:44:28.944337  120744 main.go:141] libmachine: (addons-465706)     <bootmenu enable='no'/>
	I0617 10:44:28.944370  120744 main.go:141] libmachine: (addons-465706)   </os>
	I0617 10:44:28.944391  120744 main.go:141] libmachine: (addons-465706)   <devices>
	I0617 10:44:28.944405  120744 main.go:141] libmachine: (addons-465706)     <disk type='file' device='cdrom'>
	I0617 10:44:28.944421  120744 main.go:141] libmachine: (addons-465706)       <source file='/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/boot2docker.iso'/>
	I0617 10:44:28.944435  120744 main.go:141] libmachine: (addons-465706)       <target dev='hdc' bus='scsi'/>
	I0617 10:44:28.944446  120744 main.go:141] libmachine: (addons-465706)       <readonly/>
	I0617 10:44:28.944458  120744 main.go:141] libmachine: (addons-465706)     </disk>
	I0617 10:44:28.944470  120744 main.go:141] libmachine: (addons-465706)     <disk type='file' device='disk'>
	I0617 10:44:28.944496  120744 main.go:141] libmachine: (addons-465706)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0617 10:44:28.944518  120744 main.go:141] libmachine: (addons-465706)       <source file='/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/addons-465706.rawdisk'/>
	I0617 10:44:28.944531  120744 main.go:141] libmachine: (addons-465706)       <target dev='hda' bus='virtio'/>
	I0617 10:44:28.944542  120744 main.go:141] libmachine: (addons-465706)     </disk>
	I0617 10:44:28.944554  120744 main.go:141] libmachine: (addons-465706)     <interface type='network'>
	I0617 10:44:28.944567  120744 main.go:141] libmachine: (addons-465706)       <source network='mk-addons-465706'/>
	I0617 10:44:28.944579  120744 main.go:141] libmachine: (addons-465706)       <model type='virtio'/>
	I0617 10:44:28.944590  120744 main.go:141] libmachine: (addons-465706)     </interface>
	I0617 10:44:28.944601  120744 main.go:141] libmachine: (addons-465706)     <interface type='network'>
	I0617 10:44:28.944610  120744 main.go:141] libmachine: (addons-465706)       <source network='default'/>
	I0617 10:44:28.944616  120744 main.go:141] libmachine: (addons-465706)       <model type='virtio'/>
	I0617 10:44:28.944622  120744 main.go:141] libmachine: (addons-465706)     </interface>
	I0617 10:44:28.944631  120744 main.go:141] libmachine: (addons-465706)     <serial type='pty'>
	I0617 10:44:28.944638  120744 main.go:141] libmachine: (addons-465706)       <target port='0'/>
	I0617 10:44:28.944643  120744 main.go:141] libmachine: (addons-465706)     </serial>
	I0617 10:44:28.944653  120744 main.go:141] libmachine: (addons-465706)     <console type='pty'>
	I0617 10:44:28.944659  120744 main.go:141] libmachine: (addons-465706)       <target type='serial' port='0'/>
	I0617 10:44:28.944665  120744 main.go:141] libmachine: (addons-465706)     </console>
	I0617 10:44:28.944671  120744 main.go:141] libmachine: (addons-465706)     <rng model='virtio'>
	I0617 10:44:28.944679  120744 main.go:141] libmachine: (addons-465706)       <backend model='random'>/dev/random</backend>
	I0617 10:44:28.944684  120744 main.go:141] libmachine: (addons-465706)     </rng>
	I0617 10:44:28.944694  120744 main.go:141] libmachine: (addons-465706)     
	I0617 10:44:28.944699  120744 main.go:141] libmachine: (addons-465706)     
	I0617 10:44:28.944706  120744 main.go:141] libmachine: (addons-465706)   </devices>
	I0617 10:44:28.944724  120744 main.go:141] libmachine: (addons-465706) </domain>
	I0617 10:44:28.944741  120744 main.go:141] libmachine: (addons-465706) 
	I0617 10:44:28.950418  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:85:f6:97 in network default
	I0617 10:44:28.950926  120744 main.go:141] libmachine: (addons-465706) Ensuring networks are active...
	I0617 10:44:28.950972  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:28.951585  120744 main.go:141] libmachine: (addons-465706) Ensuring network default is active
	I0617 10:44:28.951897  120744 main.go:141] libmachine: (addons-465706) Ensuring network mk-addons-465706 is active
	I0617 10:44:28.955554  120744 main.go:141] libmachine: (addons-465706) Getting domain xml...
	I0617 10:44:28.956178  120744 main.go:141] libmachine: (addons-465706) Creating domain...
	I0617 10:44:30.304315  120744 main.go:141] libmachine: (addons-465706) Waiting to get IP...
	I0617 10:44:30.305032  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:30.305530  120744 main.go:141] libmachine: (addons-465706) DBG | unable to find current IP address of domain addons-465706 in network mk-addons-465706
	I0617 10:44:30.305558  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:30.305478  120766 retry.go:31] will retry after 205.154739ms: waiting for machine to come up
	I0617 10:44:30.511772  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:30.512203  120744 main.go:141] libmachine: (addons-465706) DBG | unable to find current IP address of domain addons-465706 in network mk-addons-465706
	I0617 10:44:30.512237  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:30.512148  120766 retry.go:31] will retry after 373.675802ms: waiting for machine to come up
	I0617 10:44:30.887876  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:30.888324  120744 main.go:141] libmachine: (addons-465706) DBG | unable to find current IP address of domain addons-465706 in network mk-addons-465706
	I0617 10:44:30.888350  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:30.888289  120766 retry.go:31] will retry after 304.632968ms: waiting for machine to come up
	I0617 10:44:31.194758  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:31.195188  120744 main.go:141] libmachine: (addons-465706) DBG | unable to find current IP address of domain addons-465706 in network mk-addons-465706
	I0617 10:44:31.195214  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:31.195138  120766 retry.go:31] will retry after 440.608798ms: waiting for machine to come up
	I0617 10:44:31.637691  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:31.638085  120744 main.go:141] libmachine: (addons-465706) DBG | unable to find current IP address of domain addons-465706 in network mk-addons-465706
	I0617 10:44:31.638118  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:31.638065  120766 retry.go:31] will retry after 717.121475ms: waiting for machine to come up
	I0617 10:44:32.357058  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:32.357539  120744 main.go:141] libmachine: (addons-465706) DBG | unable to find current IP address of domain addons-465706 in network mk-addons-465706
	I0617 10:44:32.357567  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:32.357475  120766 retry.go:31] will retry after 575.962657ms: waiting for machine to come up
	I0617 10:44:32.936828  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:32.937257  120744 main.go:141] libmachine: (addons-465706) DBG | unable to find current IP address of domain addons-465706 in network mk-addons-465706
	I0617 10:44:32.937289  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:32.937200  120766 retry.go:31] will retry after 765.587119ms: waiting for machine to come up
	I0617 10:44:33.704859  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:33.705362  120744 main.go:141] libmachine: (addons-465706) DBG | unable to find current IP address of domain addons-465706 in network mk-addons-465706
	I0617 10:44:33.705418  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:33.705334  120766 retry.go:31] will retry after 983.377485ms: waiting for machine to come up
	I0617 10:44:34.690431  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:34.690787  120744 main.go:141] libmachine: (addons-465706) DBG | unable to find current IP address of domain addons-465706 in network mk-addons-465706
	I0617 10:44:34.690811  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:34.690736  120766 retry.go:31] will retry after 1.699511808s: waiting for machine to come up
	I0617 10:44:36.391533  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:36.391970  120744 main.go:141] libmachine: (addons-465706) DBG | unable to find current IP address of domain addons-465706 in network mk-addons-465706
	I0617 10:44:36.392013  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:36.391924  120766 retry.go:31] will retry after 2.204970783s: waiting for machine to come up
	I0617 10:44:38.598427  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:38.598765  120744 main.go:141] libmachine: (addons-465706) DBG | unable to find current IP address of domain addons-465706 in network mk-addons-465706
	I0617 10:44:38.598814  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:38.598742  120766 retry.go:31] will retry after 2.728575827s: waiting for machine to come up
	I0617 10:44:41.328631  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:41.328974  120744 main.go:141] libmachine: (addons-465706) DBG | unable to find current IP address of domain addons-465706 in network mk-addons-465706
	I0617 10:44:41.328997  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:41.328923  120766 retry.go:31] will retry after 2.416284504s: waiting for machine to come up
	I0617 10:44:43.747002  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:43.747523  120744 main.go:141] libmachine: (addons-465706) DBG | unable to find current IP address of domain addons-465706 in network mk-addons-465706
	I0617 10:44:43.747559  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:43.747445  120766 retry.go:31] will retry after 3.42194274s: waiting for machine to come up
	I0617 10:44:47.173064  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:47.173527  120744 main.go:141] libmachine: (addons-465706) DBG | unable to find current IP address of domain addons-465706 in network mk-addons-465706
	I0617 10:44:47.173558  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:47.173482  120766 retry.go:31] will retry after 4.529341226s: waiting for machine to come up
	I0617 10:44:51.707208  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:51.707707  120744 main.go:141] libmachine: (addons-465706) Found IP for machine: 192.168.39.165
	I0617 10:44:51.707729  120744 main.go:141] libmachine: (addons-465706) Reserving static IP address...
	I0617 10:44:51.707756  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has current primary IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:51.708099  120744 main.go:141] libmachine: (addons-465706) DBG | unable to find host DHCP lease matching {name: "addons-465706", mac: "52:54:00:56:ab:02", ip: "192.168.39.165"} in network mk-addons-465706
	I0617 10:44:51.778052  120744 main.go:141] libmachine: (addons-465706) DBG | Getting to WaitForSSH function...
	I0617 10:44:51.778089  120744 main.go:141] libmachine: (addons-465706) Reserved static IP address: 192.168.39.165
	I0617 10:44:51.778116  120744 main.go:141] libmachine: (addons-465706) Waiting for SSH to be available...
	I0617 10:44:51.780684  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:51.781009  120744 main.go:141] libmachine: (addons-465706) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706
	I0617 10:44:51.781029  120744 main.go:141] libmachine: (addons-465706) DBG | unable to find defined IP address of network mk-addons-465706 interface with MAC address 52:54:00:56:ab:02
	I0617 10:44:51.781194  120744 main.go:141] libmachine: (addons-465706) DBG | Using SSH client type: external
	I0617 10:44:51.781216  120744 main.go:141] libmachine: (addons-465706) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa (-rw-------)
	I0617 10:44:51.781278  120744 main.go:141] libmachine: (addons-465706) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 10:44:51.781304  120744 main.go:141] libmachine: (addons-465706) DBG | About to run SSH command:
	I0617 10:44:51.781328  120744 main.go:141] libmachine: (addons-465706) DBG | exit 0
	I0617 10:44:51.792008  120744 main.go:141] libmachine: (addons-465706) DBG | SSH cmd err, output: exit status 255: 
	I0617 10:44:51.792036  120744 main.go:141] libmachine: (addons-465706) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0617 10:44:51.792054  120744 main.go:141] libmachine: (addons-465706) DBG | command : exit 0
	I0617 10:44:51.792077  120744 main.go:141] libmachine: (addons-465706) DBG | err     : exit status 255
	I0617 10:44:51.792089  120744 main.go:141] libmachine: (addons-465706) DBG | output  : 
	I0617 10:44:54.793800  120744 main.go:141] libmachine: (addons-465706) DBG | Getting to WaitForSSH function...
	I0617 10:44:54.796150  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:54.796525  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:44:54.796554  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:54.796575  120744 main.go:141] libmachine: (addons-465706) DBG | Using SSH client type: external
	I0617 10:44:54.796610  120744 main.go:141] libmachine: (addons-465706) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa (-rw-------)
	I0617 10:44:54.796647  120744 main.go:141] libmachine: (addons-465706) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.165 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 10:44:54.796659  120744 main.go:141] libmachine: (addons-465706) DBG | About to run SSH command:
	I0617 10:44:54.796664  120744 main.go:141] libmachine: (addons-465706) DBG | exit 0
	I0617 10:44:54.919754  120744 main.go:141] libmachine: (addons-465706) DBG | SSH cmd err, output: <nil>: 
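The retry above (an external ssh run of "exit 0" that first fails with status 255, then succeeds once the guest has finished booting) is the driver's SSH-readiness probe. A minimal sketch of that kind of poll loop in Go using os/exec follows; the host, key path and back-off are placeholders lifted from the log, not minikube's actual implementation:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH runs "exit 0" on the remote host until ssh exits 0 or the
// deadline passes, mirroring the WaitForSSH retries in the log above.
func waitForSSH(host, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+host, "exit 0")
		if err := cmd.Run(); err == nil {
			return nil // guest is reachable over SSH
		}
		time.Sleep(3 * time.Second) // roughly the gap between the two attempts above
	}
	return fmt.Errorf("ssh to %s not ready after %s", host, timeout)
}

func main() {
	if err := waitForSSH("192.168.39.165", "/path/to/id_rsa", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}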
	I0617 10:44:54.920057  120744 main.go:141] libmachine: (addons-465706) KVM machine creation complete!
	I0617 10:44:54.920352  120744 main.go:141] libmachine: (addons-465706) Calling .GetConfigRaw
	I0617 10:44:54.920915  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:44:54.921154  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:44:54.921325  120744 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0617 10:44:54.921341  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:44:54.922475  120744 main.go:141] libmachine: Detecting operating system of created instance...
	I0617 10:44:54.922487  120744 main.go:141] libmachine: Waiting for SSH to be available...
	I0617 10:44:54.922493  120744 main.go:141] libmachine: Getting to WaitForSSH function...
	I0617 10:44:54.922499  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:44:54.924743  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:54.925084  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:44:54.925110  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:54.925285  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:44:54.925451  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:44:54.925599  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:44:54.925695  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:44:54.925832  120744 main.go:141] libmachine: Using SSH client type: native
	I0617 10:44:54.926051  120744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0617 10:44:54.926071  120744 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0617 10:44:55.026866  120744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 10:44:55.026895  120744 main.go:141] libmachine: Detecting the provisioner...
	I0617 10:44:55.026905  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:44:55.029529  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.029843  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:44:55.029901  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.029995  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:44:55.030192  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:44:55.030420  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:44:55.030559  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:44:55.030727  120744 main.go:141] libmachine: Using SSH client type: native
	I0617 10:44:55.030943  120744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0617 10:44:55.030956  120744 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0617 10:44:55.132335  120744 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0617 10:44:55.132405  120744 main.go:141] libmachine: found compatible host: buildroot
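Provisioner detection works by reading /etc/os-release over SSH and matching on the distribution fields (ID=buildroot in the output above). A rough sketch of that parsing step, fed with the exact output printed in the log; illustrative only, not minikube's own detector:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// osReleaseID pulls the ID= field out of /etc/os-release content.
func osReleaseID(content string) string {
	sc := bufio.NewScanner(strings.NewReader(content))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
		}
	}
	return ""
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
	fmt.Println(osReleaseID(sample)) // prints: buildroot
}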
	I0617 10:44:55.132412  120744 main.go:141] libmachine: Provisioning with buildroot...
	I0617 10:44:55.132420  120744 main.go:141] libmachine: (addons-465706) Calling .GetMachineName
	I0617 10:44:55.132687  120744 buildroot.go:166] provisioning hostname "addons-465706"
	I0617 10:44:55.132712  120744 main.go:141] libmachine: (addons-465706) Calling .GetMachineName
	I0617 10:44:55.132897  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:44:55.135736  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.136157  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:44:55.136184  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.136328  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:44:55.136505  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:44:55.136680  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:44:55.136817  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:44:55.136986  120744 main.go:141] libmachine: Using SSH client type: native
	I0617 10:44:55.137151  120744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0617 10:44:55.137164  120744 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-465706 && echo "addons-465706" | sudo tee /etc/hostname
	I0617 10:44:55.253835  120744 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-465706
	
	I0617 10:44:55.253865  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:44:55.256633  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.257047  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:44:55.257078  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.257217  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:44:55.257430  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:44:55.257605  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:44:55.257768  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:44:55.257953  120744 main.go:141] libmachine: Using SSH client type: native
	I0617 10:44:55.258139  120744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0617 10:44:55.258162  120744 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-465706' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-465706/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-465706' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 10:44:55.368762  120744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 10:44:55.368799  120744 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 10:44:55.368825  120744 buildroot.go:174] setting up certificates
	I0617 10:44:55.368842  120744 provision.go:84] configureAuth start
	I0617 10:44:55.368854  120744 main.go:141] libmachine: (addons-465706) Calling .GetMachineName
	I0617 10:44:55.369138  120744 main.go:141] libmachine: (addons-465706) Calling .GetIP
	I0617 10:44:55.371766  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.372132  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:44:55.372160  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.372255  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:44:55.374409  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.374814  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:44:55.374841  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.374992  120744 provision.go:143] copyHostCerts
	I0617 10:44:55.375090  120744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 10:44:55.375233  120744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 10:44:55.375331  120744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 10:44:55.375400  120744 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.addons-465706 san=[127.0.0.1 192.168.39.165 addons-465706 localhost minikube]
	I0617 10:44:55.533485  120744 provision.go:177] copyRemoteCerts
	I0617 10:44:55.533547  120744 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 10:44:55.533575  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:44:55.536165  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.536487  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:44:55.536507  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.536709  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:44:55.536899  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:44:55.537011  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:44:55.537176  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:44:55.617634  120744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 10:44:55.641398  120744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0617 10:44:55.665130  120744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0617 10:44:55.688524  120744 provision.go:87] duration metric: took 319.667768ms to configureAuth
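configureAuth generates a server certificate signed by the local CA, with the SANs listed above (127.0.0.1, 192.168.39.165, addons-465706, localhost, minikube) and org jenkins.addons-465706, then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A compressed sketch of equivalent certificate generation with Go's crypto/x509; error handling is elided and this is not minikube's actual cert helper:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (stands in for the ca.pem / ca-key.pem pair in the log).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs and org from the provisioning log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-465706"}},
		DNSNames:     []string{"addons-465706", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.165")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}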
	I0617 10:44:55.688551  120744 buildroot.go:189] setting minikube options for container-runtime
	I0617 10:44:55.688736  120744 config.go:182] Loaded profile config "addons-465706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 10:44:55.688836  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:44:55.691442  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.691847  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:44:55.691871  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.692083  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:44:55.692275  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:44:55.692468  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:44:55.692569  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:44:55.692717  120744 main.go:141] libmachine: Using SSH client type: native
	I0617 10:44:55.692914  120744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0617 10:44:55.692930  120744 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 10:44:55.958144  120744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 10:44:55.958181  120744 main.go:141] libmachine: Checking connection to Docker...
	I0617 10:44:55.958192  120744 main.go:141] libmachine: (addons-465706) Calling .GetURL
	I0617 10:44:55.959665  120744 main.go:141] libmachine: (addons-465706) DBG | Using libvirt version 6000000
	I0617 10:44:55.961978  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.962303  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:44:55.962332  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.962499  120744 main.go:141] libmachine: Docker is up and running!
	I0617 10:44:55.962518  120744 main.go:141] libmachine: Reticulating splines...
	I0617 10:44:55.962528  120744 client.go:171] duration metric: took 27.923717949s to LocalClient.Create
	I0617 10:44:55.962556  120744 start.go:167] duration metric: took 27.923781269s to libmachine.API.Create "addons-465706"
	I0617 10:44:55.962649  120744 start.go:293] postStartSetup for "addons-465706" (driver="kvm2")
	I0617 10:44:55.962664  120744 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 10:44:55.962691  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:44:55.962982  120744 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 10:44:55.963011  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:44:55.965590  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.965916  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:44:55.965942  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.966140  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:44:55.966329  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:44:55.966481  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:44:55.966621  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:44:56.049906  120744 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 10:44:56.054363  120744 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 10:44:56.054388  120744 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 10:44:56.054468  120744 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 10:44:56.054494  120744 start.go:296] duration metric: took 91.836115ms for postStartSetup
	I0617 10:44:56.054540  120744 main.go:141] libmachine: (addons-465706) Calling .GetConfigRaw
	I0617 10:44:56.055139  120744 main.go:141] libmachine: (addons-465706) Calling .GetIP
	I0617 10:44:56.057965  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:56.058201  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:44:56.058228  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:56.058472  120744 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/config.json ...
	I0617 10:44:56.058710  120744 start.go:128] duration metric: took 28.037606315s to createHost
	I0617 10:44:56.058746  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:44:56.061067  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:56.061403  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:44:56.061432  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:56.061560  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:44:56.061764  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:44:56.061912  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:44:56.062055  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:44:56.062239  120744 main.go:141] libmachine: Using SSH client type: native
	I0617 10:44:56.062406  120744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0617 10:44:56.062420  120744 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0617 10:44:56.164239  120744 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718621096.143697356
	
	I0617 10:44:56.164268  120744 fix.go:216] guest clock: 1718621096.143697356
	I0617 10:44:56.164279  120744 fix.go:229] Guest: 2024-06-17 10:44:56.143697356 +0000 UTC Remote: 2024-06-17 10:44:56.058725836 +0000 UTC m=+28.138871245 (delta=84.97152ms)
	I0617 10:44:56.164328  120744 fix.go:200] guest clock delta is within tolerance: 84.97152ms
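The clock check runs date +%s.%N on the guest, parses the result, and compares it with the host clock; here the delta came out to roughly 85ms and passed. A small sketch of the parse-and-compare step; the one-second tolerance below is a placeholder for illustration, since the log only reports that ~85ms was within tolerance:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the "seconds.nanoseconds" string printed by
// date +%s.%N (e.g. "1718621096.143697356") into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1718621096.143697356")
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // assumed threshold, not minikube's actual value
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v (within %v: %t)\n", delta, tolerance, delta <= tolerance)
}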
	I0617 10:44:56.164337  120744 start.go:83] releasing machines lock for "addons-465706", held for 28.143318845s
	I0617 10:44:56.164366  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:44:56.164690  120744 main.go:141] libmachine: (addons-465706) Calling .GetIP
	I0617 10:44:56.167163  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:56.167526  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:44:56.167557  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:56.167729  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:44:56.168332  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:44:56.168517  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:44:56.168611  120744 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 10:44:56.168654  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:44:56.168887  120744 ssh_runner.go:195] Run: cat /version.json
	I0617 10:44:56.168913  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:44:56.171110  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:56.171321  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:44:56.171341  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:56.171375  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:56.171546  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:44:56.171721  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:44:56.171800  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:44:56.171822  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:56.171901  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:44:56.171991  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:44:56.172079  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:44:56.172145  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:44:56.172265  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:44:56.172390  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:44:56.268437  120744 ssh_runner.go:195] Run: systemctl --version
	I0617 10:44:56.274781  120744 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 10:44:57.046751  120744 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 10:44:57.052913  120744 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 10:44:57.052990  120744 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 10:44:57.069060  120744 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 10:44:57.069087  120744 start.go:494] detecting cgroup driver to use...
	I0617 10:44:57.069159  120744 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 10:44:57.087645  120744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 10:44:57.101481  120744 docker.go:217] disabling cri-docker service (if available) ...
	I0617 10:44:57.101553  120744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 10:44:57.115242  120744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 10:44:57.128420  120744 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 10:44:57.253200  120744 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 10:44:57.393662  120744 docker.go:233] disabling docker service ...
	I0617 10:44:57.393755  120744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 10:44:57.408156  120744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 10:44:57.421104  120744 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 10:44:57.563098  120744 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 10:44:57.682462  120744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 10:44:57.696551  120744 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 10:44:57.714563  120744 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0617 10:44:57.714625  120744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 10:44:57.724700  120744 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 10:44:57.724764  120744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 10:44:57.735224  120744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 10:44:57.745962  120744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 10:44:57.757360  120744 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 10:44:57.768601  120744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 10:44:57.779979  120744 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 10:44:57.796928  120744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
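Taken together, the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so that the pause image is pinned to registry.k8s.io/pause:3.9, the cgroup manager is cgroupfs, conmon runs in the "pod" cgroup, and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls. Reconstructed from those sed patterns (the table headers shown are CRI-O's standard ones and may not match the drop-in's exact layout), the relevant fragment ends up roughly like:

[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]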
	I0617 10:44:57.807779  120744 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 10:44:57.817578  120744 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 10:44:57.817642  120744 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 10:44:57.832079  120744 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 10:44:57.841788  120744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 10:44:57.956248  120744 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0617 10:44:58.097349  120744 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 10:44:58.097433  120744 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
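After restarting crio, start-up waits up to 60 seconds for the CRI socket to appear before probing crictl. A minimal sketch of that wait, assuming a plain stat-style poll (the socket path comes from the log; the poll interval is a guess):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}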
	I0617 10:44:58.102260  120744 start.go:562] Will wait 60s for crictl version
	I0617 10:44:58.102312  120744 ssh_runner.go:195] Run: which crictl
	I0617 10:44:58.106040  120744 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 10:44:58.148483  120744 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 10:44:58.148590  120744 ssh_runner.go:195] Run: crio --version
	I0617 10:44:58.176834  120744 ssh_runner.go:195] Run: crio --version
	I0617 10:44:58.205310  120744 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0617 10:44:58.206553  120744 main.go:141] libmachine: (addons-465706) Calling .GetIP
	I0617 10:44:58.209081  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:58.209439  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:44:58.209461  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:58.209697  120744 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0617 10:44:58.213798  120744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 10:44:58.226993  120744 kubeadm.go:877] updating cluster {Name:addons-465706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
1 ClusterName:addons-465706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 10:44:58.227111  120744 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 10:44:58.227155  120744 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 10:44:58.260394  120744 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0617 10:44:58.260462  120744 ssh_runner.go:195] Run: which lz4
	I0617 10:44:58.264641  120744 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0617 10:44:58.268916  120744 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0617 10:44:58.268958  120744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0617 10:44:59.558872  120744 crio.go:462] duration metric: took 1.294255889s to copy over tarball
	I0617 10:44:59.558957  120744 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0617 10:45:01.763271  120744 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.204280076s)
	I0617 10:45:01.763309  120744 crio.go:469] duration metric: took 2.204402067s to extract the tarball
	I0617 10:45:01.763318  120744 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0617 10:45:01.800889  120744 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 10:45:01.846155  120744 crio.go:514] all images are preloaded for cri-o runtime.
	I0617 10:45:01.846179  120744 cache_images.go:84] Images are preloaded, skipping loading
	I0617 10:45:01.846187  120744 kubeadm.go:928] updating node { 192.168.39.165 8443 v1.30.1 crio true true} ...
	I0617 10:45:01.846322  120744 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-465706 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.165
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:addons-465706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 10:45:01.846407  120744 ssh_runner.go:195] Run: crio config
	I0617 10:45:01.889301  120744 cni.go:84] Creating CNI manager for ""
	I0617 10:45:01.889321  120744 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 10:45:01.889329  120744 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 10:45:01.889354  120744 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.165 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-465706 NodeName:addons-465706 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.165 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0617 10:45:01.889488  120744 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.165
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-465706"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.165
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.165"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0617 10:45:01.889547  120744 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0617 10:45:01.899332  120744 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 10:45:01.899386  120744 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 10:45:01.908503  120744 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0617 10:45:01.924576  120744 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 10:45:01.939875  120744 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0617 10:45:01.955318  120744 ssh_runner.go:195] Run: grep 192.168.39.165	control-plane.minikube.internal$ /etc/hosts
	I0617 10:45:01.958964  120744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.165	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 10:45:01.970306  120744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 10:45:02.078869  120744 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 10:45:02.095081  120744 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706 for IP: 192.168.39.165
	I0617 10:45:02.095101  120744 certs.go:194] generating shared ca certs ...
	I0617 10:45:02.095121  120744 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 10:45:02.095269  120744 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 10:45:02.166004  120744 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt ...
	I0617 10:45:02.166030  120744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt: {Name:mk05ceef74d4e62a72ea6e2eabb3e54836b27d2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 10:45:02.166204  120744 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key ...
	I0617 10:45:02.166220  120744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key: {Name:mk11edc4b54cd52f43e67d5f64d42e9343208d3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 10:45:02.166313  120744 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 10:45:02.235082  120744 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt ...
	I0617 10:45:02.235106  120744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt: {Name:mk414317308fd29ba2839574d731c10f47cab583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 10:45:02.235260  120744 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key ...
	I0617 10:45:02.235274  120744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key: {Name:mkb0fa198bc59f19bbe87709d3288e46a91894f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 10:45:02.235359  120744 certs.go:256] generating profile certs ...
	I0617 10:45:02.235416  120744 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.key
	I0617 10:45:02.235431  120744 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt with IP's: []
	I0617 10:45:02.481958  120744 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt ...
	I0617 10:45:02.481989  120744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: {Name:mk7c0a709e2e60ab172552160940e7190242fe69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 10:45:02.482144  120744 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.key ...
	I0617 10:45:02.482158  120744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.key: {Name:mkf5662ff64783b6014f1a106cf9b260e3453f81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 10:45:02.482220  120744 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/apiserver.key.5bd6be71
	I0617 10:45:02.482239  120744 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/apiserver.crt.5bd6be71 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.165]
	I0617 10:45:02.650228  120744 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/apiserver.crt.5bd6be71 ...
	I0617 10:45:02.650278  120744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/apiserver.crt.5bd6be71: {Name:mk42301b652784175eb87b0efaaae0c04bf791cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 10:45:02.650446  120744 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/apiserver.key.5bd6be71 ...
	I0617 10:45:02.650460  120744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/apiserver.key.5bd6be71: {Name:mka5edf10fca968b70b46b8f727dbbb6d8d96511 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 10:45:02.650527  120744 certs.go:381] copying /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/apiserver.crt.5bd6be71 -> /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/apiserver.crt
	I0617 10:45:02.650595  120744 certs.go:385] copying /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/apiserver.key.5bd6be71 -> /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/apiserver.key
	I0617 10:45:02.650639  120744 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/proxy-client.key
	I0617 10:45:02.650658  120744 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/proxy-client.crt with IP's: []
	I0617 10:45:02.692572  120744 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/proxy-client.crt ...
	I0617 10:45:02.692600  120744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/proxy-client.crt: {Name:mk908d43fb4d6a603e83e03920c7fc46fe3cf47c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 10:45:02.692733  120744 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/proxy-client.key ...
	I0617 10:45:02.692750  120744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/proxy-client.key: {Name:mkc706b1e3d2f91e03959ac5236a603305db9e4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 10:45:02.692910  120744 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 10:45:02.692946  120744 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 10:45:02.692969  120744 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 10:45:02.692990  120744 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 10:45:02.693603  120744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 10:45:02.720826  120744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 10:45:02.744656  120744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 10:45:02.772815  120744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 10:45:02.795374  120744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0617 10:45:02.819047  120744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 10:45:02.847757  120744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 10:45:02.872184  120744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0617 10:45:02.896804  120744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 10:45:02.920621  120744 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 10:45:02.937579  120744 ssh_runner.go:195] Run: openssl version
	I0617 10:45:02.943712  120744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 10:45:02.955080  120744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 10:45:02.959807  120744 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 10:45:03.079795  120744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 10:45:03.087035  120744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 10:45:03.098957  120744 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 10:45:03.103417  120744 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0617 10:45:03.103497  120744 kubeadm.go:391] StartCluster: {Name:addons-465706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-465706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
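The StartCluster config above describes the profile under test. The test harness builds it programmatically, but a roughly equivalent command-line invocation (a sketch, with flag values read off the config dump rather than taken from the harness) would be:

    minikube start -p addons-465706 \
      --driver=kvm2 \
      --container-runtime=crio \
      --kubernetes-version=v1.30.1 \
      --memory=4000 --cpus=2 --disk-size=20000mb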
	I0617 10:45:03.103576  120744 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 10:45:03.103623  120744 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 10:45:03.147141  120744 cri.go:89] found id: ""
	I0617 10:45:03.147222  120744 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0617 10:45:03.157784  120744 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 10:45:03.168596  120744 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 10:45:03.180831  120744 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 10:45:03.180855  120744 kubeadm.go:156] found existing configuration files:
	
	I0617 10:45:03.180901  120744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 10:45:03.190069  120744 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 10:45:03.190135  120744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 10:45:03.199654  120744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 10:45:03.209068  120744 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 10:45:03.209117  120744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 10:45:03.218464  120744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 10:45:03.227534  120744 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 10:45:03.227584  120744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 10:45:03.236887  120744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 10:45:03.245812  120744 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 10:45:03.245859  120744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 10:45:03.255195  120744 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0617 10:45:03.312146  120744 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0617 10:45:03.312238  120744 kubeadm.go:309] [preflight] Running pre-flight checks
	I0617 10:45:03.455955  120744 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0617 10:45:03.456084  120744 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0617 10:45:03.456197  120744 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0617 10:45:03.683404  120744 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0617 10:45:03.883911  120744 out.go:204]   - Generating certificates and keys ...
	I0617 10:45:03.884040  120744 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0617 10:45:03.884120  120744 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0617 10:45:03.884222  120744 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0617 10:45:03.913034  120744 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0617 10:45:04.148606  120744 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0617 10:45:04.227813  120744 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0617 10:45:04.293558  120744 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0617 10:45:04.293757  120744 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-465706 localhost] and IPs [192.168.39.165 127.0.0.1 ::1]
	I0617 10:45:04.399986  120744 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0617 10:45:04.400140  120744 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-465706 localhost] and IPs [192.168.39.165 127.0.0.1 ::1]
	I0617 10:45:04.771017  120744 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0617 10:45:04.877740  120744 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0617 10:45:05.021969  120744 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0617 10:45:05.022112  120744 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0617 10:45:05.396115  120744 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0617 10:45:05.602440  120744 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0617 10:45:05.745035  120744 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0617 10:45:05.968827  120744 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0617 10:45:06.278574  120744 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0617 10:45:06.279088  120744 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0617 10:45:06.281441  120744 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0617 10:45:06.283343  120744 out.go:204]   - Booting up control plane ...
	I0617 10:45:06.283434  120744 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0617 10:45:06.283509  120744 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0617 10:45:06.283574  120744 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0617 10:45:06.297749  120744 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0617 10:45:06.300613  120744 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0617 10:45:06.300806  120744 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0617 10:45:06.424104  120744 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0617 10:45:06.424223  120744 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0617 10:45:07.425125  120744 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001614224s
	I0617 10:45:07.425216  120744 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0617 10:45:12.426804  120744 kubeadm.go:309] [api-check] The API server is healthy after 5.001996276s
	I0617 10:45:12.442714  120744 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0617 10:45:12.454111  120744 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0617 10:45:12.478741  120744 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0617 10:45:12.479001  120744 kubeadm.go:309] [mark-control-plane] Marking the node addons-465706 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0617 10:45:12.489912  120744 kubeadm.go:309] [bootstrap-token] Using token: 9a03xm.uzy79wsae0xvy118
	I0617 10:45:12.491274  120744 out.go:204]   - Configuring RBAC rules ...
	I0617 10:45:12.491362  120744 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0617 10:45:12.499494  120744 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0617 10:45:12.505342  120744 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0617 10:45:12.508264  120744 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0617 10:45:12.511032  120744 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0617 10:45:12.513720  120744 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0617 10:45:12.831896  120744 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0617 10:45:13.266355  120744 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0617 10:45:13.836581  120744 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0617 10:45:13.836617  120744 kubeadm.go:309] 
	I0617 10:45:13.836688  120744 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0617 10:45:13.836702  120744 kubeadm.go:309] 
	I0617 10:45:13.836825  120744 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0617 10:45:13.836835  120744 kubeadm.go:309] 
	I0617 10:45:13.836879  120744 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0617 10:45:13.836980  120744 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0617 10:45:13.837069  120744 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0617 10:45:13.837079  120744 kubeadm.go:309] 
	I0617 10:45:13.837147  120744 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0617 10:45:13.837156  120744 kubeadm.go:309] 
	I0617 10:45:13.837210  120744 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0617 10:45:13.837220  120744 kubeadm.go:309] 
	I0617 10:45:13.837288  120744 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0617 10:45:13.837370  120744 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0617 10:45:13.837464  120744 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0617 10:45:13.837475  120744 kubeadm.go:309] 
	I0617 10:45:13.837576  120744 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0617 10:45:13.837672  120744 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0617 10:45:13.837681  120744 kubeadm.go:309] 
	I0617 10:45:13.837786  120744 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 9a03xm.uzy79wsae0xvy118 \
	I0617 10:45:13.837920  120744 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a750c130b3df91ed6d57229f5a5d5a2ee0acd56a757f499599f368bc07dbf207 \
	I0617 10:45:13.837959  120744 kubeadm.go:309] 	--control-plane 
	I0617 10:45:13.837969  120744 kubeadm.go:309] 
	I0617 10:45:13.838065  120744 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0617 10:45:13.838073  120744 kubeadm.go:309] 
	I0617 10:45:13.838180  120744 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 9a03xm.uzy79wsae0xvy118 \
	I0617 10:45:13.838304  120744 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a750c130b3df91ed6d57229f5a5d5a2ee0acd56a757f499599f368bc07dbf207 
	I0617 10:45:13.838529  120744 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
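The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed on the control plane with the standard kubeadm recipe (a sketch; the cert path follows the certificateDir /var/lib/minikube/certs shown earlier rather than the default /etc/kubernetes/pki):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'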
	I0617 10:45:13.838601  120744 cni.go:84] Creating CNI manager for ""
	I0617 10:45:13.838618  120744 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 10:45:13.840239  120744 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0617 10:45:13.841661  120744 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0617 10:45:13.852360  120744 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
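The 496-byte 1-k8s.conflist written above is minikube's bridge CNI configuration. Its exact contents are not echoed in the log; a representative bridge + portmap conflist (illustrative values only, not the file itself) looks roughly like:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }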
	I0617 10:45:13.870460  120744 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0617 10:45:13.870520  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:13.870576  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-465706 minikube.k8s.io/updated_at=2024_06_17T10_45_13_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6 minikube.k8s.io/name=addons-465706 minikube.k8s.io/primary=true
	I0617 10:45:13.980447  120744 ops.go:34] apiserver oom_adj: -16
	I0617 10:45:13.980522  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:14.481462  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:14.980678  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:15.481152  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:15.980649  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:16.481345  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:16.980648  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:17.481591  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:17.980612  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:18.481523  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:18.981455  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:19.481527  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:19.980752  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:20.481015  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:20.981371  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:21.480673  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:21.980816  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:22.481586  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:22.980872  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:23.481189  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:23.980900  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:24.480683  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:24.980975  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:25.481022  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:25.981046  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:26.480991  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:26.981019  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:27.480820  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:27.581312  120744 kubeadm.go:1107] duration metric: took 13.710837272s to wait for elevateKubeSystemPrivileges
	W0617 10:45:27.581373  120744 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0617 10:45:27.581387  120744 kubeadm.go:393] duration metric: took 24.477895643s to StartCluster
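The burst of `kubectl get sa default` calls above is a readiness poll: the elevateKubeSystemPrivileges step retries roughly every 500ms until the default ServiceAccount exists before StartCluster is considered done. An equivalent shell-level wait (a sketch, not the harness's own code) would be:

    until sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done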
	I0617 10:45:27.581413  120744 settings.go:142] acquiring lock: {Name:mkf6da6d5dcdf32cef469c2b75da17d11fa1e39e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 10:45:27.581583  120744 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 10:45:27.581983  120744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/kubeconfig: {Name:mkf81bd1831c0194f784e5c176b265c5061bea5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 10:45:27.582247  120744 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0617 10:45:27.582261  120744 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 10:45:27.584685  120744 out.go:177] * Verifying Kubernetes components...
	I0617 10:45:27.582339  120744 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0617 10:45:27.582479  120744 config.go:182] Loaded profile config "addons-465706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
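The toEnable map above lists the addons this test requests for the profile. Outside the harness, the same set can be toggled per profile with the addons subcommand, e.g. (a sketch):

    minikube -p addons-465706 addons enable metrics-server
    minikube -p addons-465706 addons enable ingress
    minikube -p addons-465706 addons list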
	I0617 10:45:27.585906  120744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 10:45:27.585918  120744 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-465706"
	I0617 10:45:27.585925  120744 addons.go:69] Setting ingress-dns=true in profile "addons-465706"
	I0617 10:45:27.585935  120744 addons.go:69] Setting yakd=true in profile "addons-465706"
	I0617 10:45:27.585957  120744 addons.go:234] Setting addon ingress-dns=true in "addons-465706"
	I0617 10:45:27.585963  120744 addons.go:69] Setting inspektor-gadget=true in profile "addons-465706"
	I0617 10:45:27.585971  120744 addons.go:69] Setting storage-provisioner=true in profile "addons-465706"
	I0617 10:45:27.585982  120744 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-465706"
	I0617 10:45:27.585990  120744 addons.go:69] Setting metrics-server=true in profile "addons-465706"
	I0617 10:45:27.585990  120744 addons.go:69] Setting default-storageclass=true in profile "addons-465706"
	I0617 10:45:27.585985  120744 addons.go:69] Setting gcp-auth=true in profile "addons-465706"
	I0617 10:45:27.586017  120744 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-465706"
	I0617 10:45:27.586020  120744 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-465706"
	I0617 10:45:27.586024  120744 addons.go:69] Setting ingress=true in profile "addons-465706"
	I0617 10:45:27.586027  120744 addons.go:69] Setting volcano=true in profile "addons-465706"
	I0617 10:45:27.586035  120744 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-465706"
	I0617 10:45:27.586041  120744 addons.go:234] Setting addon ingress=true in "addons-465706"
	I0617 10:45:27.585966  120744 addons.go:69] Setting registry=true in profile "addons-465706"
	I0617 10:45:27.586046  120744 addons.go:234] Setting addon volcano=true in "addons-465706"
	I0617 10:45:27.586050  120744 addons.go:69] Setting volumesnapshots=true in profile "addons-465706"
	I0617 10:45:27.586058  120744 addons.go:234] Setting addon registry=true in "addons-465706"
	I0617 10:45:27.586064  120744 host.go:66] Checking if "addons-465706" exists ...
	I0617 10:45:27.586067  120744 host.go:66] Checking if "addons-465706" exists ...
	I0617 10:45:27.586072  120744 addons.go:234] Setting addon volumesnapshots=true in "addons-465706"
	I0617 10:45:27.586087  120744 host.go:66] Checking if "addons-465706" exists ...
	I0617 10:45:27.586096  120744 host.go:66] Checking if "addons-465706" exists ...
	I0617 10:45:27.586111  120744 host.go:66] Checking if "addons-465706" exists ...
	I0617 10:45:27.585957  120744 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-465706"
	I0617 10:45:27.586213  120744 host.go:66] Checking if "addons-465706" exists ...
	I0617 10:45:27.586543  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.586556  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.586037  120744 mustload.go:65] Loading cluster: addons-465706
	I0617 10:45:27.586584  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.586590  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.586610  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.586675  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.586543  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.586721  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.586745  120744 config.go:182] Loaded profile config "addons-465706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 10:45:27.585908  120744 addons.go:69] Setting cloud-spanner=true in profile "addons-465706"
	I0617 10:45:27.586781  120744 addons.go:234] Setting addon cloud-spanner=true in "addons-465706"
	I0617 10:45:27.586012  120744 addons.go:234] Setting addon storage-provisioner=true in "addons-465706"
	I0617 10:45:27.585984  120744 addons.go:234] Setting addon inspektor-gadget=true in "addons-465706"
	I0617 10:45:27.585960  120744 addons.go:234] Setting addon yakd=true in "addons-465706"
	I0617 10:45:27.586012  120744 host.go:66] Checking if "addons-465706" exists ...
	I0617 10:45:27.586832  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.586018  120744 addons.go:69] Setting helm-tiller=true in profile "addons-465706"
	I0617 10:45:27.586865  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.586873  120744 addons.go:234] Setting addon helm-tiller=true in "addons-465706"
	I0617 10:45:27.586949  120744 host.go:66] Checking if "addons-465706" exists ...
	I0617 10:45:27.587071  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.587071  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.587087  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.587092  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.586013  120744 addons.go:234] Setting addon metrics-server=true in "addons-465706"
	I0617 10:45:27.587165  120744 host.go:66] Checking if "addons-465706" exists ...
	I0617 10:45:27.586544  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.587195  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.586541  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.587244  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.587265  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.586036  120744 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-465706"
	I0617 10:45:27.587293  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.587513  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.587532  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.587576  120744 host.go:66] Checking if "addons-465706" exists ...
	I0617 10:45:27.587615  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.587633  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.587807  120744 host.go:66] Checking if "addons-465706" exists ...
	I0617 10:45:27.587864  120744 host.go:66] Checking if "addons-465706" exists ...
	I0617 10:45:27.588656  120744 host.go:66] Checking if "addons-465706" exists ...
	I0617 10:45:27.589033  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.589065  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.606392  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39971
	I0617 10:45:27.606409  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37481
	I0617 10:45:27.606756  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46713
	I0617 10:45:27.606518  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35287
	I0617 10:45:27.607184  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.607374  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.607526  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.607647  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.608153  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.608167  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.608179  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.608184  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.608260  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36011
	I0617 10:45:27.608260  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.608553  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.608557  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.608744  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.608979  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.609231  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.609306  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.609327  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.609388  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.609431  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.609450  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.609466  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.609605  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.609626  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.609698  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.609912  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.619954  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.619994  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.620409  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.620460  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.621281  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.621329  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.623557  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.623615  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.623724  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.623773  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.619958  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.623808  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.627687  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46595
	I0617 10:45:27.628191  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.628745  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.628765  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.629249  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.629712  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.631582  120744 host.go:66] Checking if "addons-465706" exists ...
	I0617 10:45:27.631998  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.632047  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.655714  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43883
	I0617 10:45:27.656298  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.656842  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.656862  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.657305  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.657988  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.658018  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.660351  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42157
	I0617 10:45:27.661026  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.661725  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.661743  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.662160  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.662747  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.662773  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.663027  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34767
	I0617 10:45:27.663528  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.663956  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.663971  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.664337  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.664908  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.664951  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.666009  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40531
	I0617 10:45:27.666555  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.667073  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.667097  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.667562  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.667793  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.670988  120744 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-465706"
	I0617 10:45:27.671028  120744 host.go:66] Checking if "addons-465706" exists ...
	I0617 10:45:27.671267  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.671301  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.673896  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44161
	I0617 10:45:27.674706  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.674761  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42641
	I0617 10:45:27.675364  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.675382  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.675512  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.675971  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.676063  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.676080  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.676162  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.676494  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.676725  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.679238  120744 addons.go:234] Setting addon default-storageclass=true in "addons-465706"
	I0617 10:45:27.679281  120744 host.go:66] Checking if "addons-465706" exists ...
	I0617 10:45:27.679561  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:45:27.679821  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40973
	I0617 10:45:27.679966  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:27.679976  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:27.680354  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.680381  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.682261  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32973
	I0617 10:45:27.682391  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43593
	I0617 10:45:27.682494  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.682581  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:27.682611  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:27.682621  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:27.682632  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:27.682640  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:27.683022  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.683120  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:27.683142  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:27.683151  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:27.683223  120744 main.go:141] libmachine: () Calling .GetVersion
	W0617 10:45:27.683239  120744 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0617 10:45:27.683392  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38569
	I0617 10:45:27.683578  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.683603  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.683897  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.683907  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.683918  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.683925  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.683991  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45749
	I0617 10:45:27.684110  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.684263  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.684325  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.684518  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.684791  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.684805  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.684904  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.684929  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.685126  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.685157  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.685464  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.685484  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.685533  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42701
	I0617 10:45:27.685869  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.685930  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.685945  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.685946  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40807
	I0617 10:45:27.686064  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.686231  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.686325  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.686372  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.686724  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.686743  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.686880  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.686892  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.687001  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.687038  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.687510  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.687572  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.687698  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.689569  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:45:27.689626  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:45:27.691713  120744 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0617 10:45:27.690342  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:45:27.691224  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:45:27.691872  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:45:27.692697  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45069
	I0617 10:45:27.692853  120744 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0617 10:45:27.693044  120744 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0617 10:45:27.693066  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:45:27.693100  120744 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0617 10:45:27.694292  120744 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0617 10:45:27.695574  120744 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0617 10:45:27.695538  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.695545  120744 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0617 10:45:27.698734  120744 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0617 10:45:27.697297  120744 out.go:177]   - Using image docker.io/registry:2.8.3
	I0617 10:45:27.697362  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.697495  120744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0617 10:45:27.697506  120744 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0617 10:45:27.698105  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:45:27.698536  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.699877  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.700187  120744 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0617 10:45:27.700200  120744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0617 10:45:27.700217  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:45:27.700305  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:45:27.700327  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:45:27.701317  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:45:27.701360  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.701362  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39925
	I0617 10:45:27.702544  120744 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0617 10:45:27.702566  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.703174  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38193
	I0617 10:45:27.703199  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38513
	I0617 10:45:27.704263  120744 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0617 10:45:27.704275  120744 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0617 10:45:27.704296  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:45:27.705216  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:45:27.706668  120744 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0617 10:45:27.706681  120744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0617 10:45:27.705530  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.706699  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:45:27.705985  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.706028  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.706751  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.708208  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.708196  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:45:27.708354  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.708369  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.708505  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.708516  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.708963  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.709035  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.709086  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40993
	I0617 10:45:27.709621  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.709658  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.709709  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.709737  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:45:27.709759  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.709800  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:45:27.709836  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.710137  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:45:27.710158  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:45:27.710177  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.710205  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:45:27.710447  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:45:27.710479  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.710504  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:45:27.710669  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:45:27.712313  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.712330  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:45:27.712346  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:45:27.712504  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:45:27.712523  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:45:27.712645  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:45:27.712669  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:45:27.712707  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:45:27.712722  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.712786  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:45:27.713028  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:45:27.713067  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:45:27.713259  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:45:27.713475  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:45:27.713695  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.713738  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.713763  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:45:27.714276  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.714297  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.714438  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.714457  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.714675  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.714820  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.715221  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.715248  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.715897  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.715932  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.717406  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45003
	I0617 10:45:27.717835  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.717895  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42673
	I0617 10:45:27.718011  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42685
	I0617 10:45:27.718284  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.718381  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46471
	I0617 10:45:27.718439  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.718514  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.718529  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.718934  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.718952  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.718987  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.719004  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.719024  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.719474  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.719501  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.719651  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.719866  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.720171  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.720366  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.721286  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.721600  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.721966  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:45:27.723814  120744 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.29.0
	I0617 10:45:27.724889  120744 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0617 10:45:27.722983  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:45:27.724916  120744 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0617 10:45:27.724937  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:45:27.723935  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:45:27.724017  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.728391  120744 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 10:45:27.728431  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.729588  120744 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 10:45:27.729603  120744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0617 10:45:27.729604  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:45:27.729621  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:45:27.729627  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.725886  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.729664  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.728467  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40983
	I0617 10:45:27.731477  120744 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0617 10:45:27.729077  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:45:27.730608  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.732701  120744 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0617 10:45:27.732715  120744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0617 10:45:27.732733  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:45:27.732883  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:45:27.733108  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.733136  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:45:27.733322  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:45:27.733446  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.733462  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.733993  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.733949  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:45:27.734026  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.734276  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.734513  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:45:27.735070  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:45:27.735261  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:45:27.735776  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:45:27.736398  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40871
	I0617 10:45:27.736614  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:45:27.738113  120744 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0617 10:45:27.736975  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.737031  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.737584  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:45:27.739350  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:45:27.739371  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.740738  120744 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0617 10:45:27.739557  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:45:27.739955  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.741753  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.743089  120744 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0617 10:45:27.741992  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:45:27.742358  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.745794  120744 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0617 10:45:27.746894  120744 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0617 10:45:27.744853  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.745029  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:45:27.745407  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37519
	I0617 10:45:27.746011  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46855
	I0617 10:45:27.749466  120744 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0617 10:45:27.748431  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.749197  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.749984  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:45:27.750429  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41207
	I0617 10:45:27.752071  120744 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0617 10:45:27.751187  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.751245  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.751647  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.753053  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37493
	I0617 10:45:27.753363  120744 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0617 10:45:27.754493  120744 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0617 10:45:27.754508  120744 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0617 10:45:27.754522  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:45:27.753436  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.755945  120744 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0617 10:45:27.753450  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.753751  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.754024  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.754932  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.757110  120744 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0617 10:45:27.757119  120744 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0617 10:45:27.757133  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:45:27.757194  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.757286  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.757534  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.757747  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.757766  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.757826  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.758211  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.758214  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.758219  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.758469  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.758490  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.758809  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:45:27.758827  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.759004  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:45:27.759188  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:45:27.759319  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:45:27.759486  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:45:27.760785  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:45:27.760956  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:45:27.762385  120744 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0617 10:45:27.761330  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:45:27.761468  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:45:27.762471  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.763137  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:45:27.763500  120744 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0617 10:45:27.764612  120744 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0617 10:45:27.764628  120744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0617 10:45:27.763579  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:45:27.763698  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:45:27.764668  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.763743  120744 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0617 10:45:27.764686  120744 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0617 10:45:27.764700  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:45:27.765807  120744 out.go:177]   - Using image docker.io/busybox:stable
	I0617 10:45:27.763938  120744 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0617 10:45:27.764640  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:45:27.764880  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:45:27.765827  120744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0617 10:45:27.766933  120744 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0617 10:45:27.766941  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:45:27.768081  120744 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0617 10:45:27.767129  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:45:27.768093  120744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0617 10:45:27.768112  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:45:27.767578  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.768171  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:45:27.768188  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.768215  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:45:27.768376  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:45:27.768545  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:45:27.768691  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:45:27.771405  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.771598  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.771781  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:45:27.771799  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.772002  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:45:27.772024  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:45:27.772039  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.772175  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:45:27.772190  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:45:27.772309  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:45:27.772350  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:45:27.772493  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:45:27.772513  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:45:27.772623  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:45:27.773521  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.773878  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:45:27.773898  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.774151  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:45:27.774302  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:45:27.774488  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:45:27.774594  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	W0617 10:45:27.805981  120744 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:52944->192.168.39.165:22: read: connection reset by peer
	I0617 10:45:27.806036  120744 retry.go:31] will retry after 281.511115ms: ssh: handshake failed: read tcp 192.168.39.1:52944->192.168.39.165:22: read: connection reset by peer
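[Editor's note] The two lines above show one of the concurrent addon goroutines hitting a transient SSH handshake failure and scheduling a retry with a randomized delay. The sketch below is a generic illustration of that retry-with-backoff pattern only; the function name and parameters are hypothetical and it is not minikube's actual retry helper.

// retrysketch.go — minimal sketch of retry with exponential backoff and jitter,
// assuming nothing beyond the standard library.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs op up to attempts times, sleeping longer after each failure.
func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		// Exponential backoff plus jitter, so concurrent dials (one per addon
		// goroutine, as in the log above) do not retry in lockstep.
		sleep := time.Duration(1<<uint(i))*base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return err
}

func main() {
	calls := 0
	err := retryWithBackoff(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("ssh: handshake failed: connection reset by peer")
		}
		return nil
	})
	fmt.Println("result:", err)
}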
	I0617 10:45:28.118610  120744 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0617 10:45:28.118646  120744 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0617 10:45:28.135215  120744 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0617 10:45:28.135236  120744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0617 10:45:28.139363  120744 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 10:45:28.139437  120744 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0617 10:45:28.159781  120744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0617 10:45:28.174650  120744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0617 10:45:28.191572  120744 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0617 10:45:28.191602  120744 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0617 10:45:28.193944  120744 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0617 10:45:28.193982  120744 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0617 10:45:28.304689  120744 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0617 10:45:28.304726  120744 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0617 10:45:28.317938  120744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0617 10:45:28.327836  120744 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0617 10:45:28.327863  120744 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0617 10:45:28.328880  120744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0617 10:45:28.335771  120744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 10:45:28.348758  120744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0617 10:45:28.349860  120744 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0617 10:45:28.349876  120744 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0617 10:45:28.364020  120744 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0617 10:45:28.364043  120744 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0617 10:45:28.381337  120744 node_ready.go:35] waiting up to 6m0s for node "addons-465706" to be "Ready" ...
	I0617 10:45:28.389046  120744 node_ready.go:49] node "addons-465706" has status "Ready":"True"
	I0617 10:45:28.389073  120744 node_ready.go:38] duration metric: took 7.677571ms for node "addons-465706" to be "Ready" ...
	I0617 10:45:28.389082  120744 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 10:45:28.406187  120744 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9sbdk" in "kube-system" namespace to be "Ready" ...
	I0617 10:45:28.489151  120744 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0617 10:45:28.489175  120744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0617 10:45:28.507761  120744 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0617 10:45:28.507796  120744 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0617 10:45:28.511907  120744 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0617 10:45:28.511935  120744 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0617 10:45:28.517099  120744 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0617 10:45:28.517124  120744 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0617 10:45:28.531200  120744 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0617 10:45:28.531222  120744 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0617 10:45:28.535806  120744 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0617 10:45:28.535828  120744 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0617 10:45:28.557639  120744 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 10:45:28.557663  120744 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0617 10:45:28.647333  120744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0617 10:45:28.691269  120744 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0617 10:45:28.691295  120744 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0617 10:45:28.705978  120744 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0617 10:45:28.706005  120744 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0617 10:45:28.741737  120744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 10:45:28.750823  120744 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0617 10:45:28.750846  120744 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0617 10:45:28.757868  120744 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0617 10:45:28.757894  120744 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0617 10:45:28.759033  120744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0617 10:45:28.761378  120744 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0617 10:45:28.761408  120744 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0617 10:45:28.868415  120744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0617 10:45:28.881735  120744 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0617 10:45:28.881771  120744 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0617 10:45:28.898382  120744 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0617 10:45:28.898409  120744 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0617 10:45:28.930274  120744 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0617 10:45:28.930295  120744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0617 10:45:28.949120  120744 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0617 10:45:28.949155  120744 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0617 10:45:29.087570  120744 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0617 10:45:29.087603  120744 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0617 10:45:29.099746  120744 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0617 10:45:29.099769  120744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0617 10:45:29.311191  120744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0617 10:45:29.375401  120744 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0617 10:45:29.375436  120744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0617 10:45:29.461092  120744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0617 10:45:29.497969  120744 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0617 10:45:29.498010  120744 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0617 10:45:29.688595  120744 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0617 10:45:29.688629  120744 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0617 10:45:29.834080  120744 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0617 10:45:29.834104  120744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0617 10:45:29.974245  120744 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0617 10:45:29.974277  120744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0617 10:45:30.074855  120744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0617 10:45:30.299271  120744 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0617 10:45:30.299299  120744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0617 10:45:30.413329  120744 pod_ready.go:102] pod "coredns-7db6d8ff4d-9sbdk" in "kube-system" namespace has status "Ready":"False"
	I0617 10:45:30.456893  120744 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.317416749s)
	I0617 10:45:30.456940  120744 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
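[Editor's note] The ~2.3 s bash pipeline whose completion is logged above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway address (192.168.39.1 in this run), falling through to the upstream resolvers for everything else. The Go sketch below only illustrates the equivalent string edit on a Corefile under an assumed sample input; it is not the code minikube runs (minikube shells out through kubectl and sed, as the log shows).

// corefilesketch.go — insert a hosts{} block ahead of the forward plugin.
package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.Split(corefile, "\n") {
		// Place the hosts block immediately before the forward directive so the
		// in-cluster name is answered locally and all other queries fall through.
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
		out.WriteString("\n")
	}
	return out.String()
}

func main() {
	sample := `.:53 {
        errors
        health
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
    }`
	fmt.Print(injectHostRecord(sample, "192.168.39.1"))
}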
	I0617 10:45:30.456956  120744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.297132023s)
	I0617 10:45:30.457028  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:30.457047  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:30.457324  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:30.457343  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:30.457358  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:30.457367  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:30.457619  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:30.457625  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:30.457655  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:30.466868  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:30.466883  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:30.467164  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:30.467209  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:30.467221  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:30.590558  120744 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0617 10:45:30.590586  120744 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0617 10:45:30.978275  120744 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-465706" context rescaled to 1 replicas
	I0617 10:45:30.985515  120744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0617 10:45:32.598810  120744 pod_ready.go:102] pod "coredns-7db6d8ff4d-9sbdk" in "kube-system" namespace has status "Ready":"False"
	I0617 10:45:34.910815  120744 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0617 10:45:34.910867  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:45:34.914216  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:34.914693  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:45:34.914720  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:34.914964  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:45:34.915208  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:45:34.915405  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:45:34.915617  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:45:35.066955  120744 pod_ready.go:102] pod "coredns-7db6d8ff4d-9sbdk" in "kube-system" namespace has status "Ready":"False"
	I0617 10:45:35.611062  120744 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0617 10:45:35.854850  120744 addons.go:234] Setting addon gcp-auth=true in "addons-465706"
	I0617 10:45:35.854922  120744 host.go:66] Checking if "addons-465706" exists ...
	I0617 10:45:35.855245  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:35.855275  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:35.870956  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44387
	I0617 10:45:35.871495  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:35.871992  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:35.872019  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:35.872370  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:35.872949  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:35.872985  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:35.889103  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33555
	I0617 10:45:35.889635  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:35.890135  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:35.890162  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:35.890548  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:35.890746  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:35.892256  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:45:35.892528  120744 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0617 10:45:35.892554  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:45:35.895733  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:35.896137  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:45:35.896163  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:35.896341  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:45:35.896547  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:45:35.896714  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:45:35.896866  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:45:35.913304  120744 pod_ready.go:92] pod "coredns-7db6d8ff4d-9sbdk" in "kube-system" namespace has status "Ready":"True"
	I0617 10:45:35.913330  120744 pod_ready.go:81] duration metric: took 7.507114128s for pod "coredns-7db6d8ff4d-9sbdk" in "kube-system" namespace to be "Ready" ...
	I0617 10:45:35.913344  120744 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mdcv2" in "kube-system" namespace to be "Ready" ...
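[Editor's note] The pod_ready.go lines above are polling system pods for the Ready condition with a 6-minute budget. Below is a minimal client-go sketch of the same check; the kubeconfig path and polling interval are assumptions for illustration, with only the namespace and coredns pod name taken from the log.

// podreadysketch.go — poll one pod until its Ready condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-7db6d8ff4d-9sbdk", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}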
	I0617 10:45:36.232000  120744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.057299612s)
	I0617 10:45:36.232054  120744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.914065757s)
	I0617 10:45:36.232065  120744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.903158489s)
	I0617 10:45:36.232098  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.232061  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.232110  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.232118  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.232135  120744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.896345286s)
	I0617 10:45:36.232159  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.232167  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.232097  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.232184  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.232233  120744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.88344245s)
	I0617 10:45:36.232293  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.232310  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.232363  120744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.490599897s)
	I0617 10:45:36.232395  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.232407  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.232421  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.232450  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.232458  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.232465  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.232472  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.232489  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.232512  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.232517  120744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.473423608s)
	I0617 10:45:36.232534  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.232536  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.232544  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.232559  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.232567  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.232571  120744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.364116134s)
	I0617 10:45:36.232593  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.232603  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.232623  120744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.9213945s)
	I0617 10:45:36.232642  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.232574  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.232650  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.232701  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.232709  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.232772  120744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.771637568s)
	W0617 10:45:36.232803  120744 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0617 10:45:36.232846  120744 retry.go:31] will retry after 340.601669ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
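[Editor's note] The failure above is the usual CRD ordering race: the volumesnapshot CRDs are created in the same kubectl apply batch that also contains a VolumeSnapshotClass object, and the API server is not yet serving the new kind when that object is validated, so the REST mapping lookup fails with "no matches for kind" and minikube falls back to retrying the whole apply (retry.go line above). The sketch below shows one way to avoid the race by applying the CRDs first and waiting for them to be Established; the manifest paths are taken from the log, but the two-phase approach itself is illustrative, not what minikube does (it simply retries).

// crdordersketch.go — apply CRDs, wait for Established, then apply dependent resources.
package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) error {
	cmd := exec.Command("kubectl", args...)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	crds := []string{
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml",
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml",
	}
	for _, f := range crds {
		if err := kubectl("apply", "-f", f); err != nil {
			panic(err)
		}
	}
	// Block until the API server actually serves the new kinds.
	if err := kubectl("wait", "--for=condition=established", "--timeout=60s",
		"crd/volumesnapshotclasses.snapshot.storage.k8s.io",
		"crd/volumesnapshotcontents.snapshot.storage.k8s.io",
		"crd/volumesnapshots.snapshot.storage.k8s.io"); err != nil {
		panic(err)
	}
	// Now the VolumeSnapshotClass and the snapshot controller apply cleanly.
	if err := kubectl("apply",
		"-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
		"-f", "/etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml",
		"-f", "/etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml"); err != nil {
		panic(err)
	}
}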
	I0617 10:45:36.232927  120744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.158035947s)
	I0617 10:45:36.232946  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.232954  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.233023  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.233039  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.233060  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.233069  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.233076  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.233082  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.233124  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.233130  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.233138  120744 addons.go:475] Verifying addon ingress=true in "addons-465706"
	I0617 10:45:36.236450  120744 out.go:177] * Verifying ingress addon...
	I0617 10:45:36.233776  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.233806  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.237923  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.233829  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.233846  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.233863  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.233879  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.233895  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.233944  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.232293  120744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.584929122s)
	I0617 10:45:36.235205  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.235239  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.236139  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.236168  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.236197  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.236214  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.236725  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.238005  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.238012  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.237999  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.238030  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.239508  120744 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-465706 service yakd-dashboard -n yakd-dashboard
	
	I0617 10:45:36.238061  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.238070  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.238075  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.238081  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.238087  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.238020  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.238088  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.238259  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.238292  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.238886  120744 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0617 10:45:36.240641  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.240661  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.240673  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.240699  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.240730  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.240739  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.240745  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.240756  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.240712  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.240783  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.240783  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.240794  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.241260  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.241262  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.241263  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.241272  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.241275  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.241285  120744 addons.go:475] Verifying addon metrics-server=true in "addons-465706"
	I0617 10:45:36.241299  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.241302  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.241305  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.241309  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.242753  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.242763  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.242770  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.242780  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.242795  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.242804  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.242806  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.242808  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.242812  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.242813  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.242822  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.242827  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.242833  120744 addons.go:475] Verifying addon registry=true in "addons-465706"
	I0617 10:45:36.244386  120744 out.go:177] * Verifying registry addon...
	I0617 10:45:36.243022  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.243025  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.243046  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.245678  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.246291  120744 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0617 10:45:36.303634  120744 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0617 10:45:36.303666  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:36.321826  120744 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0617 10:45:36.321865  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:36.338983  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.339003  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.339367  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.339393  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.339423  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.423176  120744 pod_ready.go:92] pod "coredns-7db6d8ff4d-mdcv2" in "kube-system" namespace has status "Ready":"True"
	I0617 10:45:36.423213  120744 pod_ready.go:81] duration metric: took 509.860574ms for pod "coredns-7db6d8ff4d-mdcv2" in "kube-system" namespace to be "Ready" ...
	I0617 10:45:36.423227  120744 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-465706" in "kube-system" namespace to be "Ready" ...
	I0617 10:45:36.428836  120744 pod_ready.go:92] pod "etcd-addons-465706" in "kube-system" namespace has status "Ready":"True"
	I0617 10:45:36.428867  120744 pod_ready.go:81] duration metric: took 5.631061ms for pod "etcd-addons-465706" in "kube-system" namespace to be "Ready" ...
	I0617 10:45:36.428879  120744 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-465706" in "kube-system" namespace to be "Ready" ...
	I0617 10:45:36.443592  120744 pod_ready.go:92] pod "kube-apiserver-addons-465706" in "kube-system" namespace has status "Ready":"True"
	I0617 10:45:36.443616  120744 pod_ready.go:81] duration metric: took 14.728663ms for pod "kube-apiserver-addons-465706" in "kube-system" namespace to be "Ready" ...
	I0617 10:45:36.443626  120744 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-465706" in "kube-system" namespace to be "Ready" ...
	I0617 10:45:36.448486  120744 pod_ready.go:92] pod "kube-controller-manager-addons-465706" in "kube-system" namespace has status "Ready":"True"
	I0617 10:45:36.448507  120744 pod_ready.go:81] duration metric: took 4.875331ms for pod "kube-controller-manager-addons-465706" in "kube-system" namespace to be "Ready" ...
	I0617 10:45:36.448516  120744 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v55ch" in "kube-system" namespace to be "Ready" ...
	I0617 10:45:36.574047  120744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0617 10:45:36.712751  120744 pod_ready.go:92] pod "kube-proxy-v55ch" in "kube-system" namespace has status "Ready":"True"
	I0617 10:45:36.712786  120744 pod_ready.go:81] duration metric: took 264.263656ms for pod "kube-proxy-v55ch" in "kube-system" namespace to be "Ready" ...
	I0617 10:45:36.712797  120744 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-465706" in "kube-system" namespace to be "Ready" ...
	I0617 10:45:36.745241  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:36.762725  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:37.110234  120744 pod_ready.go:92] pod "kube-scheduler-addons-465706" in "kube-system" namespace has status "Ready":"True"
	I0617 10:45:37.110257  120744 pod_ready.go:81] duration metric: took 397.453725ms for pod "kube-scheduler-addons-465706" in "kube-system" namespace to be "Ready" ...
	I0617 10:45:37.110266  120744 pod_ready.go:38] duration metric: took 8.721173099s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 10:45:37.110280  120744 api_server.go:52] waiting for apiserver process to appear ...
	I0617 10:45:37.110332  120744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 10:45:37.245281  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:37.251193  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:37.754537  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:37.755234  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:38.244321  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:38.258185  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:38.780940  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:38.817615  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:38.906171  120744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.920591698s)
	I0617 10:45:38.906185  120744 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.013630139s)
	I0617 10:45:38.906245  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:38.906260  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:38.907853  120744 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0617 10:45:38.906590  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:38.906637  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:38.908973  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:38.908986  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:38.908997  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:38.910168  120744 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0617 10:45:38.909280  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:38.909305  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:38.910215  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:38.910227  120744 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-465706"
	I0617 10:45:38.911367  120744 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0617 10:45:38.911383  120744 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0617 10:45:38.912563  120744 out.go:177] * Verifying csi-hostpath-driver addon...
	I0617 10:45:38.914185  120744 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0617 10:45:38.945943  120744 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0617 10:45:38.945969  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:39.012501  120744 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0617 10:45:39.012530  120744 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0617 10:45:39.201197  120744 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0617 10:45:39.201222  120744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0617 10:45:39.248211  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:39.252907  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:39.278358  120744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0617 10:45:39.306513  120744 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.196154115s)
	I0617 10:45:39.306556  120744 api_server.go:72] duration metric: took 11.724263153s to wait for apiserver process to appear ...
	I0617 10:45:39.306563  120744 api_server.go:88] waiting for apiserver healthz status ...
	I0617 10:45:39.306586  120744 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I0617 10:45:39.306512  120744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.732412459s)
	I0617 10:45:39.306681  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:39.306696  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:39.307105  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:39.307144  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:39.307152  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:39.307162  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:39.307173  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:39.307500  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:39.307520  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:39.311676  120744 api_server.go:279] https://192.168.39.165:8443/healthz returned 200:
	ok
	I0617 10:45:39.312607  120744 api_server.go:141] control plane version: v1.30.1
	I0617 10:45:39.312638  120744 api_server.go:131] duration metric: took 6.068088ms to wait for apiserver health ...
	I0617 10:45:39.312646  120744 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 10:45:39.321129  120744 system_pods.go:59] 19 kube-system pods found
	I0617 10:45:39.321160  120744 system_pods.go:61] "coredns-7db6d8ff4d-9sbdk" [9dced1c6-3ebc-46f8-8333-f4d8ba492a28] Running
	I0617 10:45:39.321165  120744 system_pods.go:61] "coredns-7db6d8ff4d-mdcv2" [0a081c8c-6add-484d-8269-47fd5e1bfad4] Running
	I0617 10:45:39.321172  120744 system_pods.go:61] "csi-hostpath-attacher-0" [c3a12dde-1859-4807-90f3-4e9f15f0acee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0617 10:45:39.321179  120744 system_pods.go:61] "csi-hostpath-resizer-0" [c4f5227e-f05e-4caa-a70c-c6fa84a8e6f1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0617 10:45:39.321186  120744 system_pods.go:61] "csi-hostpathplugin-2wtdq" [704705e9-4f4b-4176-be37-424df07e8f4a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0617 10:45:39.321190  120744 system_pods.go:61] "etcd-addons-465706" [04a0e18a-dd05-4e7c-a759-841095eaaab2] Running
	I0617 10:45:39.321195  120744 system_pods.go:61] "kube-apiserver-addons-465706" [667d8e02-7848-48e4-af03-2de8bc5c658a] Running
	I0617 10:45:39.321198  120744 system_pods.go:61] "kube-controller-manager-addons-465706" [9b7a2d70-e3bf-4427-b759-e638e9c8a6de] Running
	I0617 10:45:39.321205  120744 system_pods.go:61] "kube-ingress-dns-minikube" [5887752c-36aa-4a81-a049-587806fdceb7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0617 10:45:39.321211  120744 system_pods.go:61] "kube-proxy-v55ch" [fc268acf-6fc2-47f0-8a27-3909125a82fc] Running
	I0617 10:45:39.321216  120744 system_pods.go:61] "kube-scheduler-addons-465706" [11083503-dd02-46b8-a0fc-57a28057acaa] Running
	I0617 10:45:39.321223  120744 system_pods.go:61] "metrics-server-c59844bb4-n7wsl" [9cffe86c-6fa6-4955-a42c-234714e1bd11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 10:45:39.321230  120744 system_pods.go:61] "nvidia-device-plugin-daemonset-qmfbl" [6fa18993-49a4-4224-9ae5-23eebbfb150c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0617 10:45:39.321241  120744 system_pods.go:61] "registry-proxy-8jk6d" [8e3ec5f6-818e-4deb-a7b8-8c6c898c12a7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0617 10:45:39.321250  120744 system_pods.go:61] "registry-zmgvf" [779a673e-bb16-4cb8-ba45-1f77abb09f84] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0617 10:45:39.321263  120744 system_pods.go:61] "snapshot-controller-745499f584-s86dn" [597fa742-0125-4713-8630-8191b4941bb0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0617 10:45:39.321270  120744 system_pods.go:61] "snapshot-controller-745499f584-vl64l" [3b353623-6a33-4171-b47b-f89dbd7a4a9d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0617 10:45:39.321274  120744 system_pods.go:61] "storage-provisioner" [732fd3d9-47fc-45cb-a823-c926365c9ea0] Running
	I0617 10:45:39.321279  120744 system_pods.go:61] "tiller-deploy-6677d64bcd-c55qr" [b7ac1365-80b4-4f6b-956f-9c3579810596] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0617 10:45:39.321285  120744 system_pods.go:74] duration metric: took 8.634781ms to wait for pod list to return data ...
	I0617 10:45:39.321296  120744 default_sa.go:34] waiting for default service account to be created ...
	I0617 10:45:39.323853  120744 default_sa.go:45] found service account: "default"
	I0617 10:45:39.323876  120744 default_sa.go:55] duration metric: took 2.573824ms for default service account to be created ...
	I0617 10:45:39.323884  120744 system_pods.go:116] waiting for k8s-apps to be running ...
	I0617 10:45:39.332547  120744 system_pods.go:86] 19 kube-system pods found
	I0617 10:45:39.332570  120744 system_pods.go:89] "coredns-7db6d8ff4d-9sbdk" [9dced1c6-3ebc-46f8-8333-f4d8ba492a28] Running
	I0617 10:45:39.332575  120744 system_pods.go:89] "coredns-7db6d8ff4d-mdcv2" [0a081c8c-6add-484d-8269-47fd5e1bfad4] Running
	I0617 10:45:39.332583  120744 system_pods.go:89] "csi-hostpath-attacher-0" [c3a12dde-1859-4807-90f3-4e9f15f0acee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0617 10:45:39.332591  120744 system_pods.go:89] "csi-hostpath-resizer-0" [c4f5227e-f05e-4caa-a70c-c6fa84a8e6f1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0617 10:45:39.332599  120744 system_pods.go:89] "csi-hostpathplugin-2wtdq" [704705e9-4f4b-4176-be37-424df07e8f4a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0617 10:45:39.332604  120744 system_pods.go:89] "etcd-addons-465706" [04a0e18a-dd05-4e7c-a759-841095eaaab2] Running
	I0617 10:45:39.332608  120744 system_pods.go:89] "kube-apiserver-addons-465706" [667d8e02-7848-48e4-af03-2de8bc5c658a] Running
	I0617 10:45:39.332613  120744 system_pods.go:89] "kube-controller-manager-addons-465706" [9b7a2d70-e3bf-4427-b759-e638e9c8a6de] Running
	I0617 10:45:39.332622  120744 system_pods.go:89] "kube-ingress-dns-minikube" [5887752c-36aa-4a81-a049-587806fdceb7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0617 10:45:39.332631  120744 system_pods.go:89] "kube-proxy-v55ch" [fc268acf-6fc2-47f0-8a27-3909125a82fc] Running
	I0617 10:45:39.332639  120744 system_pods.go:89] "kube-scheduler-addons-465706" [11083503-dd02-46b8-a0fc-57a28057acaa] Running
	I0617 10:45:39.332651  120744 system_pods.go:89] "metrics-server-c59844bb4-n7wsl" [9cffe86c-6fa6-4955-a42c-234714e1bd11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 10:45:39.332660  120744 system_pods.go:89] "nvidia-device-plugin-daemonset-qmfbl" [6fa18993-49a4-4224-9ae5-23eebbfb150c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0617 10:45:39.332668  120744 system_pods.go:89] "registry-proxy-8jk6d" [8e3ec5f6-818e-4deb-a7b8-8c6c898c12a7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0617 10:45:39.332676  120744 system_pods.go:89] "registry-zmgvf" [779a673e-bb16-4cb8-ba45-1f77abb09f84] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0617 10:45:39.332683  120744 system_pods.go:89] "snapshot-controller-745499f584-s86dn" [597fa742-0125-4713-8630-8191b4941bb0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0617 10:45:39.332692  120744 system_pods.go:89] "snapshot-controller-745499f584-vl64l" [3b353623-6a33-4171-b47b-f89dbd7a4a9d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0617 10:45:39.332697  120744 system_pods.go:89] "storage-provisioner" [732fd3d9-47fc-45cb-a823-c926365c9ea0] Running
	I0617 10:45:39.332704  120744 system_pods.go:89] "tiller-deploy-6677d64bcd-c55qr" [b7ac1365-80b4-4f6b-956f-9c3579810596] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0617 10:45:39.332711  120744 system_pods.go:126] duration metric: took 8.821766ms to wait for k8s-apps to be running ...
	I0617 10:45:39.332722  120744 system_svc.go:44] waiting for kubelet service to be running ....
	I0617 10:45:39.332773  120744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 10:45:39.419796  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:39.746135  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:39.750925  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:39.930505  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:40.254218  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:40.255151  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:40.435816  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:40.578449  120744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.300044739s)
	I0617 10:45:40.578496  120744 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.245685483s)
	I0617 10:45:40.578518  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:40.578539  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:40.578525  120744 system_svc.go:56] duration metric: took 1.245798929s WaitForService to wait for kubelet
	I0617 10:45:40.578620  120744 kubeadm.go:576] duration metric: took 12.996323814s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 10:45:40.578646  120744 node_conditions.go:102] verifying NodePressure condition ...
	I0617 10:45:40.578875  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:40.578878  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:40.578897  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:40.578907  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:40.578915  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:40.579153  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:40.579169  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:40.579195  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:40.580467  120744 addons.go:475] Verifying addon gcp-auth=true in "addons-465706"
	I0617 10:45:40.582713  120744 out.go:177] * Verifying gcp-auth addon...
	I0617 10:45:40.584535  120744 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0617 10:45:40.595043  120744 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 10:45:40.595067  120744 node_conditions.go:123] node cpu capacity is 2
	I0617 10:45:40.595080  120744 node_conditions.go:105] duration metric: took 16.423895ms to run NodePressure ...
	I0617 10:45:40.595091  120744 start.go:240] waiting for startup goroutines ...
	I0617 10:45:40.596381  120744 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0617 10:45:40.596400  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:40.746996  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:40.755391  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:40.919576  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:41.087986  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:41.245478  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:41.251010  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:41.419710  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:41.588018  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:41.745268  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:41.750599  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:41.919428  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:42.087959  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:42.244817  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:42.251030  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:42.420594  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:42.588722  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:42.745185  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:42.750963  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:42.919082  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:43.090420  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:43.245297  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:43.250963  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:43.420525  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:43.587881  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:43.744984  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:43.750459  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:43.920814  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:44.088150  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:44.245149  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:44.250553  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:44.420541  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:44.588154  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:44.745642  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:44.751385  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:44.920686  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:45.088757  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:45.246088  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:45.251808  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:45.424788  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:45.588619  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:45.745728  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:45.750648  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:45.927830  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:46.089022  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:46.245460  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:46.250764  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:46.419377  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:46.588713  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:46.745652  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:46.751559  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:46.922003  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:47.089443  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:47.246262  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:47.250134  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:47.420986  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:47.588394  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:47.745276  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:47.749743  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:47.920247  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:48.088415  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:48.245452  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:48.251399  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:48.420906  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:48.588829  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:48.745221  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:48.750315  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:48.920406  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:49.090129  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:49.245217  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:49.251068  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:49.419909  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:49.588516  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:49.745848  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:49.751641  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:49.920322  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:50.088549  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:50.247849  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:50.254279  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:50.420557  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:50.589023  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:50.745144  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:50.750538  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:50.921903  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:51.089143  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:51.244977  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:51.250514  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:51.420811  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:51.588053  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:51.745395  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:51.750617  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:51.919711  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:52.088848  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:52.246287  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:52.251099  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:52.420829  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:52.590439  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:52.745929  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:52.751771  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:52.919911  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:53.088664  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:53.246024  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:53.251207  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:53.420242  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:53.588439  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:53.751182  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:53.759432  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:53.921029  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:54.089264  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:54.245457  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:54.253983  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:54.420570  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:54.588387  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:54.745665  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:54.751413  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:54.920908  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:55.089573  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:55.245817  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:55.251478  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:55.420671  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:55.588784  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:55.746608  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:55.750569  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:55.919998  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:56.088655  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:56.245603  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:56.250692  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:56.419965  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:56.588660  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:56.745792  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:56.750584  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:56.919372  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:57.090670  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:57.848522  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:57.849837  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:57.862149  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:57.866170  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:57.866988  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:57.868506  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:57.920262  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:58.088215  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:58.246083  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:58.253074  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:58.420371  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:58.588698  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:58.745704  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:58.750355  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:58.920355  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:59.088646  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:59.245804  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:59.251436  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:59.420706  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:59.588185  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:59.746100  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:59.750272  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:59.924088  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:00.088791  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:00.245960  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:00.251206  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:00.419768  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:00.588311  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:00.745284  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:00.753881  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:00.920226  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:01.088812  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:01.245602  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:01.252054  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:01.420388  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:01.590292  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:01.745413  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:01.751873  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:01.921281  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:02.088667  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:02.246543  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:02.251106  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:02.420244  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:02.587711  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:02.745536  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:02.750817  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:02.919799  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:03.087774  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:03.245252  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:03.250034  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:03.420348  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:03.588045  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:03.745617  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:03.750574  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:03.927298  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:04.088947  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:04.245469  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:04.250305  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:04.420741  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:04.592162  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:04.745181  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:04.760134  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:04.920084  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:05.087970  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:05.244622  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:05.250511  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:05.423174  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:05.589121  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:05.745126  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:05.750225  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:05.919857  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:06.088491  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:06.245375  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:06.250340  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:06.420117  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:06.588987  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:06.744969  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:06.749848  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:06.919999  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:07.088297  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:07.245449  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:07.250308  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:07.419928  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:07.589552  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:07.746091  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:07.752463  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:08.231623  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:08.232835  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:08.244588  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:08.254257  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:08.420572  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:08.588312  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:08.745327  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:08.755955  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:08.919716  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:09.088468  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:09.245377  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:09.250822  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:09.419585  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:09.588230  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:09.745055  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:09.750303  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:09.923403  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:10.087479  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:10.244830  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:10.250737  120744 kapi.go:107] duration metric: took 34.004446177s to wait for kubernetes.io/minikube-addons=registry ...
	I0617 10:46:10.419994  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:10.588853  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:10.745525  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:10.921289  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:11.089076  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:11.245480  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:11.419897  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:11.588382  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:11.745388  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:11.922349  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:12.087776  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:12.630692  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:12.636814  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:12.637085  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:12.747530  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:12.920820  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:13.087992  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:13.244794  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:13.419442  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:13.588560  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:13.746067  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:13.920415  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:14.089723  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:14.245805  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:14.419969  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:14.588590  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:14.747017  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:14.919733  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:15.088898  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:15.244844  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:15.419237  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:15.587483  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:15.745989  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:15.919779  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:16.088654  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:16.245412  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:16.419775  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:16.588134  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:16.744836  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:16.919867  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:17.088504  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:17.245964  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:17.419026  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:17.588669  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:17.745275  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:17.919566  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:18.087286  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:18.245168  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:18.420696  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:18.589271  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:18.745362  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:18.921633  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:19.088407  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:19.262387  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:19.419611  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:19.588809  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:19.746005  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:19.920501  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:20.088242  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:20.244956  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:20.419686  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:20.589396  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:20.876480  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:20.920398  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:21.088331  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:21.245534  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:21.420472  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:21.592559  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:21.747539  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:21.920941  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:22.089005  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:22.246015  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:22.422077  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:22.590785  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:22.745427  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:22.920171  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:23.345785  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:23.347280  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:23.421981  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:23.588961  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:23.745460  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:23.920545  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:24.088638  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:24.257643  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:24.427384  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:24.588194  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:24.748151  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:24.921407  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:25.088462  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:25.246244  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:25.425359  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:25.588132  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:25.745051  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:25.919707  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:26.088137  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:26.245358  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:26.420166  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:26.589119  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:26.745001  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:26.919864  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:27.088670  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:27.249016  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:27.419178  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:27.590250  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:27.749646  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:27.933660  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:28.094384  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:28.251997  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:28.424586  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:28.588620  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:28.745524  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:28.926876  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:29.088683  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:29.245842  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:29.420773  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:29.588435  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:29.745528  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:29.920940  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:30.088834  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:30.245568  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:30.421345  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:30.589086  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:30.745589  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:30.920357  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:31.088262  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:31.248579  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:31.420722  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:31.588740  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:31.746274  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:31.920477  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:32.088760  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:32.245808  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:32.420075  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:32.588184  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:32.745770  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:32.920379  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:33.088306  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:33.245709  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:33.422929  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:33.595155  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:33.744674  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:33.921593  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:34.089262  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:34.246893  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:34.420381  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:34.588460  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:34.746064  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:35.347641  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:35.350853  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:35.353301  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:35.421345  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:35.590070  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:35.744834  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:35.919511  120744 kapi.go:107] duration metric: took 57.005322761s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0617 10:46:36.087862  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:36.244825  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:36.589150  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:36.744917  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:37.088430  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:37.245717  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:37.588088  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:37.745089  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:38.089396  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:38.245340  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:38.588588  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:38.745762  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:39.088272  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:39.245882  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:39.588947  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:39.746677  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:40.089050  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:40.245570  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:40.588509  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:40.745712  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:41.088722  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:41.251549  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:41.587963  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:41.745572  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:42.089545  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:42.259500  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:42.590132  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:42.748285  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:43.088288  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:43.245256  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:43.588255  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:43.745048  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:44.089641  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:44.246075  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:44.589470  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:44.745486  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:45.088580  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:45.246438  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:45.588579  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:45.746437  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:46.088005  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:46.244851  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:46.589091  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:46.745012  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:47.089085  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:47.245622  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:47.918107  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:47.918646  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:48.089531  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:48.249038  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:48.588642  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:48.745385  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:49.095036  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:49.245756  120744 kapi.go:107] duration metric: took 1m13.006866721s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0617 10:46:49.589315  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:50.089054  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:50.589897  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:51.089813  120744 kapi.go:107] duration metric: took 1m10.50527324s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0617 10:46:51.091333  120744 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-465706 cluster.
	I0617 10:46:51.092647  120744 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0617 10:46:51.093861  120744 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0617 10:46:51.095185  120744 out.go:177] * Enabled addons: default-storageclass, cloud-spanner, yakd, metrics-server, inspektor-gadget, storage-provisioner, ingress-dns, nvidia-device-plugin, helm-tiller, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0617 10:46:51.096494  120744 addons.go:510] duration metric: took 1m23.514153884s for enable addons: enabled=[default-storageclass cloud-spanner yakd metrics-server inspektor-gadget storage-provisioner ingress-dns nvidia-device-plugin helm-tiller storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0617 10:46:51.096557  120744 start.go:245] waiting for cluster config update ...
	I0617 10:46:51.096585  120744 start.go:254] writing updated cluster config ...
	I0617 10:46:51.096864  120744 ssh_runner.go:195] Run: rm -f paused
	I0617 10:46:51.153852  120744 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0617 10:46:51.155674  120744 out.go:177] * Done! kubectl is now configured to use "addons-465706" cluster and "default" namespace by default
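
The gcp-auth message in the output above refers to an opt-out label on the pod spec. A minimal sketch of what that can look like, assuming the webhook checks for a "true" value (only the `gcp-auth-skip-secret` key is confirmed by the log; the pod name, label value, and image below are placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: skip-gcp-auth-example        # hypothetical name, for illustration only
      labels:
        gcp-auth-skip-secret: "true"     # key taken from the gcp-auth output above; value assumed
    spec:
      containers:
      - name: app
        image: nginx                     # any image; nginx also appears elsewhere in this report

Pods created with this label in place should be skipped by the gcp-auth admission webhook; per the output above, existing pods still need to be recreated (or the addon re-enabled with --refresh) to pick up credentials.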
	
	
	==> CRI-O <==
	Jun 17 10:49:39 addons-465706 crio[683]: time="2024-06-17 10:49:39.330217786Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718621379330194499,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584717,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8ed55f14-59f0-4fe6-8116-e7a145bb5838 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 10:49:39 addons-465706 crio[683]: time="2024-06-17 10:49:39.330813962Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=06bd5a6c-4c3b-4751-b4b9-72c1f71e822a name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 10:49:39 addons-465706 crio[683]: time="2024-06-17 10:49:39.330867693Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=06bd5a6c-4c3b-4751-b4b9-72c1f71e822a name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 10:49:39 addons-465706 crio[683]: time="2024-06-17 10:49:39.332047391Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db7a40997f28deb0b9d10a18d6e0c7e688e0554d4a98815ae3feb8d5bb5af3cc,PodSandboxId:4f8b12de8ce5e47fea9ac59517ae2b82d235a5fa5a76daa6c220b3a0ea2da03c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1718621370929279385,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-xb8zr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c5e753cb-3461-4aa7-bf40-adb3f9b66766,},Annotations:map[string]string{io.kubernetes.container.hash: 2b750b2,io.kubernetes.container
.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6693897cd633c5e476e3fd54f5e9b7f9f1269b19498f5326850dca97491457e,PodSandboxId:4777dab526a939281ee0b1b52bbdb623bfb0aa653f230ac78432661fd7fde11d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1718621233103741289,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 83bd573f-7cbc-4b39-a885-d2024b2fb1f1,},Annotations:map[string]string{io.kuberne
tes.container.hash: e078ea50,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:842ae954918aa02a862aab751b1f0640b768c714cea915e49f47098fe8a23a19,PodSandboxId:9f883f3d665349c1ab9bffa09b7876d500563d48d88cd56b7f8c444bc170b3c0,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6dec009152279527b62e3fac947a2e40f6f99bff29259974b995f0606a9213e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2cfebb9f82f21165fc736638311c2d6b6961fa0226a8164a753cbb589f6b1e43,State:CONTAINER_RUNNING,CreatedAt:1718621230720726905,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7fc69f7444-b25bd,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 426684ff-406b-40d7-a06f-5aab3179e257,},Annotations:map[string]string{io.kubernetes.container.hash: ca1e2563,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebb02f1a32711f02bfa7db92ba295caa4c8d9d29515048c64d2de9e327609872,PodSandboxId:95b80d384248f070c9810fbe50f625238bf4c791081e65f75c436cac01df0981,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1718621210183187820,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-5dp97,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e3d518e3-abec-4d34-be04-6f0340b7a9df,},Annotations:map[string]string{io.kubernetes.container.hash: 6361f7db,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae66f61d519ad92c227ed1b1c7188404acf2183222e584e0da4aa8bf02cba66e,PodSandboxId:faace30a5f6a6afe99cd18973e609262338bca3603ef950648b1aef6638f9207,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1718621191128545271,Labels:map[string
]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-kdhmg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 230a0f87-4965-4d7a-b368-11afefb6dec0,},Annotations:map[string]string{io.kubernetes.container.hash: 8ee316a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed31360eee8f1ce4a6a26e862ae9d57a7be3e1813fd2124ed07b9983809c786,PodSandboxId:ef685aec9b51abe2689e9fc03a88058e4497db66050f909009e25c4e7391f8c0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1
718621181594389643,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-bd4dk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8a719b15-34ab-41aa-ba56-5df632aa3334,},Annotations:map[string]string{io.kubernetes.container.hash: f6aece42,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:716370fa6ca1ba41d9fa95fd920747c901f7fce0c39bd84430da9f862b87ec37,PodSandboxId:cc64f2f39d7fa3d83604d26cd71eb937c19ddaefa6003412c3866dabef912ca5,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,C
reatedAt:1718621172733298106,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-phsmj,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 744b82c4-03d4-4e46-b250-37034c66f93a,},Annotations:map[string]string{io.kubernetes.container.hash: b436fb08,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d42b67d09bfc2e86be9a45094248a7a443132f92284cad0d34cff31f3978698,PodSandboxId:05df74ef20c0961cfaf19a0f1c656ae3348050a1a1e6a6621b322e26c05f75c7,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1718621163940744340,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-n7wsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cffe86c-6fa6-4955-a42c-234714e1bd11,},Annotations:map[string]string{io.kubernetes.container.hash: 83c55851,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2ffe2c0522573c6fb44e03297f5ade6ae49c1b346b92c335d0179921042fc45,PodSandboxId:b0190413947277d227cf0dcde0ba284345311e7eb8b3fd12d0d175745f57507d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d6
28db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718621135216023012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 732fd3d9-47fc-45cb-a823-c926365c9ea0,},Annotations:map[string]string{io.kubernetes.container.hash: d5f76ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad34891558241a97f15e5950a6c122e58aaff1510e294c94dfd85978567a13c,PodSandboxId:9f901812e713fc1bfb057868942601f39882a33dc2afe8187835638a168546f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674f
b0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718621132850000633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mdcv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a081c8c-6add-484d-8269-47fd5e1bfad4,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7acf46,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8182630f40dc3077251c143e1d0843b74fc2f903db0c6bb7de61a50036
51ce42,PodSandboxId:2d37693a5c8de462b0bb438e1c00ced09f46009526fd55cbbda4e539453ad676,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718621130022674714,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v55ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc268acf-6fc2-47f0-8a27-3909125a82fc,},Annotations:map[string]string{io.kubernetes.container.hash: ee7efe78,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32aaf27877c21f1872f89199888d6e46c7c128e1968884607b91b1ba82c84a09,PodSandboxId:8d190137ae1f0
51d09c68252dfa4b34d9f116032a0b1310c2acaf1ae81d93be3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718621107821861178,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-465706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16241773609d976ed01822e798b3e93e,},Annotations:map[string]string{io.kubernetes.container.hash: d7f020bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbbcc46101fca247086c67e958f8de3c1a294b6b24e57f2589442f78e8f1ea91,PodSandboxId:7b8b9405bb9d11bcfbac74d380678286bcd67c39321794eec7e9806ba870
34e7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718621107880412179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-465706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4380ac408e6ad735f7d32063d2d6cf11,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6981a9b7f93a47089761b31a184fb2e378c9384b3ff6a8fd6b36c028808740f0,PodSandboxId:cc2d5e9dd72320dac79fd5374f234bcbb66571bc5212b0ceb64d08c37fd9953c,Metadata:&Co
ntainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718621107802101954,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-465706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0726e8bca9e46b8d63d78deadac8845c,},Annotations:map[string]string{io.kubernetes.container.hash: 8ca32538,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2d1cd8b31398e19d08dd55347ee59581d9b378824a6b55badfacfb07bd3e6a3,PodSandboxId:401af38e9ed0d6d613bd2f84e74232be5388d0ffc635f8e8bdf4509a0a33d6c5,Metadata:&ContainerMetadata{N
ame:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718621107809349904,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-465706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e257f017a334b4976466298131eb526,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=06bd5a6c-4c3b-4751-b4b9-72c1f71e822a name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 10:49:39 addons-465706 crio[683]: time="2024-06-17 10:49:39.374889674Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c0808b95-6719-468a-b479-3ba4d857c48c name=/runtime.v1.RuntimeService/Version
	Jun 17 10:49:39 addons-465706 crio[683]: time="2024-06-17 10:49:39.375009339Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c0808b95-6719-468a-b479-3ba4d857c48c name=/runtime.v1.RuntimeService/Version
	Jun 17 10:49:39 addons-465706 crio[683]: time="2024-06-17 10:49:39.376220086Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fe2e6067-de0f-4e97-bdf8-6f5b5eb8254d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 10:49:39 addons-465706 crio[683]: time="2024-06-17 10:49:39.377613969Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718621379377587187,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584717,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe2e6067-de0f-4e97-bdf8-6f5b5eb8254d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 10:49:39 addons-465706 crio[683]: time="2024-06-17 10:49:39.378154040Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=43d769c8-70af-4af6-b6ce-4aa5f4bfcf04 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 10:49:39 addons-465706 crio[683]: time="2024-06-17 10:49:39.378206429Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=43d769c8-70af-4af6-b6ce-4aa5f4bfcf04 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 10:49:39 addons-465706 crio[683]: time="2024-06-17 10:49:39.378719072Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db7a40997f28deb0b9d10a18d6e0c7e688e0554d4a98815ae3feb8d5bb5af3cc,PodSandboxId:4f8b12de8ce5e47fea9ac59517ae2b82d235a5fa5a76daa6c220b3a0ea2da03c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1718621370929279385,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-xb8zr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c5e753cb-3461-4aa7-bf40-adb3f9b66766,},Annotations:map[string]string{io.kubernetes.container.hash: 2b750b2,io.kubernetes.container
.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6693897cd633c5e476e3fd54f5e9b7f9f1269b19498f5326850dca97491457e,PodSandboxId:4777dab526a939281ee0b1b52bbdb623bfb0aa653f230ac78432661fd7fde11d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1718621233103741289,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 83bd573f-7cbc-4b39-a885-d2024b2fb1f1,},Annotations:map[string]string{io.kuberne
tes.container.hash: e078ea50,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:842ae954918aa02a862aab751b1f0640b768c714cea915e49f47098fe8a23a19,PodSandboxId:9f883f3d665349c1ab9bffa09b7876d500563d48d88cd56b7f8c444bc170b3c0,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6dec009152279527b62e3fac947a2e40f6f99bff29259974b995f0606a9213e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2cfebb9f82f21165fc736638311c2d6b6961fa0226a8164a753cbb589f6b1e43,State:CONTAINER_RUNNING,CreatedAt:1718621230720726905,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7fc69f7444-b25bd,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 426684ff-406b-40d7-a06f-5aab3179e257,},Annotations:map[string]string{io.kubernetes.container.hash: ca1e2563,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebb02f1a32711f02bfa7db92ba295caa4c8d9d29515048c64d2de9e327609872,PodSandboxId:95b80d384248f070c9810fbe50f625238bf4c791081e65f75c436cac01df0981,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1718621210183187820,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-5dp97,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e3d518e3-abec-4d34-be04-6f0340b7a9df,},Annotations:map[string]string{io.kubernetes.container.hash: 6361f7db,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae66f61d519ad92c227ed1b1c7188404acf2183222e584e0da4aa8bf02cba66e,PodSandboxId:faace30a5f6a6afe99cd18973e609262338bca3603ef950648b1aef6638f9207,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1718621191128545271,Labels:map[string
]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-kdhmg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 230a0f87-4965-4d7a-b368-11afefb6dec0,},Annotations:map[string]string{io.kubernetes.container.hash: 8ee316a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed31360eee8f1ce4a6a26e862ae9d57a7be3e1813fd2124ed07b9983809c786,PodSandboxId:ef685aec9b51abe2689e9fc03a88058e4497db66050f909009e25c4e7391f8c0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1
718621181594389643,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-bd4dk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8a719b15-34ab-41aa-ba56-5df632aa3334,},Annotations:map[string]string{io.kubernetes.container.hash: f6aece42,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:716370fa6ca1ba41d9fa95fd920747c901f7fce0c39bd84430da9f862b87ec37,PodSandboxId:cc64f2f39d7fa3d83604d26cd71eb937c19ddaefa6003412c3866dabef912ca5,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,C
reatedAt:1718621172733298106,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-phsmj,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 744b82c4-03d4-4e46-b250-37034c66f93a,},Annotations:map[string]string{io.kubernetes.container.hash: b436fb08,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d42b67d09bfc2e86be9a45094248a7a443132f92284cad0d34cff31f3978698,PodSandboxId:05df74ef20c0961cfaf19a0f1c656ae3348050a1a1e6a6621b322e26c05f75c7,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1718621163940744340,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-n7wsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cffe86c-6fa6-4955-a42c-234714e1bd11,},Annotations:map[string]string{io.kubernetes.container.hash: 83c55851,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2ffe2c0522573c6fb44e03297f5ade6ae49c1b346b92c335d0179921042fc45,PodSandboxId:b0190413947277d227cf0dcde0ba284345311e7eb8b3fd12d0d175745f57507d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d6
28db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718621135216023012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 732fd3d9-47fc-45cb-a823-c926365c9ea0,},Annotations:map[string]string{io.kubernetes.container.hash: d5f76ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad34891558241a97f15e5950a6c122e58aaff1510e294c94dfd85978567a13c,PodSandboxId:9f901812e713fc1bfb057868942601f39882a33dc2afe8187835638a168546f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674f
b0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718621132850000633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mdcv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a081c8c-6add-484d-8269-47fd5e1bfad4,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7acf46,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8182630f40dc3077251c143e1d0843b74fc2f903db0c6bb7de61a50036
51ce42,PodSandboxId:2d37693a5c8de462b0bb438e1c00ced09f46009526fd55cbbda4e539453ad676,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718621130022674714,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v55ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc268acf-6fc2-47f0-8a27-3909125a82fc,},Annotations:map[string]string{io.kubernetes.container.hash: ee7efe78,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32aaf27877c21f1872f89199888d6e46c7c128e1968884607b91b1ba82c84a09,PodSandboxId:8d190137ae1f0
51d09c68252dfa4b34d9f116032a0b1310c2acaf1ae81d93be3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718621107821861178,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-465706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16241773609d976ed01822e798b3e93e,},Annotations:map[string]string{io.kubernetes.container.hash: d7f020bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbbcc46101fca247086c67e958f8de3c1a294b6b24e57f2589442f78e8f1ea91,PodSandboxId:7b8b9405bb9d11bcfbac74d380678286bcd67c39321794eec7e9806ba870
34e7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718621107880412179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-465706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4380ac408e6ad735f7d32063d2d6cf11,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6981a9b7f93a47089761b31a184fb2e378c9384b3ff6a8fd6b36c028808740f0,PodSandboxId:cc2d5e9dd72320dac79fd5374f234bcbb66571bc5212b0ceb64d08c37fd9953c,Metadata:&Co
ntainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718621107802101954,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-465706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0726e8bca9e46b8d63d78deadac8845c,},Annotations:map[string]string{io.kubernetes.container.hash: 8ca32538,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2d1cd8b31398e19d08dd55347ee59581d9b378824a6b55badfacfb07bd3e6a3,PodSandboxId:401af38e9ed0d6d613bd2f84e74232be5388d0ffc635f8e8bdf4509a0a33d6c5,Metadata:&ContainerMetadata{N
ame:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718621107809349904,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-465706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e257f017a334b4976466298131eb526,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=43d769c8-70af-4af6-b6ce-4aa5f4bfcf04 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 10:49:39 addons-465706 crio[683]: time="2024-06-17 10:49:39.411808317Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e564dade-dbfd-4d38-b524-171133dcb37e name=/runtime.v1.RuntimeService/Version
	Jun 17 10:49:39 addons-465706 crio[683]: time="2024-06-17 10:49:39.411875922Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e564dade-dbfd-4d38-b524-171133dcb37e name=/runtime.v1.RuntimeService/Version
	Jun 17 10:49:39 addons-465706 crio[683]: time="2024-06-17 10:49:39.412772608Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2f43aaf4-d67f-4ed6-8e52-6804b83c962d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 10:49:39 addons-465706 crio[683]: time="2024-06-17 10:49:39.414027486Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718621379414004582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584717,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2f43aaf4-d67f-4ed6-8e52-6804b83c962d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 10:49:39 addons-465706 crio[683]: time="2024-06-17 10:49:39.414618196Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=88fbc6f5-42a4-4a6b-96f3-4bfcf392404f name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 10:49:39 addons-465706 crio[683]: time="2024-06-17 10:49:39.414695962Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=88fbc6f5-42a4-4a6b-96f3-4bfcf392404f name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 10:49:39 addons-465706 crio[683]: time="2024-06-17 10:49:39.414992525Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db7a40997f28deb0b9d10a18d6e0c7e688e0554d4a98815ae3feb8d5bb5af3cc,PodSandboxId:4f8b12de8ce5e47fea9ac59517ae2b82d235a5fa5a76daa6c220b3a0ea2da03c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1718621370929279385,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-xb8zr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c5e753cb-3461-4aa7-bf40-adb3f9b66766,},Annotations:map[string]string{io.kubernetes.container.hash: 2b750b2,io.kubernetes.container
.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6693897cd633c5e476e3fd54f5e9b7f9f1269b19498f5326850dca97491457e,PodSandboxId:4777dab526a939281ee0b1b52bbdb623bfb0aa653f230ac78432661fd7fde11d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1718621233103741289,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 83bd573f-7cbc-4b39-a885-d2024b2fb1f1,},Annotations:map[string]string{io.kuberne
tes.container.hash: e078ea50,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:842ae954918aa02a862aab751b1f0640b768c714cea915e49f47098fe8a23a19,PodSandboxId:9f883f3d665349c1ab9bffa09b7876d500563d48d88cd56b7f8c444bc170b3c0,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6dec009152279527b62e3fac947a2e40f6f99bff29259974b995f0606a9213e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2cfebb9f82f21165fc736638311c2d6b6961fa0226a8164a753cbb589f6b1e43,State:CONTAINER_RUNNING,CreatedAt:1718621230720726905,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7fc69f7444-b25bd,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 426684ff-406b-40d7-a06f-5aab3179e257,},Annotations:map[string]string{io.kubernetes.container.hash: ca1e2563,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebb02f1a32711f02bfa7db92ba295caa4c8d9d29515048c64d2de9e327609872,PodSandboxId:95b80d384248f070c9810fbe50f625238bf4c791081e65f75c436cac01df0981,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1718621210183187820,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-5dp97,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e3d518e3-abec-4d34-be04-6f0340b7a9df,},Annotations:map[string]string{io.kubernetes.container.hash: 6361f7db,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae66f61d519ad92c227ed1b1c7188404acf2183222e584e0da4aa8bf02cba66e,PodSandboxId:faace30a5f6a6afe99cd18973e609262338bca3603ef950648b1aef6638f9207,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1718621191128545271,Labels:map[string
]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-kdhmg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 230a0f87-4965-4d7a-b368-11afefb6dec0,},Annotations:map[string]string{io.kubernetes.container.hash: 8ee316a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed31360eee8f1ce4a6a26e862ae9d57a7be3e1813fd2124ed07b9983809c786,PodSandboxId:ef685aec9b51abe2689e9fc03a88058e4497db66050f909009e25c4e7391f8c0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1
718621181594389643,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-bd4dk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8a719b15-34ab-41aa-ba56-5df632aa3334,},Annotations:map[string]string{io.kubernetes.container.hash: f6aece42,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:716370fa6ca1ba41d9fa95fd920747c901f7fce0c39bd84430da9f862b87ec37,PodSandboxId:cc64f2f39d7fa3d83604d26cd71eb937c19ddaefa6003412c3866dabef912ca5,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,C
reatedAt:1718621172733298106,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-phsmj,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 744b82c4-03d4-4e46-b250-37034c66f93a,},Annotations:map[string]string{io.kubernetes.container.hash: b436fb08,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d42b67d09bfc2e86be9a45094248a7a443132f92284cad0d34cff31f3978698,PodSandboxId:05df74ef20c0961cfaf19a0f1c656ae3348050a1a1e6a6621b322e26c05f75c7,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1718621163940744340,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-n7wsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cffe86c-6fa6-4955-a42c-234714e1bd11,},Annotations:map[string]string{io.kubernetes.container.hash: 83c55851,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2ffe2c0522573c6fb44e03297f5ade6ae49c1b346b92c335d0179921042fc45,PodSandboxId:b0190413947277d227cf0dcde0ba284345311e7eb8b3fd12d0d175745f57507d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d6
28db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718621135216023012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 732fd3d9-47fc-45cb-a823-c926365c9ea0,},Annotations:map[string]string{io.kubernetes.container.hash: d5f76ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad34891558241a97f15e5950a6c122e58aaff1510e294c94dfd85978567a13c,PodSandboxId:9f901812e713fc1bfb057868942601f39882a33dc2afe8187835638a168546f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674f
b0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718621132850000633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mdcv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a081c8c-6add-484d-8269-47fd5e1bfad4,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7acf46,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8182630f40dc3077251c143e1d0843b74fc2f903db0c6bb7de61a50036
51ce42,PodSandboxId:2d37693a5c8de462b0bb438e1c00ced09f46009526fd55cbbda4e539453ad676,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718621130022674714,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v55ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc268acf-6fc2-47f0-8a27-3909125a82fc,},Annotations:map[string]string{io.kubernetes.container.hash: ee7efe78,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32aaf27877c21f1872f89199888d6e46c7c128e1968884607b91b1ba82c84a09,PodSandboxId:8d190137ae1f0
51d09c68252dfa4b34d9f116032a0b1310c2acaf1ae81d93be3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718621107821861178,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-465706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16241773609d976ed01822e798b3e93e,},Annotations:map[string]string{io.kubernetes.container.hash: d7f020bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbbcc46101fca247086c67e958f8de3c1a294b6b24e57f2589442f78e8f1ea91,PodSandboxId:7b8b9405bb9d11bcfbac74d380678286bcd67c39321794eec7e9806ba870
34e7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718621107880412179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-465706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4380ac408e6ad735f7d32063d2d6cf11,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6981a9b7f93a47089761b31a184fb2e378c9384b3ff6a8fd6b36c028808740f0,PodSandboxId:cc2d5e9dd72320dac79fd5374f234bcbb66571bc5212b0ceb64d08c37fd9953c,Metadata:&Co
ntainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718621107802101954,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-465706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0726e8bca9e46b8d63d78deadac8845c,},Annotations:map[string]string{io.kubernetes.container.hash: 8ca32538,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2d1cd8b31398e19d08dd55347ee59581d9b378824a6b55badfacfb07bd3e6a3,PodSandboxId:401af38e9ed0d6d613bd2f84e74232be5388d0ffc635f8e8bdf4509a0a33d6c5,Metadata:&ContainerMetadata{N
ame:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718621107809349904,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-465706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e257f017a334b4976466298131eb526,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=88fbc6f5-42a4-4a6b-96f3-4bfcf392404f name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 10:49:39 addons-465706 crio[683]: time="2024-06-17 10:49:39.455594114Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=27f99971-ebb3-406d-85a0-06723d90477b name=/runtime.v1.RuntimeService/Version
	Jun 17 10:49:39 addons-465706 crio[683]: time="2024-06-17 10:49:39.455659188Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=27f99971-ebb3-406d-85a0-06723d90477b name=/runtime.v1.RuntimeService/Version
	Jun 17 10:49:39 addons-465706 crio[683]: time="2024-06-17 10:49:39.456829763Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ae2d342f-47f1-4b7d-a4dc-7cc50b641f81 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 10:49:39 addons-465706 crio[683]: time="2024-06-17 10:49:39.457946525Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718621379457916662,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584717,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ae2d342f-47f1-4b7d-a4dc-7cc50b641f81 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 10:49:39 addons-465706 crio[683]: time="2024-06-17 10:49:39.458384986Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e871ed2b-a0fa-4966-a5cf-4f8ec8760573 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 10:49:39 addons-465706 crio[683]: time="2024-06-17 10:49:39.458514349Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e871ed2b-a0fa-4966-a5cf-4f8ec8760573 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 10:49:39 addons-465706 crio[683]: time="2024-06-17 10:49:39.458811202Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db7a40997f28deb0b9d10a18d6e0c7e688e0554d4a98815ae3feb8d5bb5af3cc,PodSandboxId:4f8b12de8ce5e47fea9ac59517ae2b82d235a5fa5a76daa6c220b3a0ea2da03c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1718621370929279385,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-xb8zr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c5e753cb-3461-4aa7-bf40-adb3f9b66766,},Annotations:map[string]string{io.kubernetes.container.hash: 2b750b2,io.kubernetes.container
.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6693897cd633c5e476e3fd54f5e9b7f9f1269b19498f5326850dca97491457e,PodSandboxId:4777dab526a939281ee0b1b52bbdb623bfb0aa653f230ac78432661fd7fde11d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1718621233103741289,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 83bd573f-7cbc-4b39-a885-d2024b2fb1f1,},Annotations:map[string]string{io.kuberne
tes.container.hash: e078ea50,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:842ae954918aa02a862aab751b1f0640b768c714cea915e49f47098fe8a23a19,PodSandboxId:9f883f3d665349c1ab9bffa09b7876d500563d48d88cd56b7f8c444bc170b3c0,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6dec009152279527b62e3fac947a2e40f6f99bff29259974b995f0606a9213e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2cfebb9f82f21165fc736638311c2d6b6961fa0226a8164a753cbb589f6b1e43,State:CONTAINER_RUNNING,CreatedAt:1718621230720726905,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7fc69f7444-b25bd,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 426684ff-406b-40d7-a06f-5aab3179e257,},Annotations:map[string]string{io.kubernetes.container.hash: ca1e2563,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebb02f1a32711f02bfa7db92ba295caa4c8d9d29515048c64d2de9e327609872,PodSandboxId:95b80d384248f070c9810fbe50f625238bf4c791081e65f75c436cac01df0981,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1718621210183187820,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-5dp97,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e3d518e3-abec-4d34-be04-6f0340b7a9df,},Annotations:map[string]string{io.kubernetes.container.hash: 6361f7db,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae66f61d519ad92c227ed1b1c7188404acf2183222e584e0da4aa8bf02cba66e,PodSandboxId:faace30a5f6a6afe99cd18973e609262338bca3603ef950648b1aef6638f9207,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1718621191128545271,Labels:map[string
]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-kdhmg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 230a0f87-4965-4d7a-b368-11afefb6dec0,},Annotations:map[string]string{io.kubernetes.container.hash: 8ee316a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed31360eee8f1ce4a6a26e862ae9d57a7be3e1813fd2124ed07b9983809c786,PodSandboxId:ef685aec9b51abe2689e9fc03a88058e4497db66050f909009e25c4e7391f8c0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1
718621181594389643,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-bd4dk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8a719b15-34ab-41aa-ba56-5df632aa3334,},Annotations:map[string]string{io.kubernetes.container.hash: f6aece42,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:716370fa6ca1ba41d9fa95fd920747c901f7fce0c39bd84430da9f862b87ec37,PodSandboxId:cc64f2f39d7fa3d83604d26cd71eb937c19ddaefa6003412c3866dabef912ca5,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,C
reatedAt:1718621172733298106,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-phsmj,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 744b82c4-03d4-4e46-b250-37034c66f93a,},Annotations:map[string]string{io.kubernetes.container.hash: b436fb08,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d42b67d09bfc2e86be9a45094248a7a443132f92284cad0d34cff31f3978698,PodSandboxId:05df74ef20c0961cfaf19a0f1c656ae3348050a1a1e6a6621b322e26c05f75c7,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1718621163940744340,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-n7wsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cffe86c-6fa6-4955-a42c-234714e1bd11,},Annotations:map[string]string{io.kubernetes.container.hash: 83c55851,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2ffe2c0522573c6fb44e03297f5ade6ae49c1b346b92c335d0179921042fc45,PodSandboxId:b0190413947277d227cf0dcde0ba284345311e7eb8b3fd12d0d175745f57507d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d6
28db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718621135216023012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 732fd3d9-47fc-45cb-a823-c926365c9ea0,},Annotations:map[string]string{io.kubernetes.container.hash: d5f76ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad34891558241a97f15e5950a6c122e58aaff1510e294c94dfd85978567a13c,PodSandboxId:9f901812e713fc1bfb057868942601f39882a33dc2afe8187835638a168546f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674f
b0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718621132850000633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mdcv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a081c8c-6add-484d-8269-47fd5e1bfad4,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7acf46,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8182630f40dc3077251c143e1d0843b74fc2f903db0c6bb7de61a50036
51ce42,PodSandboxId:2d37693a5c8de462b0bb438e1c00ced09f46009526fd55cbbda4e539453ad676,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718621130022674714,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v55ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc268acf-6fc2-47f0-8a27-3909125a82fc,},Annotations:map[string]string{io.kubernetes.container.hash: ee7efe78,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32aaf27877c21f1872f89199888d6e46c7c128e1968884607b91b1ba82c84a09,PodSandboxId:8d190137ae1f0
51d09c68252dfa4b34d9f116032a0b1310c2acaf1ae81d93be3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718621107821861178,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-465706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16241773609d976ed01822e798b3e93e,},Annotations:map[string]string{io.kubernetes.container.hash: d7f020bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbbcc46101fca247086c67e958f8de3c1a294b6b24e57f2589442f78e8f1ea91,PodSandboxId:7b8b9405bb9d11bcfbac74d380678286bcd67c39321794eec7e9806ba870
34e7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718621107880412179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-465706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4380ac408e6ad735f7d32063d2d6cf11,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6981a9b7f93a47089761b31a184fb2e378c9384b3ff6a8fd6b36c028808740f0,PodSandboxId:cc2d5e9dd72320dac79fd5374f234bcbb66571bc5212b0ceb64d08c37fd9953c,Metadata:&Co
ntainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718621107802101954,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-465706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0726e8bca9e46b8d63d78deadac8845c,},Annotations:map[string]string{io.kubernetes.container.hash: 8ca32538,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2d1cd8b31398e19d08dd55347ee59581d9b378824a6b55badfacfb07bd3e6a3,PodSandboxId:401af38e9ed0d6d613bd2f84e74232be5388d0ffc635f8e8bdf4509a0a33d6c5,Metadata:&ContainerMetadata{N
ame:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718621107809349904,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-465706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e257f017a334b4976466298131eb526,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e871ed2b-a0fa-4966-a5cf-4f8ec8760573 name=/runtime.v1.RuntimeService/ListContainers
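	The three ListContainers responses above carry an identical container list; CRI-O is logging, at debug level, back-to-back RuntimeService/ImageService polls (Version, ImageFsInfo, ListContainers) received over its gRPC socket within the same second. The same endpoints can be queried by hand from the node with crictl — a minimal sketch, assuming crictl is present in the minikube VM and CRI-O is listening on its default socket path:
	
	    out/minikube-linux-amd64 -p addons-465706 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version"
	    out/minikube-linux-amd64 -p addons-465706 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo"
	    out/minikube-linux-amd64 -p addons-465706 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a"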
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	db7a40997f28d       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      8 seconds ago       Running             hello-world-app           0                   4f8b12de8ce5e       hello-world-app-86c47465fc-xb8zr
	a6693897cd633       docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa                              2 minutes ago       Running             nginx                     0                   4777dab526a93       nginx
	842ae954918aa       ghcr.io/headlamp-k8s/headlamp@sha256:6dec009152279527b62e3fac947a2e40f6f99bff29259974b995f0606a9213e5                        2 minutes ago       Running             headlamp                  0                   9f883f3d66534       headlamp-7fc69f7444-b25bd
	ebb02f1a32711       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 2 minutes ago       Running             gcp-auth                  0                   95b80d384248f       gcp-auth-5db96cd9b4-5dp97
	ae66f61d519ad       684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66                                                             3 minutes ago       Exited              patch                     2                   faace30a5f6a6       ingress-nginx-admission-patch-kdhmg
	3ed31360eee8f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   3 minutes ago       Exited              create                    0                   ef685aec9b51a       ingress-nginx-admission-create-bd4dk
	716370fa6ca1b       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              3 minutes ago       Running             yakd                      0                   cc64f2f39d7fa       yakd-dashboard-5ddbf7d777-phsmj
	3d42b67d09bfc       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        3 minutes ago       Running             metrics-server            0                   05df74ef20c09       metrics-server-c59844bb4-n7wsl
	d2ffe2c052257       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   b019041394727       storage-provisioner
	2ad3489155824       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             4 minutes ago       Running             coredns                   0                   9f901812e713f       coredns-7db6d8ff4d-mdcv2
	8182630f40dc3       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                                             4 minutes ago       Running             kube-proxy                0                   2d37693a5c8de       kube-proxy-v55ch
	bbbcc46101fca       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                                             4 minutes ago       Running             kube-scheduler            0                   7b8b9405bb9d1       kube-scheduler-addons-465706
	32aaf27877c21       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             4 minutes ago       Running             etcd                      0                   8d190137ae1f0       etcd-addons-465706
	a2d1cd8b31398       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                                             4 minutes ago       Running             kube-controller-manager   0                   401af38e9ed0d       kube-controller-manager-addons-465706
	6981a9b7f93a4       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                                             4 minutes ago       Running             kube-apiserver            0                   cc2d5e9dd7232       kube-apiserver-addons-465706
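	Each row above is a running or exited CRI-O container on the node. A quick client-side cross-check of the same workloads, sketched with the kubeconfig context used by this test run:
	
	    kubectl --context addons-465706 get pods -A -o wide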
	
	
	==> coredns [2ad34891558241a97f15e5950a6c122e58aaff1510e294c94dfd85978567a13c] <==
	[INFO] 10.244.0.7:57939 - 65457 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000186556s
	[INFO] 10.244.0.7:59607 - 5002 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000099123s
	[INFO] 10.244.0.7:59607 - 13192 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000097252s
	[INFO] 10.244.0.7:50990 - 52499 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000939s
	[INFO] 10.244.0.7:50990 - 11293 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00014566s
	[INFO] 10.244.0.7:59121 - 52178 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000149737s
	[INFO] 10.244.0.7:59121 - 7916 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00018693s
	[INFO] 10.244.0.7:54503 - 54261 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000072943s
	[INFO] 10.244.0.7:54503 - 43248 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000047584s
	[INFO] 10.244.0.7:59875 - 43432 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000056005s
	[INFO] 10.244.0.7:59875 - 1962 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000026421s
	[INFO] 10.244.0.7:53981 - 4808 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000097339s
	[INFO] 10.244.0.7:53981 - 42191 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000044342s
	[INFO] 10.244.0.7:43141 - 45928 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000088526s
	[INFO] 10.244.0.7:43141 - 5739 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000037809s
	[INFO] 10.244.0.22:36445 - 45319 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000493033s
	[INFO] 10.244.0.22:49831 - 2691 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000676654s
	[INFO] 10.244.0.22:36321 - 27743 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000109519s
	[INFO] 10.244.0.22:52203 - 3039 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000194331s
	[INFO] 10.244.0.22:34232 - 5470 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000085146s
	[INFO] 10.244.0.22:48517 - 13797 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000165313s
	[INFO] 10.244.0.22:54554 - 26482 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000386098s
	[INFO] 10.244.0.22:52675 - 15920 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.000654891s
	[INFO] 10.244.0.25:58470 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000332679s
	[INFO] 10.244.0.25:32810 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000145927s
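
	The alternating NXDOMAIN/NOERROR pairs above are the normal effect of the pod resolver's search-path expansion: with the default ndots:5 setting, a name such as registry.kube-system.svc.cluster.local is first tried with each search suffix appended (kube-system.svc.cluster.local, svc.cluster.local, cluster.local), each returning NXDOMAIN, before the fully qualified name resolves with NOERROR. A minimal way to observe the same expansion from inside the cluster, assuming a throwaway pod (the busybox image and pod name below are illustrative, not taken from this run):

	kubectl --context addons-465706 run dns-probe --image=busybox:1.36 --restart=Never --rm -it -- \
	  nslookup registry.kube-system.svc.cluster.local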
	
	
	==> describe nodes <==
	Name:               addons-465706
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-465706
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6
	                    minikube.k8s.io/name=addons-465706
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_17T10_45_13_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-465706
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jun 2024 10:45:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-465706
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jun 2024 10:49:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jun 2024 10:48:16 +0000   Mon, 17 Jun 2024 10:45:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jun 2024 10:48:16 +0000   Mon, 17 Jun 2024 10:45:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jun 2024 10:48:16 +0000   Mon, 17 Jun 2024 10:45:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jun 2024 10:48:16 +0000   Mon, 17 Jun 2024 10:45:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.165
	  Hostname:    addons-465706
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 bb267b5b0fce4947a99307aa5b63540f
	  System UUID:                bb267b5b-0fce-4947-a993-07aa5b63540f
	  Boot ID:                    2808e5a8-39d2-42d5-a6bb-91485b8144f2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-xb8zr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  gcp-auth                    gcp-auth-5db96cd9b4-5dp97                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  headlamp                    headlamp-7fc69f7444-b25bd                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 coredns-7db6d8ff4d-mdcv2                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m11s
	  kube-system                 etcd-addons-465706                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m26s
	  kube-system                 kube-apiserver-addons-465706             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-controller-manager-addons-465706    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-proxy-v55ch                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-scheduler-addons-465706             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 metrics-server-c59844bb4-n7wsl           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m6s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-phsmj          0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m7s                   kube-proxy       
	  Normal  Starting                 4m26s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m26s (x2 over 4m26s)  kubelet          Node addons-465706 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m26s (x2 over 4m26s)  kubelet          Node addons-465706 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m26s (x2 over 4m26s)  kubelet          Node addons-465706 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m25s                  kubelet          Node addons-465706 status is now: NodeReady
	  Normal  RegisteredNode           4m12s                  node-controller  Node addons-465706 event: Registered Node addons-465706 in Controller
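
	The node snapshot above (capacity, allocatable, the non-terminated pod table, and the event history) is the standard kubectl describe output for the control-plane node; if the profile were still running, the same view could be regenerated with (profile and node names taken from this report):

	kubectl --context addons-465706 describe node addons-465706
	kubectl --context addons-465706 get node addons-465706 -o jsonpath='{.status.allocatable}'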
	
	
	==> dmesg <==
	[  +0.075347] kauditd_printk_skb: 69 callbacks suppressed
	[ +14.716820] systemd-fstab-generator[1493]: Ignoring "noauto" option for root device
	[  +0.158317] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.027028] kauditd_printk_skb: 92 callbacks suppressed
	[  +5.454368] kauditd_printk_skb: 115 callbacks suppressed
	[  +5.061087] kauditd_printk_skb: 110 callbacks suppressed
	[ +10.552428] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.616392] kauditd_printk_skb: 2 callbacks suppressed
	[Jun17 10:46] kauditd_printk_skb: 11 callbacks suppressed
	[ +13.190506] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.377689] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.069615] kauditd_printk_skb: 53 callbacks suppressed
	[  +5.073777] kauditd_printk_skb: 49 callbacks suppressed
	[ +12.863963] kauditd_printk_skb: 3 callbacks suppressed
	[ +11.432781] kauditd_printk_skb: 52 callbacks suppressed
	[Jun17 10:47] kauditd_printk_skb: 58 callbacks suppressed
	[  +5.052601] kauditd_printk_skb: 74 callbacks suppressed
	[  +5.448552] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.301075] kauditd_printk_skb: 30 callbacks suppressed
	[ +23.987639] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.597970] kauditd_printk_skb: 13 callbacks suppressed
	[Jun17 10:48] kauditd_printk_skb: 9 callbacks suppressed
	[  +8.318320] kauditd_printk_skb: 33 callbacks suppressed
	[Jun17 10:49] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.009145] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [32aaf27877c21f1872f89199888d6e46c7c128e1968884607b91b1ba82c84a09] <==
	{"level":"warn","ts":"2024-06-17T10:46:47.904865Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"385.442204ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15705899242528180052 > lease_revoke:<id:59f69025ccf8069e>","response":"size:28"}
	{"level":"info","ts":"2024-06-17T10:46:47.904939Z","caller":"traceutil/trace.go:171","msg":"trace[559885333] linearizableReadLoop","detail":"{readStateIndex:1183; appliedIndex:1182; }","duration":"327.904555ms","start":"2024-06-17T10:46:47.577023Z","end":"2024-06-17T10:46:47.904928Z","steps":["trace[559885333] 'read index received'  (duration: 27.261µs)","trace[559885333] 'applied index is now lower than readState.Index'  (duration: 327.876325ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-17T10:46:47.905075Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"328.040401ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-06-17T10:46:47.905111Z","caller":"traceutil/trace.go:171","msg":"trace[1092474629] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1147; }","duration":"328.104667ms","start":"2024-06-17T10:46:47.577Z","end":"2024-06-17T10:46:47.905105Z","steps":["trace[1092474629] 'agreement among raft nodes before linearized reading'  (duration: 327.985775ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-17T10:46:47.905132Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-17T10:46:47.576984Z","time spent":"328.142744ms","remote":"127.0.0.1:50784","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":11476,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-06-17T10:46:47.905323Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.571997ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-06-17T10:46:47.90536Z","caller":"traceutil/trace.go:171","msg":"trace[99291614] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1147; }","duration":"171.62818ms","start":"2024-06-17T10:46:47.733727Z","end":"2024-06-17T10:46:47.905355Z","steps":["trace[99291614] 'agreement among raft nodes before linearized reading'  (duration: 171.531591ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T10:47:04.666521Z","caller":"traceutil/trace.go:171","msg":"trace[657713612] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1324; }","duration":"127.186757ms","start":"2024-06-17T10:47:04.53932Z","end":"2024-06-17T10:47:04.666507Z","steps":["trace[657713612] 'process raft request'  (duration: 127.031173ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T10:47:08.397759Z","caller":"traceutil/trace.go:171","msg":"trace[896851152] transaction","detail":"{read_only:false; response_revision:1374; number_of_response:1; }","duration":"134.548533ms","start":"2024-06-17T10:47:08.263196Z","end":"2024-06-17T10:47:08.397744Z","steps":["trace[896851152] 'process raft request'  (duration: 134.292671ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T10:47:10.612731Z","caller":"traceutil/trace.go:171","msg":"trace[1229839338] linearizableReadLoop","detail":"{readStateIndex:1426; appliedIndex:1425; }","duration":"385.036544ms","start":"2024-06-17T10:47:10.22768Z","end":"2024-06-17T10:47:10.612716Z","steps":["trace[1229839338] 'read index received'  (duration: 383.451884ms)","trace[1229839338] 'applied index is now lower than readState.Index'  (duration: 1.584143ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-17T10:47:10.612996Z","caller":"traceutil/trace.go:171","msg":"trace[2028357398] transaction","detail":"{read_only:false; response_revision:1382; number_of_response:1; }","duration":"386.263163ms","start":"2024-06-17T10:47:10.226723Z","end":"2024-06-17T10:47:10.612986Z","steps":["trace[2028357398] 'process raft request'  (duration: 384.447259ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-17T10:47:10.613103Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-17T10:47:10.226709Z","time spent":"386.336938ms","remote":"127.0.0.1:50676","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":782,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/ingress-nginx/ingress-nginx-controller-768f948f8f-624bl.17d9c4e277f93d76\" mod_revision:1170 > success:<request_put:<key:\"/registry/events/ingress-nginx/ingress-nginx-controller-768f948f8f-624bl.17d9c4e277f93d76\" value_size:675 lease:6482527205673403666 >> failure:<request_range:<key:\"/registry/events/ingress-nginx/ingress-nginx-controller-768f948f8f-624bl.17d9c4e277f93d76\" > >"}
	{"level":"warn","ts":"2024-06-17T10:47:10.613362Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"385.673555ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/\" range_end:\"/registry/csinodes0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-06-17T10:47:10.613419Z","caller":"traceutil/trace.go:171","msg":"trace[1513413840] range","detail":"{range_begin:/registry/csinodes/; range_end:/registry/csinodes0; response_count:0; response_revision:1382; }","duration":"385.750025ms","start":"2024-06-17T10:47:10.227662Z","end":"2024-06-17T10:47:10.613412Z","steps":["trace[1513413840] 'agreement among raft nodes before linearized reading'  (duration: 385.668605ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-17T10:47:10.613536Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-17T10:47:10.227652Z","time spent":"385.876004ms","remote":"127.0.0.1:50994","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":1,"response size":30,"request content":"key:\"/registry/csinodes/\" range_end:\"/registry/csinodes0\" count_only:true "}
	{"level":"warn","ts":"2024-06-17T10:47:10.613704Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"293.892105ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3966"}
	{"level":"info","ts":"2024-06-17T10:47:10.613742Z","caller":"traceutil/trace.go:171","msg":"trace[1081912122] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1382; }","duration":"294.007757ms","start":"2024-06-17T10:47:10.319728Z","end":"2024-06-17T10:47:10.613735Z","steps":["trace[1081912122] 'agreement among raft nodes before linearized reading'  (duration: 293.917336ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-17T10:47:10.613918Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.859643ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6370"}
	{"level":"info","ts":"2024-06-17T10:47:10.613956Z","caller":"traceutil/trace.go:171","msg":"trace[237034393] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1382; }","duration":"108.918493ms","start":"2024-06-17T10:47:10.505032Z","end":"2024-06-17T10:47:10.613951Z","steps":["trace[237034393] 'agreement among raft nodes before linearized reading'  (duration: 108.837542ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-17T10:47:20.812309Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.144198ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15705899242528180915 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/local-path-storage/helper-pod-create-pvc-f296beee-9e3b-4086-a049-00efb1334af0.17d9c4e4df90c375\" mod_revision:1242 > success:<request_delete_range:<key:\"/registry/events/local-path-storage/helper-pod-create-pvc-f296beee-9e3b-4086-a049-00efb1334af0.17d9c4e4df90c375\" > > failure:<request_range:<key:\"/registry/events/local-path-storage/helper-pod-create-pvc-f296beee-9e3b-4086-a049-00efb1334af0.17d9c4e4df90c375\" > >>","response":"size:18"}
	{"level":"info","ts":"2024-06-17T10:47:20.812832Z","caller":"traceutil/trace.go:171","msg":"trace[1618511398] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1455; }","duration":"194.030768ms","start":"2024-06-17T10:47:20.618785Z","end":"2024-06-17T10:47:20.812816Z","steps":["trace[1618511398] 'process raft request'  (duration: 86.100799ms)","trace[1618511398] 'compare'  (duration: 106.912298ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-17T10:47:46.844204Z","caller":"traceutil/trace.go:171","msg":"trace[977501368] transaction","detail":"{read_only:false; response_revision:1522; number_of_response:1; }","duration":"330.583817ms","start":"2024-06-17T10:47:46.513583Z","end":"2024-06-17T10:47:46.844167Z","steps":["trace[977501368] 'process raft request'  (duration: 330.466371ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-17T10:47:46.844417Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-17T10:47:46.513569Z","time spent":"330.740164ms","remote":"127.0.0.1:50872","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1509 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2024-06-17T10:47:46.848894Z","caller":"traceutil/trace.go:171","msg":"trace[480039458] transaction","detail":"{read_only:false; response_revision:1523; number_of_response:1; }","duration":"182.123885ms","start":"2024-06-17T10:47:46.666756Z","end":"2024-06-17T10:47:46.84888Z","steps":["trace[480039458] 'process raft request'  (duration: 182.060188ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T10:47:51.923721Z","caller":"traceutil/trace.go:171","msg":"trace[1659140217] transaction","detail":"{read_only:false; response_revision:1538; number_of_response:1; }","duration":"143.46256ms","start":"2024-06-17T10:47:51.780242Z","end":"2024-06-17T10:47:51.923704Z","steps":["trace[1659140217] 'process raft request'  (duration: 143.267563ms)"],"step_count":1}
	
	
	==> gcp-auth [ebb02f1a32711f02bfa7db92ba295caa4c8d9d29515048c64d2de9e327609872] <==
	2024/06/17 10:46:50 GCP Auth Webhook started!
	2024/06/17 10:46:57 Ready to marshal response ...
	2024/06/17 10:46:57 Ready to write response ...
	2024/06/17 10:46:57 Ready to marshal response ...
	2024/06/17 10:46:57 Ready to write response ...
	2024/06/17 10:46:57 Ready to marshal response ...
	2024/06/17 10:46:57 Ready to write response ...
	2024/06/17 10:47:02 Ready to marshal response ...
	2024/06/17 10:47:02 Ready to write response ...
	2024/06/17 10:47:03 Ready to marshal response ...
	2024/06/17 10:47:03 Ready to write response ...
	2024/06/17 10:47:03 Ready to marshal response ...
	2024/06/17 10:47:03 Ready to write response ...
	2024/06/17 10:47:03 Ready to marshal response ...
	2024/06/17 10:47:03 Ready to write response ...
	2024/06/17 10:47:07 Ready to marshal response ...
	2024/06/17 10:47:07 Ready to write response ...
	2024/06/17 10:47:14 Ready to marshal response ...
	2024/06/17 10:47:14 Ready to write response ...
	2024/06/17 10:47:42 Ready to marshal response ...
	2024/06/17 10:47:42 Ready to write response ...
	2024/06/17 10:48:06 Ready to marshal response ...
	2024/06/17 10:48:06 Ready to write response ...
	2024/06/17 10:49:28 Ready to marshal response ...
	2024/06/17 10:49:28 Ready to write response ...
	
	
	==> kernel <==
	 10:49:39 up 5 min,  0 users,  load average: 0.63, 1.07, 0.54
	Linux addons-465706 5.10.207 #1 SMP Tue Jun 11 00:16:05 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6981a9b7f93a47089761b31a184fb2e378c9384b3ff6a8fd6b36c028808740f0] <==
	E0617 10:47:13.699273       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.93.131:443/apis/metrics.k8s.io/v1beta1: Get "https://10.103.93.131:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.103.93.131:443: connect: connection refused
	W0617 10:47:13.703824       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 10:47:13.704065       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0617 10:47:13.706878       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.93.131:443/apis/metrics.k8s.io/v1beta1: Get "https://10.103.93.131:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.103.93.131:443: connect: connection refused
	E0617 10:47:13.708194       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.93.131:443/apis/metrics.k8s.io/v1beta1: Get "https://10.103.93.131:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.103.93.131:443: connect: connection refused
	E0617 10:47:13.718754       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.93.131:443/apis/metrics.k8s.io/v1beta1: Get "https://10.103.93.131:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.103.93.131:443: connect: connection refused
	I0617 10:47:13.830660       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0617 10:47:30.950676       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0617 10:47:53.197343       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0617 10:48:04.047009       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0617 10:48:05.076506       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0617 10:48:22.722027       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0617 10:48:22.722087       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0617 10:48:22.798403       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0617 10:48:22.798588       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0617 10:48:22.839107       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0617 10:48:22.839206       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0617 10:48:22.874817       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0617 10:48:22.874868       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0617 10:48:23.821026       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0617 10:48:23.875207       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0617 10:48:23.879906       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0617 10:49:28.647296       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.157.242"}
	E0617 10:49:31.679293       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
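
	The v1beta1.metrics.k8s.io errors at 10:47:13 show the aggregation layer briefly unable to reach the metrics-server Service (connection refused) while that pod was still starting; this is the same aggregated API that kubectl top depends on in the MetricsServer test further down. Whether the APIService ever reported Available could be checked with standard kubectl (not something this test runs itself):

	kubectl --context addons-465706 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-465706 get apiservice v1beta1.metrics.k8s.io -o jsonpath='{.status.conditions[?(@.type=="Available")].message}'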
	
	
	==> kube-controller-manager [a2d1cd8b31398e19d08dd55347ee59581d9b378824a6b55badfacfb07bd3e6a3] <==
	W0617 10:48:42.012331       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0617 10:48:42.012384       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0617 10:48:42.903477       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0617 10:48:42.903578       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0617 10:48:42.967345       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0617 10:48:42.967397       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0617 10:48:52.665134       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0617 10:48:52.665186       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0617 10:49:06.694924       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0617 10:49:06.695039       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0617 10:49:08.316850       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0617 10:49:08.316954       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0617 10:49:22.814823       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0617 10:49:22.814884       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0617 10:49:28.500896       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="38.508671ms"
	I0617 10:49:28.508731       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="7.719276ms"
	I0617 10:49:28.508982       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="31.315µs"
	I0617 10:49:28.514510       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="32.519µs"
	I0617 10:49:31.577302       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0617 10:49:31.583295       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0617 10:49:31.586908       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="3.683µs"
	I0617 10:49:31.874359       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="6.629963ms"
	I0617 10:49:31.875352       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="25.672µs"
	W0617 10:49:33.435631       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0617 10:49:33.435674       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [8182630f40dc3077251c143e1d0843b74fc2f903db0c6bb7de61a5003651ce42] <==
	I0617 10:45:31.263957       1 server_linux.go:69] "Using iptables proxy"
	I0617 10:45:31.301600       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.165"]
	I0617 10:45:32.137792       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0617 10:45:32.137854       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0617 10:45:32.137892       1 server_linux.go:165] "Using iptables Proxier"
	I0617 10:45:32.219710       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0617 10:45:32.219930       1 server.go:872] "Version info" version="v1.30.1"
	I0617 10:45:32.219946       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0617 10:45:32.222800       1 config.go:192] "Starting service config controller"
	I0617 10:45:32.222831       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0617 10:45:32.222853       1 config.go:101] "Starting endpoint slice config controller"
	I0617 10:45:32.222857       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0617 10:45:32.224794       1 config.go:319] "Starting node config controller"
	I0617 10:45:32.224826       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0617 10:45:32.323275       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0617 10:45:32.323340       1 shared_informer.go:320] Caches are synced for service config
	I0617 10:45:32.325561       1 shared_informer.go:320] Caches are synced for node config
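
	kube-proxy found no IPv6 iptables support and fell back to single-stack IPv4 iptables mode, then synced its service, endpoint-slice, and node config caches. Assuming the VM were still reachable, the NAT chains it programs could be inspected from the host (KUBE-SERVICES is the standard top-level chain created by the iptables proxier; the command below is an example, not part of the test):

	out/minikube-linux-amd64 -p addons-465706 ssh "sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20"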
	
	
	==> kube-scheduler [bbbcc46101fca247086c67e958f8de3c1a294b6b24e57f2589442f78e8f1ea91] <==
	W0617 10:45:10.537775       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0617 10:45:10.537825       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0617 10:45:11.412156       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0617 10:45:11.412204       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0617 10:45:11.445783       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0617 10:45:11.445830       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0617 10:45:11.491296       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0617 10:45:11.491343       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0617 10:45:11.508083       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0617 10:45:11.508218       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0617 10:45:11.533380       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0617 10:45:11.533520       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0617 10:45:11.543182       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0617 10:45:11.543253       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0617 10:45:11.584971       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0617 10:45:11.585123       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0617 10:45:11.603102       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0617 10:45:11.604190       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0617 10:45:11.642145       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0617 10:45:11.642190       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0617 10:45:11.715680       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0617 10:45:11.715728       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0617 10:45:11.767491       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0617 10:45:11.767536       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0617 10:45:13.931004       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
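
	The burst of "forbidden" list/watch errors is confined to the first seconds after 10:45:10, during the bootstrap window before the default RBAC policy for system:kube-scheduler is in place; the errors stop once caches sync at 10:45:13. The effective permissions can be spot-checked afterwards with kubectl's access-review helper (standard commands, shown here only as examples):

	kubectl --context addons-465706 auth can-i list nodes --as=system:kube-scheduler
	kubectl --context addons-465706 auth can-i list configmaps -n kube-system --as=system:kube-scheduler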
	
	
	==> kubelet <==
	Jun 17 10:49:28 addons-465706 kubelet[1274]: I0617 10:49:28.494709    1274 memory_manager.go:354] "RemoveStaleState removing state" podUID="704705e9-4f4b-4176-be37-424df07e8f4a" containerName="node-driver-registrar"
	Jun 17 10:49:28 addons-465706 kubelet[1274]: I0617 10:49:28.494740    1274 memory_manager.go:354] "RemoveStaleState removing state" podUID="704705e9-4f4b-4176-be37-424df07e8f4a" containerName="hostpath"
	Jun 17 10:49:28 addons-465706 kubelet[1274]: I0617 10:49:28.494774    1274 memory_manager.go:354] "RemoveStaleState removing state" podUID="597fa742-0125-4713-8630-8191b4941bb0" containerName="volume-snapshot-controller"
	Jun 17 10:49:28 addons-465706 kubelet[1274]: I0617 10:49:28.494804    1274 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3a12dde-1859-4807-90f3-4e9f15f0acee" containerName="csi-attacher"
	Jun 17 10:49:28 addons-465706 kubelet[1274]: I0617 10:49:28.494834    1274 memory_manager.go:354] "RemoveStaleState removing state" podUID="704705e9-4f4b-4176-be37-424df07e8f4a" containerName="csi-external-health-monitor-controller"
	Jun 17 10:49:28 addons-465706 kubelet[1274]: I0617 10:49:28.666396    1274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/c5e753cb-3461-4aa7-bf40-adb3f9b66766-gcp-creds\") pod \"hello-world-app-86c47465fc-xb8zr\" (UID: \"c5e753cb-3461-4aa7-bf40-adb3f9b66766\") " pod="default/hello-world-app-86c47465fc-xb8zr"
	Jun 17 10:49:28 addons-465706 kubelet[1274]: I0617 10:49:28.666726    1274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss697\" (UniqueName: \"kubernetes.io/projected/c5e753cb-3461-4aa7-bf40-adb3f9b66766-kube-api-access-ss697\") pod \"hello-world-app-86c47465fc-xb8zr\" (UID: \"c5e753cb-3461-4aa7-bf40-adb3f9b66766\") " pod="default/hello-world-app-86c47465fc-xb8zr"
	Jun 17 10:49:29 addons-465706 kubelet[1274]: I0617 10:49:29.977043    1274 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8q4p\" (UniqueName: \"kubernetes.io/projected/5887752c-36aa-4a81-a049-587806fdceb7-kube-api-access-v8q4p\") pod \"5887752c-36aa-4a81-a049-587806fdceb7\" (UID: \"5887752c-36aa-4a81-a049-587806fdceb7\") "
	Jun 17 10:49:29 addons-465706 kubelet[1274]: I0617 10:49:29.987052    1274 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5887752c-36aa-4a81-a049-587806fdceb7-kube-api-access-v8q4p" (OuterVolumeSpecName: "kube-api-access-v8q4p") pod "5887752c-36aa-4a81-a049-587806fdceb7" (UID: "5887752c-36aa-4a81-a049-587806fdceb7"). InnerVolumeSpecName "kube-api-access-v8q4p". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 17 10:49:30 addons-465706 kubelet[1274]: I0617 10:49:30.078199    1274 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-v8q4p\" (UniqueName: \"kubernetes.io/projected/5887752c-36aa-4a81-a049-587806fdceb7-kube-api-access-v8q4p\") on node \"addons-465706\" DevicePath \"\""
	Jun 17 10:49:30 addons-465706 kubelet[1274]: I0617 10:49:30.846390    1274 scope.go:117] "RemoveContainer" containerID="4ed708a2b62c61ede84f5179b2500eda2335d1ce660ccc177d99a733aa8d05af"
	Jun 17 10:49:31 addons-465706 kubelet[1274]: I0617 10:49:31.114556    1274 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5887752c-36aa-4a81-a049-587806fdceb7" path="/var/lib/kubelet/pods/5887752c-36aa-4a81-a049-587806fdceb7/volumes"
	Jun 17 10:49:33 addons-465706 kubelet[1274]: I0617 10:49:33.114255    1274 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="230a0f87-4965-4d7a-b368-11afefb6dec0" path="/var/lib/kubelet/pods/230a0f87-4965-4d7a-b368-11afefb6dec0/volumes"
	Jun 17 10:49:33 addons-465706 kubelet[1274]: I0617 10:49:33.115013    1274 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a719b15-34ab-41aa-ba56-5df632aa3334" path="/var/lib/kubelet/pods/8a719b15-34ab-41aa-ba56-5df632aa3334/volumes"
	Jun 17 10:49:34 addons-465706 kubelet[1274]: I0617 10:49:34.813648    1274 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a3c3fd4a-57bb-4d89-b6b9-57f6991a9c06-webhook-cert\") pod \"a3c3fd4a-57bb-4d89-b6b9-57f6991a9c06\" (UID: \"a3c3fd4a-57bb-4d89-b6b9-57f6991a9c06\") "
	Jun 17 10:49:34 addons-465706 kubelet[1274]: I0617 10:49:34.813679    1274 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dctzz\" (UniqueName: \"kubernetes.io/projected/a3c3fd4a-57bb-4d89-b6b9-57f6991a9c06-kube-api-access-dctzz\") pod \"a3c3fd4a-57bb-4d89-b6b9-57f6991a9c06\" (UID: \"a3c3fd4a-57bb-4d89-b6b9-57f6991a9c06\") "
	Jun 17 10:49:34 addons-465706 kubelet[1274]: I0617 10:49:34.818054    1274 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3c3fd4a-57bb-4d89-b6b9-57f6991a9c06-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a3c3fd4a-57bb-4d89-b6b9-57f6991a9c06" (UID: "a3c3fd4a-57bb-4d89-b6b9-57f6991a9c06"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jun 17 10:49:34 addons-465706 kubelet[1274]: I0617 10:49:34.817545    1274 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3c3fd4a-57bb-4d89-b6b9-57f6991a9c06-kube-api-access-dctzz" (OuterVolumeSpecName: "kube-api-access-dctzz") pod "a3c3fd4a-57bb-4d89-b6b9-57f6991a9c06" (UID: "a3c3fd4a-57bb-4d89-b6b9-57f6991a9c06"). InnerVolumeSpecName "kube-api-access-dctzz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 17 10:49:34 addons-465706 kubelet[1274]: I0617 10:49:34.869121    1274 scope.go:117] "RemoveContainer" containerID="41826447c194be0efa7996f40d4b68d7d0d9f7958a0d3da2472211f860a776a0"
	Jun 17 10:49:34 addons-465706 kubelet[1274]: I0617 10:49:34.890105    1274 scope.go:117] "RemoveContainer" containerID="41826447c194be0efa7996f40d4b68d7d0d9f7958a0d3da2472211f860a776a0"
	Jun 17 10:49:34 addons-465706 kubelet[1274]: E0617 10:49:34.890662    1274 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41826447c194be0efa7996f40d4b68d7d0d9f7958a0d3da2472211f860a776a0\": container with ID starting with 41826447c194be0efa7996f40d4b68d7d0d9f7958a0d3da2472211f860a776a0 not found: ID does not exist" containerID="41826447c194be0efa7996f40d4b68d7d0d9f7958a0d3da2472211f860a776a0"
	Jun 17 10:49:34 addons-465706 kubelet[1274]: I0617 10:49:34.890712    1274 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41826447c194be0efa7996f40d4b68d7d0d9f7958a0d3da2472211f860a776a0"} err="failed to get container status \"41826447c194be0efa7996f40d4b68d7d0d9f7958a0d3da2472211f860a776a0\": rpc error: code = NotFound desc = could not find container \"41826447c194be0efa7996f40d4b68d7d0d9f7958a0d3da2472211f860a776a0\": container with ID starting with 41826447c194be0efa7996f40d4b68d7d0d9f7958a0d3da2472211f860a776a0 not found: ID does not exist"
	Jun 17 10:49:34 addons-465706 kubelet[1274]: I0617 10:49:34.914156    1274 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-dctzz\" (UniqueName: \"kubernetes.io/projected/a3c3fd4a-57bb-4d89-b6b9-57f6991a9c06-kube-api-access-dctzz\") on node \"addons-465706\" DevicePath \"\""
	Jun 17 10:49:34 addons-465706 kubelet[1274]: I0617 10:49:34.914197    1274 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a3c3fd4a-57bb-4d89-b6b9-57f6991a9c06-webhook-cert\") on node \"addons-465706\" DevicePath \"\""
	Jun 17 10:49:35 addons-465706 kubelet[1274]: I0617 10:49:35.111807    1274 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3c3fd4a-57bb-4d89-b6b9-57f6991a9c06" path="/var/lib/kubelet/pods/a3c3fd4a-57bb-4d89-b6b9-57f6991a9c06/volumes"
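
	The NotFound errors at 10:49:34 come from the kubelet asking the CRI runtime for the status of a container that had already been removed during the ingress-nginx teardown (note the webhook-cert volume unmounts just above); they are noise from that deletion race rather than a new failure. The containers still known to the runtime could be listed directly through crictl, for example:

	out/minikube-linux-amd64 -p addons-465706 ssh "sudo crictl ps -a | grep -i ingress"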
	
	
	==> storage-provisioner [d2ffe2c0522573c6fb44e03297f5ade6ae49c1b346b92c335d0179921042fc45] <==
	I0617 10:45:36.654583       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0617 10:45:36.697233       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0617 10:45:36.697372       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0617 10:45:36.735672       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0617 10:45:36.745870       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6b81ea56-8f03-416e-952e-e31581071fc3", APIVersion:"v1", ResourceVersion:"718", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-465706_ea7e3310-827f-4f80-9650-4d454d888578 became leader
	I0617 10:45:36.747608       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-465706_ea7e3310-827f-4f80-9650-4d454d888578!
	I0617 10:45:36.850940       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-465706_ea7e3310-827f-4f80-9650-4d454d888578!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-465706 -n addons-465706
helpers_test.go:261: (dbg) Run:  kubectl --context addons-465706 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (153.33s)
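
	By the time this post-mortem was captured, no ingress-nginx pod appears in the node's non-terminated pod table above, and the controller-manager log shows the admission jobs and the ingress-nginx-controller ReplicaSet being cleaned up at 10:49:31, so the controller appears to have already been torn down. A follow-up check of the addon, had the profile still existed, would look like:

	kubectl --context addons-465706 -n ingress-nginx get pods -o wide
	kubectl --context addons-465706 -n ingress-nginx get events --sort-by=.lastTimestamp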

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (332.77s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.954708ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-n7wsl" [9cffe86c-6fa6-4955-a42c-234714e1bd11] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005105515s
addons_test.go:417: (dbg) Run:  kubectl --context addons-465706 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-465706 top pods -n kube-system: exit status 1 (72.350827ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/etcd-addons-465706, age: 2m9.409755187s

                                                
                                                
** /stderr **
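
	Each retry below fails the same way: kubectl top pods asks the metrics.k8s.io aggregated API for a usage sample of every pod it lists, and "Metrics not available for pod" means the API has no sample yet for the named pod. The aggregated API can also be queried directly to see which pods, if any, have samples (standard raw API path, not something the test itself runs):

	kubectl --context addons-465706 get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods"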
addons_test.go:417: (dbg) Run:  kubectl --context addons-465706 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-465706 top pods -n kube-system: exit status 1 (65.810924ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/etcd-addons-465706, age: 2m11.301084755s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-465706 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-465706 top pods -n kube-system: exit status 1 (64.173822ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-mdcv2, age: 2m1.467341481s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-465706 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-465706 top pods -n kube-system: exit status 1 (67.219814ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-mdcv2, age: 2m10.919945257s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-465706 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-465706 top pods -n kube-system: exit status 1 (71.631026ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-mdcv2, age: 2m23.018264052s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-465706 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-465706 top pods -n kube-system: exit status 1 (72.247747ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-mdcv2, age: 2m41.905208092s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-465706 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-465706 top pods -n kube-system: exit status 1 (65.830697ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-mdcv2, age: 2m56.72866947s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-465706 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-465706 top pods -n kube-system: exit status 1 (62.169155ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-mdcv2, age: 3m33.494759428s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-465706 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-465706 top pods -n kube-system: exit status 1 (93.820092ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-mdcv2, age: 4m3.012936354s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-465706 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-465706 top pods -n kube-system: exit status 1 (64.575017ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-mdcv2, age: 4m50.165584575s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-465706 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-465706 top pods -n kube-system: exit status 1 (65.800987ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-mdcv2, age: 5m59.732639077s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-465706 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-465706 top pods -n kube-system: exit status 1 (65.474063ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-mdcv2, age: 6m36.159940615s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-465706 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-465706 top pods -n kube-system: exit status 1 (61.599062ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-mdcv2, age: 7m19.383010875s

** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
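When kubectl top pods repeatedly reports "Metrics not available", the usual next step is to confirm that the metrics API is actually registered and that metrics-server itself is healthy. A minimal sketch of such a check against this profile (the v1beta1.metrics.k8s.io APIService and deploy/metrics-server names are the standard metrics-server ones and are assumed here, not taken from this log):

    # Is the metrics API registered and marked Available?
    kubectl --context addons-465706 get apiservice v1beta1.metrics.k8s.io
    # What is metrics-server itself logging?
    kubectl --context addons-465706 -n kube-system logs deploy/metrics-server --tail=50
    # Node metrics usually become available before per-pod metrics
    kubectl --context addons-465706 top nodes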
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-465706 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-465706 -n addons-465706
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-465706 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-465706 logs -n 25: (1.364092154s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 17 Jun 24 10:44 UTC | 17 Jun 24 10:44 UTC |
	| delete  | -p download-only-999061                                                                     | download-only-999061 | jenkins | v1.33.1 | 17 Jun 24 10:44 UTC | 17 Jun 24 10:44 UTC |
	| delete  | -p download-only-033984                                                                     | download-only-033984 | jenkins | v1.33.1 | 17 Jun 24 10:44 UTC | 17 Jun 24 10:44 UTC |
	| delete  | -p download-only-999061                                                                     | download-only-999061 | jenkins | v1.33.1 | 17 Jun 24 10:44 UTC | 17 Jun 24 10:44 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-716953 | jenkins | v1.33.1 | 17 Jun 24 10:44 UTC |                     |
	|         | binary-mirror-716953                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:44727                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-716953                                                                     | binary-mirror-716953 | jenkins | v1.33.1 | 17 Jun 24 10:44 UTC | 17 Jun 24 10:44 UTC |
	| addons  | enable dashboard -p                                                                         | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:44 UTC |                     |
	|         | addons-465706                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:44 UTC |                     |
	|         | addons-465706                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-465706 --wait=true                                                                | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:44 UTC | 17 Jun 24 10:46 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:46 UTC | 17 Jun 24 10:46 UTC |
	|         | -p addons-465706                                                                            |                      |         |         |                     |                     |
	| addons  | addons-465706 addons disable                                                                | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:47 UTC | 17 Jun 24 10:47 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:47 UTC | 17 Jun 24 10:47 UTC |
	|         | -p addons-465706                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:47 UTC | 17 Jun 24 10:47 UTC |
	|         | addons-465706                                                                               |                      |         |         |                     |                     |
	| ip      | addons-465706 ip                                                                            | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:47 UTC | 17 Jun 24 10:47 UTC |
	| addons  | addons-465706 addons disable                                                                | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:47 UTC | 17 Jun 24 10:47 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-465706 ssh cat                                                                       | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:47 UTC | 17 Jun 24 10:47 UTC |
	|         | /opt/local-path-provisioner/pvc-f296beee-9e3b-4086-a049-00efb1334af0_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-465706 addons disable                                                                | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:47 UTC | 17 Jun 24 10:47 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-465706 ssh curl -s                                                                   | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:47 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:48 UTC | 17 Jun 24 10:48 UTC |
	|         | addons-465706                                                                               |                      |         |         |                     |                     |
	| addons  | addons-465706 addons                                                                        | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:48 UTC | 17 Jun 24 10:48 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-465706 addons                                                                        | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:48 UTC | 17 Jun 24 10:48 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-465706 ip                                                                            | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:49 UTC | 17 Jun 24 10:49 UTC |
	| addons  | addons-465706 addons disable                                                                | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:49 UTC | 17 Jun 24 10:49 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-465706 addons disable                                                                | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:49 UTC | 17 Jun 24 10:49 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-465706 addons                                                                        | addons-465706        | jenkins | v1.33.1 | 17 Jun 24 10:52 UTC | 17 Jun 24 10:52 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
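	For readability, the wrapped "start" entry in the Audit table above corresponds to a single invocation; reassembled from only the flags shown in that row (a sketch, line continuations added):

	    out/minikube-linux-amd64 start -p addons-465706 --wait=true --memory=4000 --alsologtostderr \
	      --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver \
	      --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher \
	      --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2 --container-runtime=crio \
	      --addons=ingress --addons=ingress-dns --addons=helm-tiller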
	
	==> Last Start <==
	Log file created at: 2024/06/17 10:44:27
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0617 10:44:27.955434  120744 out.go:291] Setting OutFile to fd 1 ...
	I0617 10:44:27.955608  120744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 10:44:27.955618  120744 out.go:304] Setting ErrFile to fd 2...
	I0617 10:44:27.955623  120744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 10:44:27.955818  120744 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 10:44:27.956449  120744 out.go:298] Setting JSON to false
	I0617 10:44:27.957418  120744 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":1615,"bootTime":1718619453,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0617 10:44:27.957481  120744 start.go:139] virtualization: kvm guest
	I0617 10:44:27.959489  120744 out.go:177] * [addons-465706] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0617 10:44:27.960639  120744 notify.go:220] Checking for updates...
	I0617 10:44:27.960647  120744 out.go:177]   - MINIKUBE_LOCATION=19084
	I0617 10:44:27.962147  120744 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 10:44:27.963411  120744 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 10:44:27.964894  120744 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 10:44:27.966317  120744 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0617 10:44:27.967418  120744 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 10:44:27.968881  120744 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 10:44:28.000585  120744 out.go:177] * Using the kvm2 driver based on user configuration
	I0617 10:44:28.001772  120744 start.go:297] selected driver: kvm2
	I0617 10:44:28.001787  120744 start.go:901] validating driver "kvm2" against <nil>
	I0617 10:44:28.001803  120744 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 10:44:28.002465  120744 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 10:44:28.002525  120744 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19084-112967/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0617 10:44:28.017207  120744 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0617 10:44:28.017258  120744 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 10:44:28.017507  120744 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 10:44:28.017536  120744 cni.go:84] Creating CNI manager for ""
	I0617 10:44:28.017543  120744 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 10:44:28.017549  120744 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0617 10:44:28.017604  120744 start.go:340] cluster config:
	{Name:addons-465706 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-465706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 10:44:28.017711  120744 iso.go:125] acquiring lock: {Name:mk4a199ad46ed9ee04de7b54caf7cc64218fe80c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 10:44:28.019300  120744 out.go:177] * Starting "addons-465706" primary control-plane node in "addons-465706" cluster
	I0617 10:44:28.020368  120744 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 10:44:28.020400  120744 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0617 10:44:28.020408  120744 cache.go:56] Caching tarball of preloaded images
	I0617 10:44:28.020482  120744 preload.go:173] Found /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0617 10:44:28.020492  120744 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0617 10:44:28.020826  120744 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/config.json ...
	I0617 10:44:28.020848  120744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/config.json: {Name:mkffc5f87639ab857d7a39c36743c03a7f1d71d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 10:44:28.020969  120744 start.go:360] acquireMachinesLock for addons-465706: {Name:mk519b8956d160a9d2b042f25b899a5ee0efa72e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 10:44:28.021010  120744 start.go:364] duration metric: took 28.888µs to acquireMachinesLock for "addons-465706"
	I0617 10:44:28.021026  120744 start.go:93] Provisioning new machine with config: &{Name:addons-465706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-465706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 10:44:28.021073  120744 start.go:125] createHost starting for "" (driver="kvm2")
	I0617 10:44:28.022439  120744 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0617 10:44:28.022562  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:44:28.022611  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:44:28.036677  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44957
	I0617 10:44:28.037174  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:44:28.037751  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:44:28.037772  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:44:28.038172  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:44:28.038397  120744 main.go:141] libmachine: (addons-465706) Calling .GetMachineName
	I0617 10:44:28.038557  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:44:28.038774  120744 start.go:159] libmachine.API.Create for "addons-465706" (driver="kvm2")
	I0617 10:44:28.038801  120744 client.go:168] LocalClient.Create starting
	I0617 10:44:28.038840  120744 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem
	I0617 10:44:28.381448  120744 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem
	I0617 10:44:28.499704  120744 main.go:141] libmachine: Running pre-create checks...
	I0617 10:44:28.499733  120744 main.go:141] libmachine: (addons-465706) Calling .PreCreateCheck
	I0617 10:44:28.500260  120744 main.go:141] libmachine: (addons-465706) Calling .GetConfigRaw
	I0617 10:44:28.500885  120744 main.go:141] libmachine: Creating machine...
	I0617 10:44:28.500899  120744 main.go:141] libmachine: (addons-465706) Calling .Create
	I0617 10:44:28.501059  120744 main.go:141] libmachine: (addons-465706) Creating KVM machine...
	I0617 10:44:28.502437  120744 main.go:141] libmachine: (addons-465706) DBG | found existing default KVM network
	I0617 10:44:28.503224  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:28.503082  120766 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015c10}
	I0617 10:44:28.503282  120744 main.go:141] libmachine: (addons-465706) DBG | created network xml: 
	I0617 10:44:28.503309  120744 main.go:141] libmachine: (addons-465706) DBG | <network>
	I0617 10:44:28.503353  120744 main.go:141] libmachine: (addons-465706) DBG |   <name>mk-addons-465706</name>
	I0617 10:44:28.503373  120744 main.go:141] libmachine: (addons-465706) DBG |   <dns enable='no'/>
	I0617 10:44:28.503380  120744 main.go:141] libmachine: (addons-465706) DBG |   
	I0617 10:44:28.503386  120744 main.go:141] libmachine: (addons-465706) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0617 10:44:28.503393  120744 main.go:141] libmachine: (addons-465706) DBG |     <dhcp>
	I0617 10:44:28.503398  120744 main.go:141] libmachine: (addons-465706) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0617 10:44:28.503404  120744 main.go:141] libmachine: (addons-465706) DBG |     </dhcp>
	I0617 10:44:28.503410  120744 main.go:141] libmachine: (addons-465706) DBG |   </ip>
	I0617 10:44:28.503414  120744 main.go:141] libmachine: (addons-465706) DBG |   
	I0617 10:44:28.503419  120744 main.go:141] libmachine: (addons-465706) DBG | </network>
	I0617 10:44:28.503427  120744 main.go:141] libmachine: (addons-465706) DBG | 
	I0617 10:44:28.508778  120744 main.go:141] libmachine: (addons-465706) DBG | trying to create private KVM network mk-addons-465706 192.168.39.0/24...
	I0617 10:44:28.575241  120744 main.go:141] libmachine: (addons-465706) DBG | private KVM network mk-addons-465706 192.168.39.0/24 created
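	For reference, the private network the driver just created can be inspected on the host with virsh; a minimal sketch, assuming the same qemu:///system connection the kvm2 driver uses:

	    # List libvirt networks and dump the one minikube defined
	    virsh --connect qemu:///system net-list --all
	    virsh --connect qemu:///system net-dumpxml mk-addons-465706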
	I0617 10:44:28.575274  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:28.575193  120766 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 10:44:28.575308  120744 main.go:141] libmachine: (addons-465706) Setting up store path in /home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706 ...
	I0617 10:44:28.575331  120744 main.go:141] libmachine: (addons-465706) Building disk image from file:///home/jenkins/minikube-integration/19084-112967/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso
	I0617 10:44:28.575359  120744 main.go:141] libmachine: (addons-465706) Downloading /home/jenkins/minikube-integration/19084-112967/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19084-112967/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso...
	I0617 10:44:28.813936  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:28.813743  120766 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa...
	I0617 10:44:28.942638  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:28.942504  120766 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/addons-465706.rawdisk...
	I0617 10:44:28.942666  120744 main.go:141] libmachine: (addons-465706) DBG | Writing magic tar header
	I0617 10:44:28.942745  120744 main.go:141] libmachine: (addons-465706) DBG | Writing SSH key tar header
	I0617 10:44:28.942793  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:28.942642  120766 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706 ...
	I0617 10:44:28.942823  120744 main.go:141] libmachine: (addons-465706) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706
	I0617 10:44:28.942844  120744 main.go:141] libmachine: (addons-465706) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706 (perms=drwx------)
	I0617 10:44:28.942876  120744 main.go:141] libmachine: (addons-465706) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube/machines
	I0617 10:44:28.942890  120744 main.go:141] libmachine: (addons-465706) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube/machines (perms=drwxr-xr-x)
	I0617 10:44:28.942903  120744 main.go:141] libmachine: (addons-465706) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube (perms=drwxr-xr-x)
	I0617 10:44:28.942912  120744 main.go:141] libmachine: (addons-465706) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967 (perms=drwxrwxr-x)
	I0617 10:44:28.942920  120744 main.go:141] libmachine: (addons-465706) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0617 10:44:28.942926  120744 main.go:141] libmachine: (addons-465706) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0617 10:44:28.942938  120744 main.go:141] libmachine: (addons-465706) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 10:44:28.942947  120744 main.go:141] libmachine: (addons-465706) Creating domain...
	I0617 10:44:28.942961  120744 main.go:141] libmachine: (addons-465706) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967
	I0617 10:44:28.942972  120744 main.go:141] libmachine: (addons-465706) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0617 10:44:28.942984  120744 main.go:141] libmachine: (addons-465706) DBG | Checking permissions on dir: /home/jenkins
	I0617 10:44:28.942995  120744 main.go:141] libmachine: (addons-465706) DBG | Checking permissions on dir: /home
	I0617 10:44:28.943005  120744 main.go:141] libmachine: (addons-465706) DBG | Skipping /home - not owner
	I0617 10:44:28.944173  120744 main.go:141] libmachine: (addons-465706) define libvirt domain using xml: 
	I0617 10:44:28.944216  120744 main.go:141] libmachine: (addons-465706) <domain type='kvm'>
	I0617 10:44:28.944226  120744 main.go:141] libmachine: (addons-465706)   <name>addons-465706</name>
	I0617 10:44:28.944235  120744 main.go:141] libmachine: (addons-465706)   <memory unit='MiB'>4000</memory>
	I0617 10:44:28.944241  120744 main.go:141] libmachine: (addons-465706)   <vcpu>2</vcpu>
	I0617 10:44:28.944245  120744 main.go:141] libmachine: (addons-465706)   <features>
	I0617 10:44:28.944251  120744 main.go:141] libmachine: (addons-465706)     <acpi/>
	I0617 10:44:28.944258  120744 main.go:141] libmachine: (addons-465706)     <apic/>
	I0617 10:44:28.944263  120744 main.go:141] libmachine: (addons-465706)     <pae/>
	I0617 10:44:28.944270  120744 main.go:141] libmachine: (addons-465706)     
	I0617 10:44:28.944275  120744 main.go:141] libmachine: (addons-465706)   </features>
	I0617 10:44:28.944281  120744 main.go:141] libmachine: (addons-465706)   <cpu mode='host-passthrough'>
	I0617 10:44:28.944286  120744 main.go:141] libmachine: (addons-465706)   
	I0617 10:44:28.944304  120744 main.go:141] libmachine: (addons-465706)   </cpu>
	I0617 10:44:28.944311  120744 main.go:141] libmachine: (addons-465706)   <os>
	I0617 10:44:28.944317  120744 main.go:141] libmachine: (addons-465706)     <type>hvm</type>
	I0617 10:44:28.944324  120744 main.go:141] libmachine: (addons-465706)     <boot dev='cdrom'/>
	I0617 10:44:28.944329  120744 main.go:141] libmachine: (addons-465706)     <boot dev='hd'/>
	I0617 10:44:28.944337  120744 main.go:141] libmachine: (addons-465706)     <bootmenu enable='no'/>
	I0617 10:44:28.944370  120744 main.go:141] libmachine: (addons-465706)   </os>
	I0617 10:44:28.944391  120744 main.go:141] libmachine: (addons-465706)   <devices>
	I0617 10:44:28.944405  120744 main.go:141] libmachine: (addons-465706)     <disk type='file' device='cdrom'>
	I0617 10:44:28.944421  120744 main.go:141] libmachine: (addons-465706)       <source file='/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/boot2docker.iso'/>
	I0617 10:44:28.944435  120744 main.go:141] libmachine: (addons-465706)       <target dev='hdc' bus='scsi'/>
	I0617 10:44:28.944446  120744 main.go:141] libmachine: (addons-465706)       <readonly/>
	I0617 10:44:28.944458  120744 main.go:141] libmachine: (addons-465706)     </disk>
	I0617 10:44:28.944470  120744 main.go:141] libmachine: (addons-465706)     <disk type='file' device='disk'>
	I0617 10:44:28.944496  120744 main.go:141] libmachine: (addons-465706)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0617 10:44:28.944518  120744 main.go:141] libmachine: (addons-465706)       <source file='/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/addons-465706.rawdisk'/>
	I0617 10:44:28.944531  120744 main.go:141] libmachine: (addons-465706)       <target dev='hda' bus='virtio'/>
	I0617 10:44:28.944542  120744 main.go:141] libmachine: (addons-465706)     </disk>
	I0617 10:44:28.944554  120744 main.go:141] libmachine: (addons-465706)     <interface type='network'>
	I0617 10:44:28.944567  120744 main.go:141] libmachine: (addons-465706)       <source network='mk-addons-465706'/>
	I0617 10:44:28.944579  120744 main.go:141] libmachine: (addons-465706)       <model type='virtio'/>
	I0617 10:44:28.944590  120744 main.go:141] libmachine: (addons-465706)     </interface>
	I0617 10:44:28.944601  120744 main.go:141] libmachine: (addons-465706)     <interface type='network'>
	I0617 10:44:28.944610  120744 main.go:141] libmachine: (addons-465706)       <source network='default'/>
	I0617 10:44:28.944616  120744 main.go:141] libmachine: (addons-465706)       <model type='virtio'/>
	I0617 10:44:28.944622  120744 main.go:141] libmachine: (addons-465706)     </interface>
	I0617 10:44:28.944631  120744 main.go:141] libmachine: (addons-465706)     <serial type='pty'>
	I0617 10:44:28.944638  120744 main.go:141] libmachine: (addons-465706)       <target port='0'/>
	I0617 10:44:28.944643  120744 main.go:141] libmachine: (addons-465706)     </serial>
	I0617 10:44:28.944653  120744 main.go:141] libmachine: (addons-465706)     <console type='pty'>
	I0617 10:44:28.944659  120744 main.go:141] libmachine: (addons-465706)       <target type='serial' port='0'/>
	I0617 10:44:28.944665  120744 main.go:141] libmachine: (addons-465706)     </console>
	I0617 10:44:28.944671  120744 main.go:141] libmachine: (addons-465706)     <rng model='virtio'>
	I0617 10:44:28.944679  120744 main.go:141] libmachine: (addons-465706)       <backend model='random'>/dev/random</backend>
	I0617 10:44:28.944684  120744 main.go:141] libmachine: (addons-465706)     </rng>
	I0617 10:44:28.944694  120744 main.go:141] libmachine: (addons-465706)     
	I0617 10:44:28.944699  120744 main.go:141] libmachine: (addons-465706)     
	I0617 10:44:28.944706  120744 main.go:141] libmachine: (addons-465706)   </devices>
	I0617 10:44:28.944724  120744 main.go:141] libmachine: (addons-465706) </domain>
	I0617 10:44:28.944741  120744 main.go:141] libmachine: (addons-465706) 
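	Once the domain XML above has been defined, the resulting definition and its two network interfaces can be checked with virsh as well; a sketch under the same qemu:///system assumption:

	    # Show the persisted domain definition and its attached interfaces
	    virsh --connect qemu:///system dumpxml addons-465706
	    virsh --connect qemu:///system domiflist addons-465706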
	I0617 10:44:28.950418  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:85:f6:97 in network default
	I0617 10:44:28.950926  120744 main.go:141] libmachine: (addons-465706) Ensuring networks are active...
	I0617 10:44:28.950972  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:28.951585  120744 main.go:141] libmachine: (addons-465706) Ensuring network default is active
	I0617 10:44:28.951897  120744 main.go:141] libmachine: (addons-465706) Ensuring network mk-addons-465706 is active
	I0617 10:44:28.955554  120744 main.go:141] libmachine: (addons-465706) Getting domain xml...
	I0617 10:44:28.956178  120744 main.go:141] libmachine: (addons-465706) Creating domain...
	I0617 10:44:30.304315  120744 main.go:141] libmachine: (addons-465706) Waiting to get IP...
	I0617 10:44:30.305032  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:30.305530  120744 main.go:141] libmachine: (addons-465706) DBG | unable to find current IP address of domain addons-465706 in network mk-addons-465706
	I0617 10:44:30.305558  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:30.305478  120766 retry.go:31] will retry after 205.154739ms: waiting for machine to come up
	I0617 10:44:30.511772  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:30.512203  120744 main.go:141] libmachine: (addons-465706) DBG | unable to find current IP address of domain addons-465706 in network mk-addons-465706
	I0617 10:44:30.512237  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:30.512148  120766 retry.go:31] will retry after 373.675802ms: waiting for machine to come up
	I0617 10:44:30.887876  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:30.888324  120744 main.go:141] libmachine: (addons-465706) DBG | unable to find current IP address of domain addons-465706 in network mk-addons-465706
	I0617 10:44:30.888350  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:30.888289  120766 retry.go:31] will retry after 304.632968ms: waiting for machine to come up
	I0617 10:44:31.194758  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:31.195188  120744 main.go:141] libmachine: (addons-465706) DBG | unable to find current IP address of domain addons-465706 in network mk-addons-465706
	I0617 10:44:31.195214  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:31.195138  120766 retry.go:31] will retry after 440.608798ms: waiting for machine to come up
	I0617 10:44:31.637691  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:31.638085  120744 main.go:141] libmachine: (addons-465706) DBG | unable to find current IP address of domain addons-465706 in network mk-addons-465706
	I0617 10:44:31.638118  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:31.638065  120766 retry.go:31] will retry after 717.121475ms: waiting for machine to come up
	I0617 10:44:32.357058  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:32.357539  120744 main.go:141] libmachine: (addons-465706) DBG | unable to find current IP address of domain addons-465706 in network mk-addons-465706
	I0617 10:44:32.357567  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:32.357475  120766 retry.go:31] will retry after 575.962657ms: waiting for machine to come up
	I0617 10:44:32.936828  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:32.937257  120744 main.go:141] libmachine: (addons-465706) DBG | unable to find current IP address of domain addons-465706 in network mk-addons-465706
	I0617 10:44:32.937289  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:32.937200  120766 retry.go:31] will retry after 765.587119ms: waiting for machine to come up
	I0617 10:44:33.704859  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:33.705362  120744 main.go:141] libmachine: (addons-465706) DBG | unable to find current IP address of domain addons-465706 in network mk-addons-465706
	I0617 10:44:33.705418  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:33.705334  120766 retry.go:31] will retry after 983.377485ms: waiting for machine to come up
	I0617 10:44:34.690431  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:34.690787  120744 main.go:141] libmachine: (addons-465706) DBG | unable to find current IP address of domain addons-465706 in network mk-addons-465706
	I0617 10:44:34.690811  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:34.690736  120766 retry.go:31] will retry after 1.699511808s: waiting for machine to come up
	I0617 10:44:36.391533  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:36.391970  120744 main.go:141] libmachine: (addons-465706) DBG | unable to find current IP address of domain addons-465706 in network mk-addons-465706
	I0617 10:44:36.392013  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:36.391924  120766 retry.go:31] will retry after 2.204970783s: waiting for machine to come up
	I0617 10:44:38.598427  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:38.598765  120744 main.go:141] libmachine: (addons-465706) DBG | unable to find current IP address of domain addons-465706 in network mk-addons-465706
	I0617 10:44:38.598814  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:38.598742  120766 retry.go:31] will retry after 2.728575827s: waiting for machine to come up
	I0617 10:44:41.328631  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:41.328974  120744 main.go:141] libmachine: (addons-465706) DBG | unable to find current IP address of domain addons-465706 in network mk-addons-465706
	I0617 10:44:41.328997  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:41.328923  120766 retry.go:31] will retry after 2.416284504s: waiting for machine to come up
	I0617 10:44:43.747002  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:43.747523  120744 main.go:141] libmachine: (addons-465706) DBG | unable to find current IP address of domain addons-465706 in network mk-addons-465706
	I0617 10:44:43.747559  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:43.747445  120766 retry.go:31] will retry after 3.42194274s: waiting for machine to come up
	I0617 10:44:47.173064  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:47.173527  120744 main.go:141] libmachine: (addons-465706) DBG | unable to find current IP address of domain addons-465706 in network mk-addons-465706
	I0617 10:44:47.173558  120744 main.go:141] libmachine: (addons-465706) DBG | I0617 10:44:47.173482  120766 retry.go:31] will retry after 4.529341226s: waiting for machine to come up
	I0617 10:44:51.707208  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:51.707707  120744 main.go:141] libmachine: (addons-465706) Found IP for machine: 192.168.39.165
	I0617 10:44:51.707729  120744 main.go:141] libmachine: (addons-465706) Reserving static IP address...
	I0617 10:44:51.707756  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has current primary IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:51.708099  120744 main.go:141] libmachine: (addons-465706) DBG | unable to find host DHCP lease matching {name: "addons-465706", mac: "52:54:00:56:ab:02", ip: "192.168.39.165"} in network mk-addons-465706
	I0617 10:44:51.778052  120744 main.go:141] libmachine: (addons-465706) DBG | Getting to WaitForSSH function...
	I0617 10:44:51.778089  120744 main.go:141] libmachine: (addons-465706) Reserved static IP address: 192.168.39.165
	I0617 10:44:51.778116  120744 main.go:141] libmachine: (addons-465706) Waiting for SSH to be available...
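	The retry loop above was waiting for the guest to obtain a DHCP lease on mk-addons-465706; the same information can be read directly from libvirt. A sketch, assuming qemu:///system and a libvirt recent enough to support domifaddr:

	    # DHCP leases handed out on the minikube network, and the addresses the domain reports
	    virsh --connect qemu:///system net-dhcp-leases mk-addons-465706
	    virsh --connect qemu:///system domifaddr addons-465706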
	I0617 10:44:51.780684  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:51.781009  120744 main.go:141] libmachine: (addons-465706) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706
	I0617 10:44:51.781029  120744 main.go:141] libmachine: (addons-465706) DBG | unable to find defined IP address of network mk-addons-465706 interface with MAC address 52:54:00:56:ab:02
	I0617 10:44:51.781194  120744 main.go:141] libmachine: (addons-465706) DBG | Using SSH client type: external
	I0617 10:44:51.781216  120744 main.go:141] libmachine: (addons-465706) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa (-rw-------)
	I0617 10:44:51.781278  120744 main.go:141] libmachine: (addons-465706) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 10:44:51.781304  120744 main.go:141] libmachine: (addons-465706) DBG | About to run SSH command:
	I0617 10:44:51.781328  120744 main.go:141] libmachine: (addons-465706) DBG | exit 0
	I0617 10:44:51.792008  120744 main.go:141] libmachine: (addons-465706) DBG | SSH cmd err, output: exit status 255: 
	I0617 10:44:51.792036  120744 main.go:141] libmachine: (addons-465706) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0617 10:44:51.792054  120744 main.go:141] libmachine: (addons-465706) DBG | command : exit 0
	I0617 10:44:51.792077  120744 main.go:141] libmachine: (addons-465706) DBG | err     : exit status 255
	I0617 10:44:51.792089  120744 main.go:141] libmachine: (addons-465706) DBG | output  : 
	I0617 10:44:54.793800  120744 main.go:141] libmachine: (addons-465706) DBG | Getting to WaitForSSH function...
	I0617 10:44:54.796150  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:54.796525  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:44:54.796554  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:54.796575  120744 main.go:141] libmachine: (addons-465706) DBG | Using SSH client type: external
	I0617 10:44:54.796610  120744 main.go:141] libmachine: (addons-465706) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa (-rw-------)
	I0617 10:44:54.796647  120744 main.go:141] libmachine: (addons-465706) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.165 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 10:44:54.796659  120744 main.go:141] libmachine: (addons-465706) DBG | About to run SSH command:
	I0617 10:44:54.796664  120744 main.go:141] libmachine: (addons-465706) DBG | exit 0
	I0617 10:44:54.919754  120744 main.go:141] libmachine: (addons-465706) DBG | SSH cmd err, output: <nil>: 
	I0617 10:44:54.920057  120744 main.go:141] libmachine: (addons-465706) KVM machine creation complete!
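	The WaitForSSH probe shown above amounts to running exit 0 over SSH with the generated key. A manual equivalent for debugging, built from the exact options in the log (a sketch):

	    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	        -o PasswordAuthentication=no -o ConnectTimeout=10 \
	        -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa \
	        docker@192.168.39.165 'exit 0' && echo ssh ready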
	I0617 10:44:54.920352  120744 main.go:141] libmachine: (addons-465706) Calling .GetConfigRaw
	I0617 10:44:54.920915  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:44:54.921154  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:44:54.921325  120744 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0617 10:44:54.921341  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:44:54.922475  120744 main.go:141] libmachine: Detecting operating system of created instance...
	I0617 10:44:54.922487  120744 main.go:141] libmachine: Waiting for SSH to be available...
	I0617 10:44:54.922493  120744 main.go:141] libmachine: Getting to WaitForSSH function...
	I0617 10:44:54.922499  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:44:54.924743  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:54.925084  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:44:54.925110  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:54.925285  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:44:54.925451  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:44:54.925599  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:44:54.925695  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:44:54.925832  120744 main.go:141] libmachine: Using SSH client type: native
	I0617 10:44:54.926051  120744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0617 10:44:54.926071  120744 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0617 10:44:55.026866  120744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 10:44:55.026895  120744 main.go:141] libmachine: Detecting the provisioner...
	I0617 10:44:55.026905  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:44:55.029529  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.029843  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:44:55.029901  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.029995  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:44:55.030192  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:44:55.030420  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:44:55.030559  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:44:55.030727  120744 main.go:141] libmachine: Using SSH client type: native
	I0617 10:44:55.030943  120744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0617 10:44:55.030956  120744 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0617 10:44:55.132335  120744 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0617 10:44:55.132405  120744 main.go:141] libmachine: found compatible host: buildroot
	I0617 10:44:55.132412  120744 main.go:141] libmachine: Provisioning with buildroot...
	I0617 10:44:55.132420  120744 main.go:141] libmachine: (addons-465706) Calling .GetMachineName
	I0617 10:44:55.132687  120744 buildroot.go:166] provisioning hostname "addons-465706"
	I0617 10:44:55.132712  120744 main.go:141] libmachine: (addons-465706) Calling .GetMachineName
	I0617 10:44:55.132897  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:44:55.135736  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.136157  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:44:55.136184  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.136328  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:44:55.136505  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:44:55.136680  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:44:55.136817  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:44:55.136986  120744 main.go:141] libmachine: Using SSH client type: native
	I0617 10:44:55.137151  120744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0617 10:44:55.137164  120744 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-465706 && echo "addons-465706" | sudo tee /etc/hostname
	I0617 10:44:55.253835  120744 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-465706
	
	I0617 10:44:55.253865  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:44:55.256633  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.257047  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:44:55.257078  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.257217  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:44:55.257430  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:44:55.257605  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:44:55.257768  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:44:55.257953  120744 main.go:141] libmachine: Using SSH client type: native
	I0617 10:44:55.258139  120744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0617 10:44:55.258162  120744 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-465706' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-465706/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-465706' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 10:44:55.368762  120744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 10:44:55.368799  120744 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 10:44:55.368825  120744 buildroot.go:174] setting up certificates
	I0617 10:44:55.368842  120744 provision.go:84] configureAuth start
	I0617 10:44:55.368854  120744 main.go:141] libmachine: (addons-465706) Calling .GetMachineName
	I0617 10:44:55.369138  120744 main.go:141] libmachine: (addons-465706) Calling .GetIP
	I0617 10:44:55.371766  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.372132  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:44:55.372160  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.372255  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:44:55.374409  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.374814  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:44:55.374841  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.374992  120744 provision.go:143] copyHostCerts
	I0617 10:44:55.375090  120744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 10:44:55.375233  120744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 10:44:55.375331  120744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 10:44:55.375400  120744 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.addons-465706 san=[127.0.0.1 192.168.39.165 addons-465706 localhost minikube]
	I0617 10:44:55.533485  120744 provision.go:177] copyRemoteCerts
	I0617 10:44:55.533547  120744 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 10:44:55.533575  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:44:55.536165  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.536487  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:44:55.536507  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.536709  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:44:55.536899  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:44:55.537011  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:44:55.537176  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:44:55.617634  120744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 10:44:55.641398  120744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0617 10:44:55.665130  120744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0617 10:44:55.688524  120744 provision.go:87] duration metric: took 319.667768ms to configureAuth
	I0617 10:44:55.688551  120744 buildroot.go:189] setting minikube options for container-runtime
	I0617 10:44:55.688736  120744 config.go:182] Loaded profile config "addons-465706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 10:44:55.688836  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:44:55.691442  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.691847  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:44:55.691871  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.692083  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:44:55.692275  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:44:55.692468  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:44:55.692569  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:44:55.692717  120744 main.go:141] libmachine: Using SSH client type: native
	I0617 10:44:55.692914  120744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0617 10:44:55.692930  120744 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 10:44:55.958144  120744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 10:44:55.958181  120744 main.go:141] libmachine: Checking connection to Docker...
	I0617 10:44:55.958192  120744 main.go:141] libmachine: (addons-465706) Calling .GetURL
	I0617 10:44:55.959665  120744 main.go:141] libmachine: (addons-465706) DBG | Using libvirt version 6000000
	I0617 10:44:55.961978  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.962303  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:44:55.962332  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.962499  120744 main.go:141] libmachine: Docker is up and running!
	I0617 10:44:55.962518  120744 main.go:141] libmachine: Reticulating splines...
	I0617 10:44:55.962528  120744 client.go:171] duration metric: took 27.923717949s to LocalClient.Create
	I0617 10:44:55.962556  120744 start.go:167] duration metric: took 27.923781269s to libmachine.API.Create "addons-465706"
	I0617 10:44:55.962649  120744 start.go:293] postStartSetup for "addons-465706" (driver="kvm2")
	I0617 10:44:55.962664  120744 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 10:44:55.962691  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:44:55.962982  120744 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 10:44:55.963011  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:44:55.965590  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.965916  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:44:55.965942  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:55.966140  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:44:55.966329  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:44:55.966481  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:44:55.966621  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:44:56.049906  120744 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 10:44:56.054363  120744 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 10:44:56.054388  120744 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 10:44:56.054468  120744 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 10:44:56.054494  120744 start.go:296] duration metric: took 91.836115ms for postStartSetup
	I0617 10:44:56.054540  120744 main.go:141] libmachine: (addons-465706) Calling .GetConfigRaw
	I0617 10:44:56.055139  120744 main.go:141] libmachine: (addons-465706) Calling .GetIP
	I0617 10:44:56.057965  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:56.058201  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:44:56.058228  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:56.058472  120744 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/config.json ...
	I0617 10:44:56.058710  120744 start.go:128] duration metric: took 28.037606315s to createHost
	I0617 10:44:56.058746  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:44:56.061067  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:56.061403  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:44:56.061432  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:56.061560  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:44:56.061764  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:44:56.061912  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:44:56.062055  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:44:56.062239  120744 main.go:141] libmachine: Using SSH client type: native
	I0617 10:44:56.062406  120744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0617 10:44:56.062420  120744 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0617 10:44:56.164239  120744 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718621096.143697356
	
	I0617 10:44:56.164268  120744 fix.go:216] guest clock: 1718621096.143697356
	I0617 10:44:56.164279  120744 fix.go:229] Guest: 2024-06-17 10:44:56.143697356 +0000 UTC Remote: 2024-06-17 10:44:56.058725836 +0000 UTC m=+28.138871245 (delta=84.97152ms)
	I0617 10:44:56.164328  120744 fix.go:200] guest clock delta is within tolerance: 84.97152ms
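	[editor's note] The clock check above runs `date +%s.%N` in the guest and compares the result against the host's wall clock, accepting the ~85ms delta. A small Go sketch of that comparison, assuming a one-second tolerance for illustration (the actual threshold is not shown in this log):

package main

import (
	"fmt"
	"time"
)

// clockWithinTolerance reports whether the absolute guest/host skew is
// within the given tolerance.
func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	guest := time.Unix(1718621096, 143697356) // parsed from the `date +%s.%N` output above
	host := time.Date(2024, 6, 17, 10, 44, 56, 58725836, time.UTC)
	fmt.Println(clockWithinTolerance(guest, host, time.Second)) // true, delta ≈ 85ms
}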
	I0617 10:44:56.164337  120744 start.go:83] releasing machines lock for "addons-465706", held for 28.143318845s
	I0617 10:44:56.164366  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:44:56.164690  120744 main.go:141] libmachine: (addons-465706) Calling .GetIP
	I0617 10:44:56.167163  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:56.167526  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:44:56.167557  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:56.167729  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:44:56.168332  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:44:56.168517  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:44:56.168611  120744 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 10:44:56.168654  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:44:56.168887  120744 ssh_runner.go:195] Run: cat /version.json
	I0617 10:44:56.168913  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:44:56.171110  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:56.171321  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:44:56.171341  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:56.171375  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:56.171546  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:44:56.171721  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:44:56.171800  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:44:56.171822  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:56.171901  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:44:56.171991  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:44:56.172079  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:44:56.172145  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:44:56.172265  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:44:56.172390  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:44:56.268437  120744 ssh_runner.go:195] Run: systemctl --version
	I0617 10:44:56.274781  120744 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 10:44:57.046751  120744 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 10:44:57.052913  120744 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 10:44:57.052990  120744 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 10:44:57.069060  120744 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 10:44:57.069087  120744 start.go:494] detecting cgroup driver to use...
	I0617 10:44:57.069159  120744 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 10:44:57.087645  120744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 10:44:57.101481  120744 docker.go:217] disabling cri-docker service (if available) ...
	I0617 10:44:57.101553  120744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 10:44:57.115242  120744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 10:44:57.128420  120744 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 10:44:57.253200  120744 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 10:44:57.393662  120744 docker.go:233] disabling docker service ...
	I0617 10:44:57.393755  120744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 10:44:57.408156  120744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 10:44:57.421104  120744 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 10:44:57.563098  120744 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 10:44:57.682462  120744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 10:44:57.696551  120744 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 10:44:57.714563  120744 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0617 10:44:57.714625  120744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 10:44:57.724700  120744 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 10:44:57.724764  120744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 10:44:57.735224  120744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 10:44:57.745962  120744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 10:44:57.757360  120744 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 10:44:57.768601  120744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 10:44:57.779979  120744 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 10:44:57.796928  120744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 10:44:57.807779  120744 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 10:44:57.817578  120744 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 10:44:57.817642  120744 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 10:44:57.832079  120744 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 10:44:57.841788  120744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 10:44:57.956248  120744 ssh_runner.go:195] Run: sudo systemctl restart crio
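	[editor's note] The sed edits above set the pause image, switch CRI-O to the cgroupfs cgroup manager, pin conmon to the "pod" cgroup and open unprivileged ports via default_sysctls before crio is restarted. Reconstructed from those commands (the TOML table names are an assumption; the keys and values come from the log), the drop-in ends up roughly like the string below. A Go sketch that would write such a drop-in:

package main

import "os"

// crioDropIn is inferred from the sed commands in the log, not read back
// from the guest's /etc/crio/crio.conf.d/02-crio.conf.
const crioDropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() {
	// 0644 mirrors a typical config-file mode; the real file is edited in place over SSH.
	if err := os.WriteFile("02-crio.conf", []byte(crioDropIn), 0o644); err != nil {
		panic(err)
	}
}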
	I0617 10:44:58.097349  120744 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 10:44:58.097433  120744 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 10:44:58.102260  120744 start.go:562] Will wait 60s for crictl version
	I0617 10:44:58.102312  120744 ssh_runner.go:195] Run: which crictl
	I0617 10:44:58.106040  120744 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 10:44:58.148483  120744 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 10:44:58.148590  120744 ssh_runner.go:195] Run: crio --version
	I0617 10:44:58.176834  120744 ssh_runner.go:195] Run: crio --version
	I0617 10:44:58.205310  120744 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0617 10:44:58.206553  120744 main.go:141] libmachine: (addons-465706) Calling .GetIP
	I0617 10:44:58.209081  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:58.209439  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:44:58.209461  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:44:58.209697  120744 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0617 10:44:58.213798  120744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 10:44:58.226993  120744 kubeadm.go:877] updating cluster {Name:addons-465706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
1 ClusterName:addons-465706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 10:44:58.227111  120744 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 10:44:58.227155  120744 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 10:44:58.260394  120744 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0617 10:44:58.260462  120744 ssh_runner.go:195] Run: which lz4
	I0617 10:44:58.264641  120744 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0617 10:44:58.268916  120744 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0617 10:44:58.268958  120744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0617 10:44:59.558872  120744 crio.go:462] duration metric: took 1.294255889s to copy over tarball
	I0617 10:44:59.558957  120744 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0617 10:45:01.763271  120744 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.204280076s)
	I0617 10:45:01.763309  120744 crio.go:469] duration metric: took 2.204402067s to extract the tarball
	I0617 10:45:01.763318  120744 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0617 10:45:01.800889  120744 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 10:45:01.846155  120744 crio.go:514] all images are preloaded for cri-o runtime.
	I0617 10:45:01.846179  120744 cache_images.go:84] Images are preloaded, skipping loading
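	[editor's note] The preload flow above boils down to: ask crictl for its image list, and only scp and extract the preloaded tarball when the expected kube image tag is missing; after extraction the same listing confirms the images are present. A hedged Go sketch of that probe (the JSON field names follow the CRI image-list output and are an assumption, not copied from minikube):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// hasImage runs `crictl images --output json` and looks for the given tag,
// mirroring the check logged above.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var resp struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}
	if err := json.Unmarshal(out, &resp); err != nil {
		return false, err
	}
	for _, img := range resp.Images {
		for _, t := range img.RepoTags {
			if strings.Contains(t, tag) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.30.1")
	fmt.Println(ok, err)
}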
	I0617 10:45:01.846187  120744 kubeadm.go:928] updating node { 192.168.39.165 8443 v1.30.1 crio true true} ...
	I0617 10:45:01.846322  120744 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-465706 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.165
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:addons-465706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 10:45:01.846407  120744 ssh_runner.go:195] Run: crio config
	I0617 10:45:01.889301  120744 cni.go:84] Creating CNI manager for ""
	I0617 10:45:01.889321  120744 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 10:45:01.889329  120744 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 10:45:01.889354  120744 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.165 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-465706 NodeName:addons-465706 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.165 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0617 10:45:01.889488  120744 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.165
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-465706"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.165
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.165"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0617 10:45:01.889547  120744 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0617 10:45:01.899332  120744 binaries.go:44] Found k8s binaries, skipping transfer
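	[editor's note] The kubeadm.yaml above is rendered by substituting the node name, node IP and API server settings from the kubeadm options into a fixed layout. A small, purely illustrative Go sketch of that substitution pattern using text/template (this is not minikube's actual template package; the values are the ones shown in the log):

package main

import (
	"os"
	"text/template"
)

// nodeRegistration mirrors the corresponding stanza of the rendered config above.
const nodeRegistration = `nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("nodeRegistration").Parse(nodeRegistration))
	_ = t.Execute(os.Stdout, struct{ NodeName, NodeIP string }{
		NodeName: "addons-465706",
		NodeIP:   "192.168.39.165",
	})
}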
	I0617 10:45:01.899386  120744 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 10:45:01.908503  120744 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0617 10:45:01.924576  120744 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 10:45:01.939875  120744 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0617 10:45:01.955318  120744 ssh_runner.go:195] Run: grep 192.168.39.165	control-plane.minikube.internal$ /etc/hosts
	I0617 10:45:01.958964  120744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.165	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 10:45:01.970306  120744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 10:45:02.078869  120744 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 10:45:02.095081  120744 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706 for IP: 192.168.39.165
	I0617 10:45:02.095101  120744 certs.go:194] generating shared ca certs ...
	I0617 10:45:02.095121  120744 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 10:45:02.095269  120744 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 10:45:02.166004  120744 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt ...
	I0617 10:45:02.166030  120744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt: {Name:mk05ceef74d4e62a72ea6e2eabb3e54836b27d2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 10:45:02.166204  120744 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key ...
	I0617 10:45:02.166220  120744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key: {Name:mk11edc4b54cd52f43e67d5f64d42e9343208d3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 10:45:02.166313  120744 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 10:45:02.235082  120744 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt ...
	I0617 10:45:02.235106  120744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt: {Name:mk414317308fd29ba2839574d731c10f47cab583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 10:45:02.235260  120744 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key ...
	I0617 10:45:02.235274  120744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key: {Name:mkb0fa198bc59f19bbe87709d3288e46a91894f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 10:45:02.235359  120744 certs.go:256] generating profile certs ...
	I0617 10:45:02.235416  120744 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.key
	I0617 10:45:02.235431  120744 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt with IP's: []
	I0617 10:45:02.481958  120744 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt ...
	I0617 10:45:02.481989  120744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: {Name:mk7c0a709e2e60ab172552160940e7190242fe69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 10:45:02.482144  120744 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.key ...
	I0617 10:45:02.482158  120744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.key: {Name:mkf5662ff64783b6014f1a106cf9b260e3453f81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 10:45:02.482220  120744 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/apiserver.key.5bd6be71
	I0617 10:45:02.482239  120744 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/apiserver.crt.5bd6be71 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.165]
	I0617 10:45:02.650228  120744 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/apiserver.crt.5bd6be71 ...
	I0617 10:45:02.650278  120744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/apiserver.crt.5bd6be71: {Name:mk42301b652784175eb87b0efaaae0c04bf791cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 10:45:02.650446  120744 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/apiserver.key.5bd6be71 ...
	I0617 10:45:02.650460  120744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/apiserver.key.5bd6be71: {Name:mka5edf10fca968b70b46b8f727dbbb6d8d96511 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 10:45:02.650527  120744 certs.go:381] copying /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/apiserver.crt.5bd6be71 -> /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/apiserver.crt
	I0617 10:45:02.650595  120744 certs.go:385] copying /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/apiserver.key.5bd6be71 -> /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/apiserver.key
	I0617 10:45:02.650639  120744 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/proxy-client.key
	I0617 10:45:02.650658  120744 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/proxy-client.crt with IP's: []
	I0617 10:45:02.692572  120744 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/proxy-client.crt ...
	I0617 10:45:02.692600  120744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/proxy-client.crt: {Name:mk908d43fb4d6a603e83e03920c7fc46fe3cf47c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 10:45:02.692733  120744 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/proxy-client.key ...
	I0617 10:45:02.692750  120744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/proxy-client.key: {Name:mkc706b1e3d2f91e03959ac5236a603305db9e4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 10:45:02.692910  120744 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 10:45:02.692946  120744 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 10:45:02.692969  120744 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 10:45:02.692990  120744 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 10:45:02.693603  120744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 10:45:02.720826  120744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 10:45:02.744656  120744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 10:45:02.772815  120744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 10:45:02.795374  120744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0617 10:45:02.819047  120744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 10:45:02.847757  120744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 10:45:02.872184  120744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0617 10:45:02.896804  120744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 10:45:02.920621  120744 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 10:45:02.937579  120744 ssh_runner.go:195] Run: openssl version
	I0617 10:45:02.943712  120744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 10:45:02.955080  120744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 10:45:02.959807  120744 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 10:45:03.079795  120744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 10:45:03.087035  120744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
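	[editor's note] certs.go above generates a fresh minikubeCA key pair, signs the profile and proxy-client certs against it, copies everything into /var/lib/minikube/certs, and finally installs the CA under /etc/ssl/certs using its OpenSSL subject-hash name (b5213941.0). The self-signed CA step relies on the standard crypto/x509 pattern; a minimal sketch of that pattern follows (key size, lifetime and subject here are illustrative, not minikube's exact values):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Generate the CA private key.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Self-signed CA template: the cert is both subject and issuer.
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	// Emit the PEM-encoded certificate (ca.crt) to stdout.
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}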
	I0617 10:45:03.098957  120744 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 10:45:03.103417  120744 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0617 10:45:03.103497  120744 kubeadm.go:391] StartCluster: {Name:addons-465706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 C
lusterName:addons-465706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 10:45:03.103576  120744 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 10:45:03.103623  120744 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 10:45:03.147141  120744 cri.go:89] found id: ""
	I0617 10:45:03.147222  120744 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0617 10:45:03.157784  120744 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 10:45:03.168596  120744 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 10:45:03.180831  120744 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 10:45:03.180855  120744 kubeadm.go:156] found existing configuration files:
	
	I0617 10:45:03.180901  120744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 10:45:03.190069  120744 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 10:45:03.190135  120744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 10:45:03.199654  120744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 10:45:03.209068  120744 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 10:45:03.209117  120744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 10:45:03.218464  120744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 10:45:03.227534  120744 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 10:45:03.227584  120744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 10:45:03.236887  120744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 10:45:03.245812  120744 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 10:45:03.245859  120744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 10:45:03.255195  120744 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0617 10:45:03.312146  120744 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0617 10:45:03.312238  120744 kubeadm.go:309] [preflight] Running pre-flight checks
	I0617 10:45:03.455955  120744 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0617 10:45:03.456084  120744 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0617 10:45:03.456197  120744 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0617 10:45:03.683404  120744 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0617 10:45:03.883911  120744 out.go:204]   - Generating certificates and keys ...
	I0617 10:45:03.884040  120744 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0617 10:45:03.884120  120744 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0617 10:45:03.884222  120744 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0617 10:45:03.913034  120744 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0617 10:45:04.148606  120744 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0617 10:45:04.227813  120744 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0617 10:45:04.293558  120744 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0617 10:45:04.293757  120744 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-465706 localhost] and IPs [192.168.39.165 127.0.0.1 ::1]
	I0617 10:45:04.399986  120744 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0617 10:45:04.400140  120744 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-465706 localhost] and IPs [192.168.39.165 127.0.0.1 ::1]
	I0617 10:45:04.771017  120744 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0617 10:45:04.877740  120744 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0617 10:45:05.021969  120744 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0617 10:45:05.022112  120744 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0617 10:45:05.396115  120744 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0617 10:45:05.602440  120744 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0617 10:45:05.745035  120744 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0617 10:45:05.968827  120744 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0617 10:45:06.278574  120744 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0617 10:45:06.279088  120744 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0617 10:45:06.281441  120744 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0617 10:45:06.283343  120744 out.go:204]   - Booting up control plane ...
	I0617 10:45:06.283434  120744 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0617 10:45:06.283509  120744 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0617 10:45:06.283574  120744 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0617 10:45:06.297749  120744 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0617 10:45:06.300613  120744 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0617 10:45:06.300806  120744 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0617 10:45:06.424104  120744 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0617 10:45:06.424223  120744 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0617 10:45:07.425125  120744 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001614224s
	I0617 10:45:07.425216  120744 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0617 10:45:12.426804  120744 kubeadm.go:309] [api-check] The API server is healthy after 5.001996276s
	I0617 10:45:12.442714  120744 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0617 10:45:12.454111  120744 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0617 10:45:12.478741  120744 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0617 10:45:12.479001  120744 kubeadm.go:309] [mark-control-plane] Marking the node addons-465706 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0617 10:45:12.489912  120744 kubeadm.go:309] [bootstrap-token] Using token: 9a03xm.uzy79wsae0xvy118
	I0617 10:45:12.491274  120744 out.go:204]   - Configuring RBAC rules ...
	I0617 10:45:12.491362  120744 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0617 10:45:12.499494  120744 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0617 10:45:12.505342  120744 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0617 10:45:12.508264  120744 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0617 10:45:12.511032  120744 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0617 10:45:12.513720  120744 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0617 10:45:12.831896  120744 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0617 10:45:13.266355  120744 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0617 10:45:13.836581  120744 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0617 10:45:13.836617  120744 kubeadm.go:309] 
	I0617 10:45:13.836688  120744 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0617 10:45:13.836702  120744 kubeadm.go:309] 
	I0617 10:45:13.836825  120744 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0617 10:45:13.836835  120744 kubeadm.go:309] 
	I0617 10:45:13.836879  120744 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0617 10:45:13.836980  120744 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0617 10:45:13.837069  120744 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0617 10:45:13.837079  120744 kubeadm.go:309] 
	I0617 10:45:13.837147  120744 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0617 10:45:13.837156  120744 kubeadm.go:309] 
	I0617 10:45:13.837210  120744 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0617 10:45:13.837220  120744 kubeadm.go:309] 
	I0617 10:45:13.837288  120744 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0617 10:45:13.837370  120744 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0617 10:45:13.837464  120744 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0617 10:45:13.837475  120744 kubeadm.go:309] 
	I0617 10:45:13.837576  120744 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0617 10:45:13.837672  120744 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0617 10:45:13.837681  120744 kubeadm.go:309] 
	I0617 10:45:13.837786  120744 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 9a03xm.uzy79wsae0xvy118 \
	I0617 10:45:13.837920  120744 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a750c130b3df91ed6d57229f5a5d5a2ee0acd56a757f499599f368bc07dbf207 \
	I0617 10:45:13.837959  120744 kubeadm.go:309] 	--control-plane 
	I0617 10:45:13.837969  120744 kubeadm.go:309] 
	I0617 10:45:13.838065  120744 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0617 10:45:13.838073  120744 kubeadm.go:309] 
	I0617 10:45:13.838180  120744 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 9a03xm.uzy79wsae0xvy118 \
	I0617 10:45:13.838304  120744 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a750c130b3df91ed6d57229f5a5d5a2ee0acd56a757f499599f368bc07dbf207 
	I0617 10:45:13.838529  120744 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0617 10:45:13.838601  120744 cni.go:84] Creating CNI manager for ""
	I0617 10:45:13.838618  120744 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 10:45:13.840239  120744 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0617 10:45:13.841661  120744 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0617 10:45:13.852360  120744 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0617 10:45:13.870460  120744 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0617 10:45:13.870520  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:13.870576  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-465706 minikube.k8s.io/updated_at=2024_06_17T10_45_13_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6 minikube.k8s.io/name=addons-465706 minikube.k8s.io/primary=true
	I0617 10:45:13.980447  120744 ops.go:34] apiserver oom_adj: -16
	I0617 10:45:13.980522  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:14.481462  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:14.980678  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:15.481152  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:15.980649  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:16.481345  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:16.980648  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:17.481591  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:17.980612  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:18.481523  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:18.981455  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:19.481527  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:19.980752  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:20.481015  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:20.981371  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:21.480673  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:21.980816  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:22.481586  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:22.980872  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:23.481189  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:23.980900  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:24.480683  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:24.980975  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:25.481022  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:25.981046  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:26.480991  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:26.981019  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:27.480820  120744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 10:45:27.581312  120744 kubeadm.go:1107] duration metric: took 13.710837272s to wait for elevateKubeSystemPrivileges
	W0617 10:45:27.581373  120744 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0617 10:45:27.581387  120744 kubeadm.go:393] duration metric: took 24.477895643s to StartCluster
	I0617 10:45:27.581413  120744 settings.go:142] acquiring lock: {Name:mkf6da6d5dcdf32cef469c2b75da17d11fa1e39e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 10:45:27.581583  120744 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 10:45:27.581983  120744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/kubeconfig: {Name:mkf81bd1831c0194f784e5c176b265c5061bea5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 10:45:27.582247  120744 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0617 10:45:27.582261  120744 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 10:45:27.584685  120744 out.go:177] * Verifying Kubernetes components...
	I0617 10:45:27.582339  120744 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0617 10:45:27.582479  120744 config.go:182] Loaded profile config "addons-465706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 10:45:27.585906  120744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 10:45:27.585918  120744 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-465706"
	I0617 10:45:27.585925  120744 addons.go:69] Setting ingress-dns=true in profile "addons-465706"
	I0617 10:45:27.585935  120744 addons.go:69] Setting yakd=true in profile "addons-465706"
	I0617 10:45:27.585957  120744 addons.go:234] Setting addon ingress-dns=true in "addons-465706"
	I0617 10:45:27.585963  120744 addons.go:69] Setting inspektor-gadget=true in profile "addons-465706"
	I0617 10:45:27.585971  120744 addons.go:69] Setting storage-provisioner=true in profile "addons-465706"
	I0617 10:45:27.585982  120744 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-465706"
	I0617 10:45:27.585990  120744 addons.go:69] Setting metrics-server=true in profile "addons-465706"
	I0617 10:45:27.585990  120744 addons.go:69] Setting default-storageclass=true in profile "addons-465706"
	I0617 10:45:27.585985  120744 addons.go:69] Setting gcp-auth=true in profile "addons-465706"
	I0617 10:45:27.586017  120744 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-465706"
	I0617 10:45:27.586020  120744 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-465706"
	I0617 10:45:27.586024  120744 addons.go:69] Setting ingress=true in profile "addons-465706"
	I0617 10:45:27.586027  120744 addons.go:69] Setting volcano=true in profile "addons-465706"
	I0617 10:45:27.586035  120744 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-465706"
	I0617 10:45:27.586041  120744 addons.go:234] Setting addon ingress=true in "addons-465706"
	I0617 10:45:27.585966  120744 addons.go:69] Setting registry=true in profile "addons-465706"
	I0617 10:45:27.586046  120744 addons.go:234] Setting addon volcano=true in "addons-465706"
	I0617 10:45:27.586050  120744 addons.go:69] Setting volumesnapshots=true in profile "addons-465706"
	I0617 10:45:27.586058  120744 addons.go:234] Setting addon registry=true in "addons-465706"
	I0617 10:45:27.586064  120744 host.go:66] Checking if "addons-465706" exists ...
	I0617 10:45:27.586067  120744 host.go:66] Checking if "addons-465706" exists ...
	I0617 10:45:27.586072  120744 addons.go:234] Setting addon volumesnapshots=true in "addons-465706"
	I0617 10:45:27.586087  120744 host.go:66] Checking if "addons-465706" exists ...
	I0617 10:45:27.586096  120744 host.go:66] Checking if "addons-465706" exists ...
	I0617 10:45:27.586111  120744 host.go:66] Checking if "addons-465706" exists ...
	I0617 10:45:27.585957  120744 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-465706"
	I0617 10:45:27.586213  120744 host.go:66] Checking if "addons-465706" exists ...
	I0617 10:45:27.586543  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.586556  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.586037  120744 mustload.go:65] Loading cluster: addons-465706
	I0617 10:45:27.586584  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.586590  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.586610  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.586675  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.586543  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.586721  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.586745  120744 config.go:182] Loaded profile config "addons-465706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 10:45:27.585908  120744 addons.go:69] Setting cloud-spanner=true in profile "addons-465706"
	I0617 10:45:27.586781  120744 addons.go:234] Setting addon cloud-spanner=true in "addons-465706"
	I0617 10:45:27.586012  120744 addons.go:234] Setting addon storage-provisioner=true in "addons-465706"
	I0617 10:45:27.585984  120744 addons.go:234] Setting addon inspektor-gadget=true in "addons-465706"
	I0617 10:45:27.585960  120744 addons.go:234] Setting addon yakd=true in "addons-465706"
	I0617 10:45:27.586012  120744 host.go:66] Checking if "addons-465706" exists ...
	I0617 10:45:27.586832  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.586018  120744 addons.go:69] Setting helm-tiller=true in profile "addons-465706"
	I0617 10:45:27.586865  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.586873  120744 addons.go:234] Setting addon helm-tiller=true in "addons-465706"
	I0617 10:45:27.586949  120744 host.go:66] Checking if "addons-465706" exists ...
	I0617 10:45:27.587071  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.587071  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.587087  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.587092  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.586013  120744 addons.go:234] Setting addon metrics-server=true in "addons-465706"
	I0617 10:45:27.587165  120744 host.go:66] Checking if "addons-465706" exists ...
	I0617 10:45:27.586544  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.587195  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.586541  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.587244  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.587265  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.586036  120744 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-465706"
	I0617 10:45:27.587293  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.587513  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.587532  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.587576  120744 host.go:66] Checking if "addons-465706" exists ...
	I0617 10:45:27.587615  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.587633  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.587807  120744 host.go:66] Checking if "addons-465706" exists ...
	I0617 10:45:27.587864  120744 host.go:66] Checking if "addons-465706" exists ...
	I0617 10:45:27.588656  120744 host.go:66] Checking if "addons-465706" exists ...
	I0617 10:45:27.589033  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.589065  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.606392  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39971
	I0617 10:45:27.606409  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37481
	I0617 10:45:27.606756  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46713
	I0617 10:45:27.606518  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35287
	I0617 10:45:27.607184  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.607374  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.607526  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.607647  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.608153  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.608167  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.608179  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.608184  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.608260  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36011
	I0617 10:45:27.608260  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.608553  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.608557  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.608744  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.608979  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.609231  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.609306  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.609327  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.609388  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.609431  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.609450  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.609466  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.609605  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.609626  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.609698  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.609912  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.619954  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.619994  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.620409  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.620460  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.621281  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.621329  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.623557  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.623615  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.623724  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.623773  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.619958  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.623808  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.627687  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46595
	I0617 10:45:27.628191  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.628745  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.628765  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.629249  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.629712  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.631582  120744 host.go:66] Checking if "addons-465706" exists ...
	I0617 10:45:27.631998  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.632047  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.655714  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43883
	I0617 10:45:27.656298  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.656842  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.656862  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.657305  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.657988  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.658018  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.660351  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42157
	I0617 10:45:27.661026  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.661725  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.661743  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.662160  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.662747  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.662773  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.663027  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34767
	I0617 10:45:27.663528  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.663956  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.663971  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.664337  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.664908  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.664951  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.666009  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40531
	I0617 10:45:27.666555  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.667073  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.667097  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.667562  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.667793  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.670988  120744 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-465706"
	I0617 10:45:27.671028  120744 host.go:66] Checking if "addons-465706" exists ...
	I0617 10:45:27.671267  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.671301  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.673896  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44161
	I0617 10:45:27.674706  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.674761  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42641
	I0617 10:45:27.675364  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.675382  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.675512  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.675971  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.676063  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.676080  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.676162  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.676494  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.676725  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.679238  120744 addons.go:234] Setting addon default-storageclass=true in "addons-465706"
	I0617 10:45:27.679281  120744 host.go:66] Checking if "addons-465706" exists ...
	I0617 10:45:27.679561  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:45:27.679821  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40973
	I0617 10:45:27.679966  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:27.679976  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:27.680354  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.680381  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.682261  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32973
	I0617 10:45:27.682391  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43593
	I0617 10:45:27.682494  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.682581  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:27.682611  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:27.682621  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:27.682632  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:27.682640  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:27.683022  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.683120  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:27.683142  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:27.683151  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:27.683223  120744 main.go:141] libmachine: () Calling .GetVersion
	W0617 10:45:27.683239  120744 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0617 10:45:27.683392  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38569
	I0617 10:45:27.683578  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.683603  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.683897  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.683907  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.683918  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.683925  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.683991  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45749
	I0617 10:45:27.684110  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.684263  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.684325  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.684518  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.684791  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.684805  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.684904  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.684929  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.685126  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.685157  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.685464  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.685484  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.685533  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42701
	I0617 10:45:27.685869  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.685930  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.685945  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.685946  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40807
	I0617 10:45:27.686064  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.686231  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.686325  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.686372  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.686724  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.686743  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.686880  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.686892  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.687001  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.687038  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.687510  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.687572  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.687698  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.689569  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:45:27.689626  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:45:27.691713  120744 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0617 10:45:27.690342  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:45:27.691224  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:45:27.691872  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:45:27.692697  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45069
	I0617 10:45:27.692853  120744 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0617 10:45:27.693044  120744 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0617 10:45:27.693066  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:45:27.693100  120744 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0617 10:45:27.694292  120744 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0617 10:45:27.695574  120744 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0617 10:45:27.695538  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.695545  120744 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0617 10:45:27.698734  120744 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0617 10:45:27.697297  120744 out.go:177]   - Using image docker.io/registry:2.8.3
	I0617 10:45:27.697362  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.697495  120744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0617 10:45:27.697506  120744 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0617 10:45:27.698105  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:45:27.698536  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.699877  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.700187  120744 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0617 10:45:27.700200  120744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0617 10:45:27.700217  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:45:27.700305  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:45:27.700327  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:45:27.701317  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:45:27.701360  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.701362  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39925
	I0617 10:45:27.702544  120744 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0617 10:45:27.702566  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.703174  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38193
	I0617 10:45:27.703199  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38513
	I0617 10:45:27.704263  120744 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0617 10:45:27.704275  120744 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0617 10:45:27.704296  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:45:27.705216  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:45:27.706668  120744 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0617 10:45:27.706681  120744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0617 10:45:27.705530  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.706699  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:45:27.705985  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.706028  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.706751  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.708208  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.708196  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:45:27.708354  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.708369  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.708505  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.708516  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.708963  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.709035  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.709086  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40993
	I0617 10:45:27.709621  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.709658  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.709709  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.709737  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:45:27.709759  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.709800  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:45:27.709836  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.710137  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:45:27.710158  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:45:27.710177  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.710205  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:45:27.710447  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:45:27.710479  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.710504  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:45:27.710669  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:45:27.712313  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.712330  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:45:27.712346  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:45:27.712504  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:45:27.712523  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:45:27.712645  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:45:27.712669  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:45:27.712707  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:45:27.712722  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.712786  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:45:27.713028  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:45:27.713067  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:45:27.713259  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:45:27.713475  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:45:27.713695  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.713738  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.713763  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:45:27.714276  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.714297  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.714438  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.714457  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.714675  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.714820  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.715221  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.715248  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.715897  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.715932  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.717406  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45003
	I0617 10:45:27.717835  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.717895  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42673
	I0617 10:45:27.718011  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42685
	I0617 10:45:27.718284  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.718381  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46471
	I0617 10:45:27.718439  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.718514  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.718529  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.718934  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.718952  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.718987  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.719004  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.719024  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.719474  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.719501  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.719651  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.719866  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.720171  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.720366  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.721286  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.721600  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.721966  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:45:27.723814  120744 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.29.0
	I0617 10:45:27.724889  120744 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0617 10:45:27.722983  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:45:27.724916  120744 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0617 10:45:27.724937  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:45:27.723935  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:45:27.724017  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.728391  120744 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 10:45:27.728431  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.729588  120744 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 10:45:27.729603  120744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0617 10:45:27.729604  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:45:27.729621  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:45:27.729627  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.725886  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:27.729664  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:27.728467  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40983
	I0617 10:45:27.731477  120744 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0617 10:45:27.729077  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:45:27.730608  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.732701  120744 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0617 10:45:27.732715  120744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0617 10:45:27.732733  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:45:27.732883  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:45:27.733108  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.733136  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:45:27.733322  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:45:27.733446  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.733462  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.733993  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.733949  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:45:27.734026  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.734276  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.734513  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:45:27.735070  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:45:27.735261  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:45:27.735776  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:45:27.736398  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40871
	I0617 10:45:27.736614  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:45:27.738113  120744 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0617 10:45:27.736975  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.737031  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.737584  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:45:27.739350  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:45:27.739371  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.740738  120744 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0617 10:45:27.739557  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:45:27.739955  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.741753  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.743089  120744 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0617 10:45:27.741992  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:45:27.742358  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.745794  120744 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0617 10:45:27.746894  120744 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0617 10:45:27.744853  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.745029  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:45:27.745407  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37519
	I0617 10:45:27.746011  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46855
	I0617 10:45:27.749466  120744 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0617 10:45:27.748431  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.749197  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.749984  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:45:27.750429  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41207
	I0617 10:45:27.752071  120744 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0617 10:45:27.751187  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.751245  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.751647  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.753053  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37493
	I0617 10:45:27.753363  120744 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0617 10:45:27.754493  120744 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0617 10:45:27.754508  120744 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0617 10:45:27.754522  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:45:27.753436  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.755945  120744 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0617 10:45:27.753450  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.753751  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:27.754024  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.754932  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.757110  120744 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0617 10:45:27.757119  120744 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0617 10:45:27.757133  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:45:27.757194  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.757286  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.757534  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.757747  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:27.757766  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:27.757826  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.758211  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.758214  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.758219  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:27.758469  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.758490  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:27.758809  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:45:27.758827  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.759004  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:45:27.759188  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:45:27.759319  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:45:27.759486  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:45:27.760785  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:45:27.760956  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:45:27.762385  120744 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0617 10:45:27.761330  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:45:27.761468  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:45:27.762471  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.763137  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:45:27.763500  120744 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0617 10:45:27.764612  120744 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0617 10:45:27.764628  120744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0617 10:45:27.763579  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:45:27.763698  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:45:27.764668  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.763743  120744 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0617 10:45:27.764686  120744 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0617 10:45:27.764700  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:45:27.765807  120744 out.go:177]   - Using image docker.io/busybox:stable
	I0617 10:45:27.763938  120744 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0617 10:45:27.764640  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:45:27.764880  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:45:27.765827  120744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0617 10:45:27.766933  120744 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0617 10:45:27.766941  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:45:27.768081  120744 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0617 10:45:27.767129  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:45:27.768093  120744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0617 10:45:27.768112  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:45:27.767578  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.768171  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:45:27.768188  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.768215  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:45:27.768376  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:45:27.768545  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:45:27.768691  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:45:27.771405  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.771598  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.771781  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:45:27.771799  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.772002  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:45:27.772024  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:45:27.772039  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.772175  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:45:27.772190  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:45:27.772309  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:45:27.772350  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:45:27.772493  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:45:27.772513  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:45:27.772623  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:45:27.773521  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.773878  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:45:27.773898  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:27.774151  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:45:27.774302  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:45:27.774488  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:45:27.774594  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	W0617 10:45:27.805981  120744 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:52944->192.168.39.165:22: read: connection reset by peer
	I0617 10:45:27.806036  120744 retry.go:31] will retry after 281.511115ms: ssh: handshake failed: read tcp 192.168.39.1:52944->192.168.39.165:22: read: connection reset by peer
	I0617 10:45:28.118610  120744 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0617 10:45:28.118646  120744 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0617 10:45:28.135215  120744 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0617 10:45:28.135236  120744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0617 10:45:28.139363  120744 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 10:45:28.139437  120744 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0617 10:45:28.159781  120744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0617 10:45:28.174650  120744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0617 10:45:28.191572  120744 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0617 10:45:28.191602  120744 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0617 10:45:28.193944  120744 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0617 10:45:28.193982  120744 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0617 10:45:28.304689  120744 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0617 10:45:28.304726  120744 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0617 10:45:28.317938  120744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0617 10:45:28.327836  120744 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0617 10:45:28.327863  120744 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0617 10:45:28.328880  120744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0617 10:45:28.335771  120744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 10:45:28.348758  120744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0617 10:45:28.349860  120744 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0617 10:45:28.349876  120744 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0617 10:45:28.364020  120744 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0617 10:45:28.364043  120744 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0617 10:45:28.381337  120744 node_ready.go:35] waiting up to 6m0s for node "addons-465706" to be "Ready" ...
	I0617 10:45:28.389046  120744 node_ready.go:49] node "addons-465706" has status "Ready":"True"
	I0617 10:45:28.389073  120744 node_ready.go:38] duration metric: took 7.677571ms for node "addons-465706" to be "Ready" ...
	I0617 10:45:28.389082  120744 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 10:45:28.406187  120744 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9sbdk" in "kube-system" namespace to be "Ready" ...
	I0617 10:45:28.489151  120744 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0617 10:45:28.489175  120744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0617 10:45:28.507761  120744 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0617 10:45:28.507796  120744 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0617 10:45:28.511907  120744 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0617 10:45:28.511935  120744 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0617 10:45:28.517099  120744 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0617 10:45:28.517124  120744 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0617 10:45:28.531200  120744 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0617 10:45:28.531222  120744 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0617 10:45:28.535806  120744 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0617 10:45:28.535828  120744 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0617 10:45:28.557639  120744 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 10:45:28.557663  120744 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0617 10:45:28.647333  120744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0617 10:45:28.691269  120744 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0617 10:45:28.691295  120744 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0617 10:45:28.705978  120744 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0617 10:45:28.706005  120744 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0617 10:45:28.741737  120744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 10:45:28.750823  120744 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0617 10:45:28.750846  120744 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0617 10:45:28.757868  120744 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0617 10:45:28.757894  120744 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0617 10:45:28.759033  120744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0617 10:45:28.761378  120744 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0617 10:45:28.761408  120744 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0617 10:45:28.868415  120744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0617 10:45:28.881735  120744 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0617 10:45:28.881771  120744 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0617 10:45:28.898382  120744 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0617 10:45:28.898409  120744 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0617 10:45:28.930274  120744 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0617 10:45:28.930295  120744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0617 10:45:28.949120  120744 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0617 10:45:28.949155  120744 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0617 10:45:29.087570  120744 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0617 10:45:29.087603  120744 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0617 10:45:29.099746  120744 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0617 10:45:29.099769  120744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0617 10:45:29.311191  120744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0617 10:45:29.375401  120744 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0617 10:45:29.375436  120744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0617 10:45:29.461092  120744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0617 10:45:29.497969  120744 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0617 10:45:29.498010  120744 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0617 10:45:29.688595  120744 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0617 10:45:29.688629  120744 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0617 10:45:29.834080  120744 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0617 10:45:29.834104  120744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0617 10:45:29.974245  120744 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0617 10:45:29.974277  120744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0617 10:45:30.074855  120744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0617 10:45:30.299271  120744 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0617 10:45:30.299299  120744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0617 10:45:30.413329  120744 pod_ready.go:102] pod "coredns-7db6d8ff4d-9sbdk" in "kube-system" namespace has status "Ready":"False"
	I0617 10:45:30.456893  120744 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.317416749s)
	I0617 10:45:30.456940  120744 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0617 10:45:30.456956  120744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.297132023s)
	I0617 10:45:30.457028  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:30.457047  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:30.457324  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:30.457343  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:30.457358  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:30.457367  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:30.457619  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:30.457625  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:30.457655  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:30.466868  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:30.466883  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:30.467164  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:30.467209  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:30.467221  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:30.590558  120744 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0617 10:45:30.590586  120744 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0617 10:45:30.978275  120744 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-465706" context rescaled to 1 replicas
	I0617 10:45:30.985515  120744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0617 10:45:32.598810  120744 pod_ready.go:102] pod "coredns-7db6d8ff4d-9sbdk" in "kube-system" namespace has status "Ready":"False"
	I0617 10:45:34.910815  120744 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0617 10:45:34.910867  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:45:34.914216  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:34.914693  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:45:34.914720  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:34.914964  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:45:34.915208  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:45:34.915405  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:45:34.915617  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:45:35.066955  120744 pod_ready.go:102] pod "coredns-7db6d8ff4d-9sbdk" in "kube-system" namespace has status "Ready":"False"
	I0617 10:45:35.611062  120744 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0617 10:45:35.854850  120744 addons.go:234] Setting addon gcp-auth=true in "addons-465706"
	I0617 10:45:35.854922  120744 host.go:66] Checking if "addons-465706" exists ...
	I0617 10:45:35.855245  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:35.855275  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:35.870956  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44387
	I0617 10:45:35.871495  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:35.871992  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:35.872019  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:35.872370  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:35.872949  120744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:45:35.872985  120744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:45:35.889103  120744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33555
	I0617 10:45:35.889635  120744 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:45:35.890135  120744 main.go:141] libmachine: Using API Version  1
	I0617 10:45:35.890162  120744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:45:35.890548  120744 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:45:35.890746  120744 main.go:141] libmachine: (addons-465706) Calling .GetState
	I0617 10:45:35.892256  120744 main.go:141] libmachine: (addons-465706) Calling .DriverName
	I0617 10:45:35.892528  120744 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0617 10:45:35.892554  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHHostname
	I0617 10:45:35.895733  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:35.896137  120744 main.go:141] libmachine: (addons-465706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ab:02", ip: ""} in network mk-addons-465706: {Iface:virbr1 ExpiryTime:2024-06-17 11:44:43 +0000 UTC Type:0 Mac:52:54:00:56:ab:02 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:addons-465706 Clientid:01:52:54:00:56:ab:02}
	I0617 10:45:35.896163  120744 main.go:141] libmachine: (addons-465706) DBG | domain addons-465706 has defined IP address 192.168.39.165 and MAC address 52:54:00:56:ab:02 in network mk-addons-465706
	I0617 10:45:35.896341  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHPort
	I0617 10:45:35.896547  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHKeyPath
	I0617 10:45:35.896714  120744 main.go:141] libmachine: (addons-465706) Calling .GetSSHUsername
	I0617 10:45:35.896866  120744 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/addons-465706/id_rsa Username:docker}
	I0617 10:45:35.913304  120744 pod_ready.go:92] pod "coredns-7db6d8ff4d-9sbdk" in "kube-system" namespace has status "Ready":"True"
	I0617 10:45:35.913330  120744 pod_ready.go:81] duration metric: took 7.507114128s for pod "coredns-7db6d8ff4d-9sbdk" in "kube-system" namespace to be "Ready" ...
	I0617 10:45:35.913344  120744 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mdcv2" in "kube-system" namespace to be "Ready" ...
	I0617 10:45:36.232000  120744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.057299612s)
	I0617 10:45:36.232054  120744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.914065757s)
	I0617 10:45:36.232065  120744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.903158489s)
	I0617 10:45:36.232098  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.232061  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.232110  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.232118  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.232135  120744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.896345286s)
	I0617 10:45:36.232159  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.232167  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.232097  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.232184  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.232233  120744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.88344245s)
	I0617 10:45:36.232293  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.232310  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.232363  120744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.490599897s)
	I0617 10:45:36.232395  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.232407  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.232421  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.232450  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.232458  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.232465  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.232472  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.232489  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.232512  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.232517  120744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.473423608s)
	I0617 10:45:36.232534  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.232536  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.232544  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.232559  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.232567  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.232571  120744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.364116134s)
	I0617 10:45:36.232593  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.232603  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.232623  120744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.9213945s)
	I0617 10:45:36.232642  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.232574  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.232650  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.232701  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.232709  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.232772  120744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.771637568s)
	W0617 10:45:36.232803  120744 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0617 10:45:36.232846  120744 retry.go:31] will retry after 340.601669ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0617 10:45:36.232927  120744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.158035947s)
	I0617 10:45:36.232946  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.232954  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.233023  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.233039  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.233060  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.233069  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.233076  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.233082  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.233124  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.233130  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.233138  120744 addons.go:475] Verifying addon ingress=true in "addons-465706"
	I0617 10:45:36.236450  120744 out.go:177] * Verifying ingress addon...
	I0617 10:45:36.233776  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.233806  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.237923  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.233829  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.233846  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.233863  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.233879  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.233895  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.233944  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.232293  120744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.584929122s)
	I0617 10:45:36.235205  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.235239  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.236139  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.236168  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.236197  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.236214  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.236725  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.238005  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.238012  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.237999  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.238030  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.239508  120744 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-465706 service yakd-dashboard -n yakd-dashboard
	
	I0617 10:45:36.238061  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.238070  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.238075  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.238081  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.238087  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.238020  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.238088  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.238259  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.238292  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.238886  120744 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0617 10:45:36.240641  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.240661  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.240673  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.240699  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.240730  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.240739  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.240745  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.240756  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.240712  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.240783  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.240783  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.240794  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.241260  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.241262  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.241263  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.241272  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.241275  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.241285  120744 addons.go:475] Verifying addon metrics-server=true in "addons-465706"
	I0617 10:45:36.241299  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.241302  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.241305  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.241309  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.242753  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.242763  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.242770  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.242780  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.242795  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.242804  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.242806  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.242808  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.242812  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.242813  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.242822  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.242827  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.242833  120744 addons.go:475] Verifying addon registry=true in "addons-465706"
	I0617 10:45:36.244386  120744 out.go:177] * Verifying registry addon...
	I0617 10:45:36.243022  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.243025  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.243046  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.245678  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.246291  120744 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0617 10:45:36.303634  120744 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0617 10:45:36.303666  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:36.321826  120744 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0617 10:45:36.321865  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:36.338983  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:36.339003  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:36.339367  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:36.339393  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:36.339423  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:36.423176  120744 pod_ready.go:92] pod "coredns-7db6d8ff4d-mdcv2" in "kube-system" namespace has status "Ready":"True"
	I0617 10:45:36.423213  120744 pod_ready.go:81] duration metric: took 509.860574ms for pod "coredns-7db6d8ff4d-mdcv2" in "kube-system" namespace to be "Ready" ...
	I0617 10:45:36.423227  120744 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-465706" in "kube-system" namespace to be "Ready" ...
	I0617 10:45:36.428836  120744 pod_ready.go:92] pod "etcd-addons-465706" in "kube-system" namespace has status "Ready":"True"
	I0617 10:45:36.428867  120744 pod_ready.go:81] duration metric: took 5.631061ms for pod "etcd-addons-465706" in "kube-system" namespace to be "Ready" ...
	I0617 10:45:36.428879  120744 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-465706" in "kube-system" namespace to be "Ready" ...
	I0617 10:45:36.443592  120744 pod_ready.go:92] pod "kube-apiserver-addons-465706" in "kube-system" namespace has status "Ready":"True"
	I0617 10:45:36.443616  120744 pod_ready.go:81] duration metric: took 14.728663ms for pod "kube-apiserver-addons-465706" in "kube-system" namespace to be "Ready" ...
	I0617 10:45:36.443626  120744 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-465706" in "kube-system" namespace to be "Ready" ...
	I0617 10:45:36.448486  120744 pod_ready.go:92] pod "kube-controller-manager-addons-465706" in "kube-system" namespace has status "Ready":"True"
	I0617 10:45:36.448507  120744 pod_ready.go:81] duration metric: took 4.875331ms for pod "kube-controller-manager-addons-465706" in "kube-system" namespace to be "Ready" ...
	I0617 10:45:36.448516  120744 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v55ch" in "kube-system" namespace to be "Ready" ...
	I0617 10:45:36.574047  120744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0617 10:45:36.712751  120744 pod_ready.go:92] pod "kube-proxy-v55ch" in "kube-system" namespace has status "Ready":"True"
	I0617 10:45:36.712786  120744 pod_ready.go:81] duration metric: took 264.263656ms for pod "kube-proxy-v55ch" in "kube-system" namespace to be "Ready" ...
	I0617 10:45:36.712797  120744 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-465706" in "kube-system" namespace to be "Ready" ...
	I0617 10:45:36.745241  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:36.762725  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:37.110234  120744 pod_ready.go:92] pod "kube-scheduler-addons-465706" in "kube-system" namespace has status "Ready":"True"
	I0617 10:45:37.110257  120744 pod_ready.go:81] duration metric: took 397.453725ms for pod "kube-scheduler-addons-465706" in "kube-system" namespace to be "Ready" ...
	I0617 10:45:37.110266  120744 pod_ready.go:38] duration metric: took 8.721173099s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 10:45:37.110280  120744 api_server.go:52] waiting for apiserver process to appear ...
	I0617 10:45:37.110332  120744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 10:45:37.245281  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:37.251193  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:37.754537  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:37.755234  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:38.244321  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:38.258185  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:38.780940  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:38.817615  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:38.906171  120744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.920591698s)
	I0617 10:45:38.906185  120744 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.013630139s)
	I0617 10:45:38.906245  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:38.906260  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:38.907853  120744 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0617 10:45:38.906590  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:38.906637  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:38.908973  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:38.908986  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:38.908997  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:38.910168  120744 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0617 10:45:38.909280  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:38.909305  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:38.910215  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:38.910227  120744 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-465706"
	I0617 10:45:38.911367  120744 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0617 10:45:38.911383  120744 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0617 10:45:38.912563  120744 out.go:177] * Verifying csi-hostpath-driver addon...
	I0617 10:45:38.914185  120744 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0617 10:45:38.945943  120744 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0617 10:45:38.945969  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:39.012501  120744 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0617 10:45:39.012530  120744 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0617 10:45:39.201197  120744 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0617 10:45:39.201222  120744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0617 10:45:39.248211  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:39.252907  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:39.278358  120744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0617 10:45:39.306513  120744 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.196154115s)
	I0617 10:45:39.306556  120744 api_server.go:72] duration metric: took 11.724263153s to wait for apiserver process to appear ...
	I0617 10:45:39.306563  120744 api_server.go:88] waiting for apiserver healthz status ...
	I0617 10:45:39.306586  120744 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I0617 10:45:39.306512  120744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.732412459s)
	I0617 10:45:39.306681  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:39.306696  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:39.307105  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:39.307144  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:39.307152  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:39.307162  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:39.307173  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:39.307500  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:39.307520  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:39.311676  120744 api_server.go:279] https://192.168.39.165:8443/healthz returned 200:
	ok
	I0617 10:45:39.312607  120744 api_server.go:141] control plane version: v1.30.1
	I0617 10:45:39.312638  120744 api_server.go:131] duration metric: took 6.068088ms to wait for apiserver health ...
	I0617 10:45:39.312646  120744 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 10:45:39.321129  120744 system_pods.go:59] 19 kube-system pods found
	I0617 10:45:39.321160  120744 system_pods.go:61] "coredns-7db6d8ff4d-9sbdk" [9dced1c6-3ebc-46f8-8333-f4d8ba492a28] Running
	I0617 10:45:39.321165  120744 system_pods.go:61] "coredns-7db6d8ff4d-mdcv2" [0a081c8c-6add-484d-8269-47fd5e1bfad4] Running
	I0617 10:45:39.321172  120744 system_pods.go:61] "csi-hostpath-attacher-0" [c3a12dde-1859-4807-90f3-4e9f15f0acee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0617 10:45:39.321179  120744 system_pods.go:61] "csi-hostpath-resizer-0" [c4f5227e-f05e-4caa-a70c-c6fa84a8e6f1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0617 10:45:39.321186  120744 system_pods.go:61] "csi-hostpathplugin-2wtdq" [704705e9-4f4b-4176-be37-424df07e8f4a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0617 10:45:39.321190  120744 system_pods.go:61] "etcd-addons-465706" [04a0e18a-dd05-4e7c-a759-841095eaaab2] Running
	I0617 10:45:39.321195  120744 system_pods.go:61] "kube-apiserver-addons-465706" [667d8e02-7848-48e4-af03-2de8bc5c658a] Running
	I0617 10:45:39.321198  120744 system_pods.go:61] "kube-controller-manager-addons-465706" [9b7a2d70-e3bf-4427-b759-e638e9c8a6de] Running
	I0617 10:45:39.321205  120744 system_pods.go:61] "kube-ingress-dns-minikube" [5887752c-36aa-4a81-a049-587806fdceb7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0617 10:45:39.321211  120744 system_pods.go:61] "kube-proxy-v55ch" [fc268acf-6fc2-47f0-8a27-3909125a82fc] Running
	I0617 10:45:39.321216  120744 system_pods.go:61] "kube-scheduler-addons-465706" [11083503-dd02-46b8-a0fc-57a28057acaa] Running
	I0617 10:45:39.321223  120744 system_pods.go:61] "metrics-server-c59844bb4-n7wsl" [9cffe86c-6fa6-4955-a42c-234714e1bd11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 10:45:39.321230  120744 system_pods.go:61] "nvidia-device-plugin-daemonset-qmfbl" [6fa18993-49a4-4224-9ae5-23eebbfb150c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0617 10:45:39.321241  120744 system_pods.go:61] "registry-proxy-8jk6d" [8e3ec5f6-818e-4deb-a7b8-8c6c898c12a7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0617 10:45:39.321250  120744 system_pods.go:61] "registry-zmgvf" [779a673e-bb16-4cb8-ba45-1f77abb09f84] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0617 10:45:39.321263  120744 system_pods.go:61] "snapshot-controller-745499f584-s86dn" [597fa742-0125-4713-8630-8191b4941bb0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0617 10:45:39.321270  120744 system_pods.go:61] "snapshot-controller-745499f584-vl64l" [3b353623-6a33-4171-b47b-f89dbd7a4a9d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0617 10:45:39.321274  120744 system_pods.go:61] "storage-provisioner" [732fd3d9-47fc-45cb-a823-c926365c9ea0] Running
	I0617 10:45:39.321279  120744 system_pods.go:61] "tiller-deploy-6677d64bcd-c55qr" [b7ac1365-80b4-4f6b-956f-9c3579810596] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0617 10:45:39.321285  120744 system_pods.go:74] duration metric: took 8.634781ms to wait for pod list to return data ...
	I0617 10:45:39.321296  120744 default_sa.go:34] waiting for default service account to be created ...
	I0617 10:45:39.323853  120744 default_sa.go:45] found service account: "default"
	I0617 10:45:39.323876  120744 default_sa.go:55] duration metric: took 2.573824ms for default service account to be created ...
	I0617 10:45:39.323884  120744 system_pods.go:116] waiting for k8s-apps to be running ...
	I0617 10:45:39.332547  120744 system_pods.go:86] 19 kube-system pods found
	I0617 10:45:39.332570  120744 system_pods.go:89] "coredns-7db6d8ff4d-9sbdk" [9dced1c6-3ebc-46f8-8333-f4d8ba492a28] Running
	I0617 10:45:39.332575  120744 system_pods.go:89] "coredns-7db6d8ff4d-mdcv2" [0a081c8c-6add-484d-8269-47fd5e1bfad4] Running
	I0617 10:45:39.332583  120744 system_pods.go:89] "csi-hostpath-attacher-0" [c3a12dde-1859-4807-90f3-4e9f15f0acee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0617 10:45:39.332591  120744 system_pods.go:89] "csi-hostpath-resizer-0" [c4f5227e-f05e-4caa-a70c-c6fa84a8e6f1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0617 10:45:39.332599  120744 system_pods.go:89] "csi-hostpathplugin-2wtdq" [704705e9-4f4b-4176-be37-424df07e8f4a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0617 10:45:39.332604  120744 system_pods.go:89] "etcd-addons-465706" [04a0e18a-dd05-4e7c-a759-841095eaaab2] Running
	I0617 10:45:39.332608  120744 system_pods.go:89] "kube-apiserver-addons-465706" [667d8e02-7848-48e4-af03-2de8bc5c658a] Running
	I0617 10:45:39.332613  120744 system_pods.go:89] "kube-controller-manager-addons-465706" [9b7a2d70-e3bf-4427-b759-e638e9c8a6de] Running
	I0617 10:45:39.332622  120744 system_pods.go:89] "kube-ingress-dns-minikube" [5887752c-36aa-4a81-a049-587806fdceb7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0617 10:45:39.332631  120744 system_pods.go:89] "kube-proxy-v55ch" [fc268acf-6fc2-47f0-8a27-3909125a82fc] Running
	I0617 10:45:39.332639  120744 system_pods.go:89] "kube-scheduler-addons-465706" [11083503-dd02-46b8-a0fc-57a28057acaa] Running
	I0617 10:45:39.332651  120744 system_pods.go:89] "metrics-server-c59844bb4-n7wsl" [9cffe86c-6fa6-4955-a42c-234714e1bd11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 10:45:39.332660  120744 system_pods.go:89] "nvidia-device-plugin-daemonset-qmfbl" [6fa18993-49a4-4224-9ae5-23eebbfb150c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0617 10:45:39.332668  120744 system_pods.go:89] "registry-proxy-8jk6d" [8e3ec5f6-818e-4deb-a7b8-8c6c898c12a7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0617 10:45:39.332676  120744 system_pods.go:89] "registry-zmgvf" [779a673e-bb16-4cb8-ba45-1f77abb09f84] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0617 10:45:39.332683  120744 system_pods.go:89] "snapshot-controller-745499f584-s86dn" [597fa742-0125-4713-8630-8191b4941bb0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0617 10:45:39.332692  120744 system_pods.go:89] "snapshot-controller-745499f584-vl64l" [3b353623-6a33-4171-b47b-f89dbd7a4a9d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0617 10:45:39.332697  120744 system_pods.go:89] "storage-provisioner" [732fd3d9-47fc-45cb-a823-c926365c9ea0] Running
	I0617 10:45:39.332704  120744 system_pods.go:89] "tiller-deploy-6677d64bcd-c55qr" [b7ac1365-80b4-4f6b-956f-9c3579810596] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0617 10:45:39.332711  120744 system_pods.go:126] duration metric: took 8.821766ms to wait for k8s-apps to be running ...
	I0617 10:45:39.332722  120744 system_svc.go:44] waiting for kubelet service to be running ....
	I0617 10:45:39.332773  120744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 10:45:39.419796  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:39.746135  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:39.750925  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:39.930505  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:40.254218  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:40.255151  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:40.435816  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:40.578449  120744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.300044739s)
	I0617 10:45:40.578496  120744 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.245685483s)
	I0617 10:45:40.578518  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:40.578539  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:40.578525  120744 system_svc.go:56] duration metric: took 1.245798929s WaitForService to wait for kubelet
	I0617 10:45:40.578620  120744 kubeadm.go:576] duration metric: took 12.996323814s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 10:45:40.578646  120744 node_conditions.go:102] verifying NodePressure condition ...
	I0617 10:45:40.578875  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:40.578878  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:40.578897  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:40.578907  120744 main.go:141] libmachine: Making call to close driver server
	I0617 10:45:40.578915  120744 main.go:141] libmachine: (addons-465706) Calling .Close
	I0617 10:45:40.579153  120744 main.go:141] libmachine: Successfully made call to close driver server
	I0617 10:45:40.579169  120744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 10:45:40.579195  120744 main.go:141] libmachine: (addons-465706) DBG | Closing plugin on server side
	I0617 10:45:40.580467  120744 addons.go:475] Verifying addon gcp-auth=true in "addons-465706"
	I0617 10:45:40.582713  120744 out.go:177] * Verifying gcp-auth addon...
	I0617 10:45:40.584535  120744 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0617 10:45:40.595043  120744 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 10:45:40.595067  120744 node_conditions.go:123] node cpu capacity is 2
	I0617 10:45:40.595080  120744 node_conditions.go:105] duration metric: took 16.423895ms to run NodePressure ...
	I0617 10:45:40.595091  120744 start.go:240] waiting for startup goroutines ...
	I0617 10:45:40.596381  120744 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0617 10:45:40.596400  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:40.746996  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:40.755391  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:40.919576  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:41.087986  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:41.245478  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:41.251010  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:41.419710  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:41.588018  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:41.745268  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:41.750599  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:41.919428  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:42.087959  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:42.244817  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:42.251030  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:42.420594  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:42.588722  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:42.745185  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:42.750963  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:42.919082  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:43.090420  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:43.245297  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:43.250963  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:43.420525  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:43.587881  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:43.744984  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:43.750459  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:43.920814  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:44.088150  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:44.245149  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:44.250553  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:44.420541  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:44.588154  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:44.745642  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:44.751385  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:44.920686  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:45.088757  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:45.246088  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:45.251808  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:45.424788  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:45.588619  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:45.745728  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:45.750648  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:45.927830  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:46.089022  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:46.245460  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:46.250764  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:46.419377  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:46.588713  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:46.745652  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:46.751559  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:46.922003  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:47.089443  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:47.246262  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:47.250134  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:47.420986  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:47.588394  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:47.745276  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:47.749743  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:47.920247  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:48.088415  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:48.245452  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:48.251399  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:48.420906  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:48.588829  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:48.745221  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:48.750315  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:48.920406  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:49.090129  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:49.245217  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:49.251068  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:49.419909  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:49.588516  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:49.745848  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:49.751641  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:49.920322  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:50.088549  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:50.247849  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:50.254279  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:50.420557  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:50.589023  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:50.745144  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:50.750538  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:50.921903  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:51.089143  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:51.244977  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:51.250514  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:51.420811  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:51.588053  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:51.745395  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:51.750617  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:51.919711  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:52.088848  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:52.246287  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:52.251099  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:52.420829  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:52.590439  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:52.745929  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:52.751771  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:52.919911  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:53.088664  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:53.246024  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:53.251207  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:53.420242  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:53.588439  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:53.751182  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:53.759432  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:53.921029  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:54.089264  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:54.245457  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:54.253983  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:54.420570  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:54.588387  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:54.745665  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:54.751413  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:54.920908  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:55.089573  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:55.245817  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:55.251478  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:55.420671  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:55.588784  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:55.746608  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:55.750569  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:55.919998  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:56.088655  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:56.245603  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:56.250692  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:56.419965  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:56.588660  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:56.745792  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:56.750584  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:56.919372  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:57.090670  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:57.848522  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:57.849837  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:57.862149  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:57.866170  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:57.866988  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:57.868506  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:57.920262  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:58.088215  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:58.246083  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:58.253074  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:58.420371  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:58.588698  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:58.745704  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:58.750355  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:58.920355  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:59.088646  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:59.245804  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:59.251436  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:59.420706  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:45:59.588185  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:45:59.746100  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:45:59.750272  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:45:59.924088  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:00.088791  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:00.245960  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:00.251206  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:00.419768  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:00.588311  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:00.745284  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:00.753881  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:00.920226  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:01.088812  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:01.245602  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:01.252054  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:01.420388  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:01.590292  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:01.745413  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:01.751873  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:01.921281  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:02.088667  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:02.246543  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:02.251106  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:02.420244  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:02.587711  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:02.745536  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:02.750817  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:02.919799  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:03.087774  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:03.245252  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:03.250034  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:03.420348  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:03.588045  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:03.745617  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:03.750574  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:03.927298  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:04.088947  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:04.245469  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:04.250305  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:04.420741  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:04.592162  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:04.745181  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:04.760134  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:04.920084  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:05.087970  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:05.244622  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:05.250511  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:05.423174  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:05.589121  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:05.745126  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:05.750225  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:05.919857  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:06.088491  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:06.245375  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:06.250340  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:06.420117  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:06.588987  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:06.744969  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:06.749848  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:06.919999  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:07.088297  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:07.245449  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:07.250308  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:07.419928  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:07.589552  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:07.746091  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:07.752463  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:08.231623  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:08.232835  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:08.244588  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:08.254257  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:08.420572  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:08.588312  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:08.745327  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:08.755955  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:08.919716  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:09.088468  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:09.245377  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:09.250822  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:09.419585  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:09.588230  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:09.745055  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:09.750303  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 10:46:09.923403  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:10.087479  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:10.244830  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:10.250737  120744 kapi.go:107] duration metric: took 34.004446177s to wait for kubernetes.io/minikube-addons=registry ...
	I0617 10:46:10.419994  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:10.588853  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:10.745525  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:10.921289  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:11.089076  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:11.245480  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:11.419897  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:11.588382  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:11.745388  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:11.922349  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:12.087776  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:12.630692  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:12.636814  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:12.637085  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:12.747530  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:12.920820  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:13.087992  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:13.244794  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:13.419442  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:13.588560  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:13.746067  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:13.920415  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:14.089723  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:14.245805  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:14.419969  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:14.588590  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:14.747017  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:14.919733  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:15.088898  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:15.244844  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:15.419237  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:15.587483  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:15.745989  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:15.919779  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:16.088654  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:16.245412  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:16.419775  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:16.588134  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:16.744836  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:16.919867  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:17.088504  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:17.245964  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:17.419026  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:17.588669  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:17.745275  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:17.919566  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:18.087286  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:18.245168  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:18.420696  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:18.589271  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:18.745362  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:18.921633  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:19.088407  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:19.262387  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:19.419611  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:19.588809  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:19.746005  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:19.920501  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:20.088242  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:20.244956  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:20.419686  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:20.589396  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:20.876480  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:20.920398  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:21.088331  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:21.245534  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:21.420472  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:21.592559  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:21.747539  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:21.920941  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:22.089005  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:22.246015  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:22.422077  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:22.590785  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:22.745427  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:22.920171  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:23.345785  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:23.347280  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:23.421981  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:23.588961  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:23.745460  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:23.920545  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:24.088638  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:24.257643  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:24.427384  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:24.588194  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:24.748151  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:24.921407  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:25.088462  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:25.246244  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:25.425359  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:25.588132  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:25.745051  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:25.919707  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:26.088137  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:26.245358  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:26.420166  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:26.589119  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:26.745001  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:26.919864  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:27.088670  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:27.249016  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:27.419178  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:27.590250  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:27.749646  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:27.933660  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:28.094384  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:28.251997  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:28.424586  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:28.588620  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:28.745524  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:28.926876  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:29.088683  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:29.245842  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:29.420773  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:29.588435  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:29.745528  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:29.920940  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:30.088834  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:30.245568  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:30.421345  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:30.589086  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:30.745589  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:30.920357  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:31.088262  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:31.248579  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:31.420722  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:31.588740  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:31.746274  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:31.920477  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:32.088760  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:32.245808  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:32.420075  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:32.588184  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:32.745770  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:32.920379  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:33.088306  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:33.245709  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:33.422929  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:33.595155  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:33.744674  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:33.921593  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:34.089262  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:34.246893  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:34.420381  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:34.588460  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:34.746064  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:35.347641  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:35.350853  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:35.353301  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:35.421345  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 10:46:35.590070  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:35.744834  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:35.919511  120744 kapi.go:107] duration metric: took 57.005322761s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0617 10:46:36.087862  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:36.244825  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:36.589150  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:36.744917  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:37.088430  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:37.245717  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:37.588088  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:37.745089  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:38.089396  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:38.245340  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:38.588588  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:38.745762  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:39.088272  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:39.245882  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:39.588947  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:39.746677  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:40.089050  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:40.245570  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:40.588509  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:40.745712  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:41.088722  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:41.251549  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:41.587963  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:41.745572  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:42.089545  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:42.259500  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:42.590132  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:42.748285  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:43.088288  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:43.245256  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:43.588255  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:43.745048  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:44.089641  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:44.246075  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:44.589470  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:44.745486  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:45.088580  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:45.246438  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:45.588579  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:45.746437  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:46.088005  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:46.244851  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:46.589091  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:46.745012  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:47.089085  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:47.245622  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:47.918107  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:47.918646  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:48.089531  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:48.249038  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:48.588642  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:48.745385  120744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 10:46:49.095036  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:49.245756  120744 kapi.go:107] duration metric: took 1m13.006866721s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0617 10:46:49.589315  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:50.089054  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:50.589897  120744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 10:46:51.089813  120744 kapi.go:107] duration metric: took 1m10.50527324s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0617 10:46:51.091333  120744 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-465706 cluster.
	I0617 10:46:51.092647  120744 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0617 10:46:51.093861  120744 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0617 10:46:51.095185  120744 out.go:177] * Enabled addons: default-storageclass, cloud-spanner, yakd, metrics-server, inspektor-gadget, storage-provisioner, ingress-dns, nvidia-device-plugin, helm-tiller, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0617 10:46:51.096494  120744 addons.go:510] duration metric: took 1m23.514153884s for enable addons: enabled=[default-storageclass cloud-spanner yakd metrics-server inspektor-gadget storage-provisioner ingress-dns nvidia-device-plugin helm-tiller storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0617 10:46:51.096557  120744 start.go:245] waiting for cluster config update ...
	I0617 10:46:51.096585  120744 start.go:254] writing updated cluster config ...
	I0617 10:46:51.096864  120744 ssh_runner.go:195] Run: rm -f paused
	I0617 10:46:51.153852  120744 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0617 10:46:51.155674  120744 out.go:177] * Done! kubectl is now configured to use "addons-465706" cluster and "default" namespace by default
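The gcp-auth notes in the log above mention that a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. As a minimal illustrative sketch only (the pod name and the label value "true" are assumptions; the log states only that the label key is checked), such a pod manifest might look like:

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds              # hypothetical pod name, not from this test run
      labels:
        gcp-auth-skip-secret: "true"  # label key named in the gcp-auth output above; value assumed
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]

Per the same output, the label only affects pods created after it is set; existing pods must be recreated, or the addon re-enabled with --refresh, to change their mounts.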
	
	
	==> CRI-O <==
	Jun 17 10:52:48 addons-465706 crio[683]: time="2024-06-17 10:52:48.760128329Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718621568760106010,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584717,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dc504b7b-0875-4701-8f7f-c3f6a4eae3d4 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 10:52:48 addons-465706 crio[683]: time="2024-06-17 10:52:48.760786703Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=15abff42-bba8-4761-8fea-cf25a003b1e8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 10:52:48 addons-465706 crio[683]: time="2024-06-17 10:52:48.760838577Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=15abff42-bba8-4761-8fea-cf25a003b1e8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 10:52:48 addons-465706 crio[683]: time="2024-06-17 10:52:48.761100098Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db7a40997f28deb0b9d10a18d6e0c7e688e0554d4a98815ae3feb8d5bb5af3cc,PodSandboxId:4f8b12de8ce5e47fea9ac59517ae2b82d235a5fa5a76daa6c220b3a0ea2da03c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1718621370929279385,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-xb8zr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c5e753cb-3461-4aa7-bf40-adb3f9b66766,},Annotations:map[string]string{io.kubernetes.container.hash: 2b750b2,io.kubernetes.container
.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6693897cd633c5e476e3fd54f5e9b7f9f1269b19498f5326850dca97491457e,PodSandboxId:4777dab526a939281ee0b1b52bbdb623bfb0aa653f230ac78432661fd7fde11d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1718621233103741289,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 83bd573f-7cbc-4b39-a885-d2024b2fb1f1,},Annotations:map[string]string{io.kuberne
tes.container.hash: e078ea50,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:842ae954918aa02a862aab751b1f0640b768c714cea915e49f47098fe8a23a19,PodSandboxId:9f883f3d665349c1ab9bffa09b7876d500563d48d88cd56b7f8c444bc170b3c0,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6dec009152279527b62e3fac947a2e40f6f99bff29259974b995f0606a9213e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2cfebb9f82f21165fc736638311c2d6b6961fa0226a8164a753cbb589f6b1e43,State:CONTAINER_RUNNING,CreatedAt:1718621230720726905,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7fc69f7444-b25bd,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 426684ff-406b-40d7-a06f-5aab3179e257,},Annotations:map[string]string{io.kubernetes.container.hash: ca1e2563,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebb02f1a32711f02bfa7db92ba295caa4c8d9d29515048c64d2de9e327609872,PodSandboxId:95b80d384248f070c9810fbe50f625238bf4c791081e65f75c436cac01df0981,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1718621210183187820,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-5dp97,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e3d518e3-abec-4d34-be04-6f0340b7a9df,},Annotations:map[string]string{io.kubernetes.container.hash: 6361f7db,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:716370fa6ca1ba41d9fa95fd920747c901f7fce0c39bd84430da9f862b87ec37,PodSandboxId:cc64f2f39d7fa3d83604d26cd71eb937c19ddaefa6003412c3866dabef912ca5,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:171862
1172733298106,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-phsmj,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 744b82c4-03d4-4e46-b250-37034c66f93a,},Annotations:map[string]string{io.kubernetes.container.hash: b436fb08,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d42b67d09bfc2e86be9a45094248a7a443132f92284cad0d34cff31f3978698,PodSandboxId:05df74ef20c0961cfaf19a0f1c656ae3348050a1a1e6a6621b322e26c05f75c7,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1718621163940744340,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-n7wsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cffe86c-6fa6-4955-a42c-234714e1bd11,},Annotations:map[string]string{io.kubernetes.container.hash: 83c55851,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2ffe2c0522573c6fb44e03297f5ade6ae49c1b346b92c335d0179921042fc45,PodSandboxId:b0190413947277d227cf0dcde0ba284345311e7eb8b3fd12d0d175745f57507d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f561734
2c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718621135216023012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 732fd3d9-47fc-45cb-a823-c926365c9ea0,},Annotations:map[string]string{io.kubernetes.container.hash: d5f76ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad34891558241a97f15e5950a6c122e58aaff1510e294c94dfd85978567a13c,PodSandboxId:9f901812e713fc1bfb057868942601f39882a33dc2afe8187835638a168546f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c0079
7ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718621132850000633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mdcv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a081c8c-6add-484d-8269-47fd5e1bfad4,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7acf46,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8182630f40dc3077251c143e1d0843b74fc2f903db0c6bb7de61a5003651ce42,PodSandb
oxId:2d37693a5c8de462b0bb438e1c00ced09f46009526fd55cbbda4e539453ad676,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718621130022674714,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v55ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc268acf-6fc2-47f0-8a27-3909125a82fc,},Annotations:map[string]string{io.kubernetes.container.hash: ee7efe78,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32aaf27877c21f1872f89199888d6e46c7c128e1968884607b91b1ba82c84a09,PodSandboxId:8d190137ae1f051d09c68252dfa4
b34d9f116032a0b1310c2acaf1ae81d93be3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718621107821861178,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-465706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16241773609d976ed01822e798b3e93e,},Annotations:map[string]string{io.kubernetes.container.hash: d7f020bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbbcc46101fca247086c67e958f8de3c1a294b6b24e57f2589442f78e8f1ea91,PodSandboxId:7b8b9405bb9d11bcfbac74d380678286bcd67c39321794eec7e9806ba87034e7,Metadata:&
ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718621107880412179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-465706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4380ac408e6ad735f7d32063d2d6cf11,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6981a9b7f93a47089761b31a184fb2e378c9384b3ff6a8fd6b36c028808740f0,PodSandboxId:cc2d5e9dd72320dac79fd5374f234bcbb66571bc5212b0ceb64d08c37fd9953c,Metadata:&ContainerMetadata
{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718621107802101954,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-465706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0726e8bca9e46b8d63d78deadac8845c,},Annotations:map[string]string{io.kubernetes.container.hash: 8ca32538,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2d1cd8b31398e19d08dd55347ee59581d9b378824a6b55badfacfb07bd3e6a3,PodSandboxId:401af38e9ed0d6d613bd2f84e74232be5388d0ffc635f8e8bdf4509a0a33d6c5,Metadata:&ContainerMetadata{Name:kube-contro
ller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718621107809349904,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-465706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e257f017a334b4976466298131eb526,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=15abff42-bba8-4761-8fea-cf25a003b1e8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 10:52:48 addons-465706 crio[683]: time="2024-06-17 10:52:48.798125606Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=106c3b5f-b412-4a5c-8028-8d627204de0c name=/runtime.v1.RuntimeService/Version
	Jun 17 10:52:48 addons-465706 crio[683]: time="2024-06-17 10:52:48.798194786Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=106c3b5f-b412-4a5c-8028-8d627204de0c name=/runtime.v1.RuntimeService/Version
	Jun 17 10:52:48 addons-465706 crio[683]: time="2024-06-17 10:52:48.799686342Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f07b956b-7385-4c22-a5af-874786a0d80c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 10:52:48 addons-465706 crio[683]: time="2024-06-17 10:52:48.800880500Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718621568800857368,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584717,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f07b956b-7385-4c22-a5af-874786a0d80c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 10:52:48 addons-465706 crio[683]: time="2024-06-17 10:52:48.801607804Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9672db21-4963-4a1d-a008-02a7dff6ca37 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 10:52:48 addons-465706 crio[683]: time="2024-06-17 10:52:48.801678696Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9672db21-4963-4a1d-a008-02a7dff6ca37 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 10:52:48 addons-465706 crio[683]: time="2024-06-17 10:52:48.801933341Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db7a40997f28deb0b9d10a18d6e0c7e688e0554d4a98815ae3feb8d5bb5af3cc,PodSandboxId:4f8b12de8ce5e47fea9ac59517ae2b82d235a5fa5a76daa6c220b3a0ea2da03c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1718621370929279385,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-xb8zr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c5e753cb-3461-4aa7-bf40-adb3f9b66766,},Annotations:map[string]string{io.kubernetes.container.hash: 2b750b2,io.kubernetes.container
.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6693897cd633c5e476e3fd54f5e9b7f9f1269b19498f5326850dca97491457e,PodSandboxId:4777dab526a939281ee0b1b52bbdb623bfb0aa653f230ac78432661fd7fde11d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1718621233103741289,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 83bd573f-7cbc-4b39-a885-d2024b2fb1f1,},Annotations:map[string]string{io.kuberne
tes.container.hash: e078ea50,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:842ae954918aa02a862aab751b1f0640b768c714cea915e49f47098fe8a23a19,PodSandboxId:9f883f3d665349c1ab9bffa09b7876d500563d48d88cd56b7f8c444bc170b3c0,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6dec009152279527b62e3fac947a2e40f6f99bff29259974b995f0606a9213e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2cfebb9f82f21165fc736638311c2d6b6961fa0226a8164a753cbb589f6b1e43,State:CONTAINER_RUNNING,CreatedAt:1718621230720726905,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7fc69f7444-b25bd,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 426684ff-406b-40d7-a06f-5aab3179e257,},Annotations:map[string]string{io.kubernetes.container.hash: ca1e2563,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebb02f1a32711f02bfa7db92ba295caa4c8d9d29515048c64d2de9e327609872,PodSandboxId:95b80d384248f070c9810fbe50f625238bf4c791081e65f75c436cac01df0981,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1718621210183187820,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-5dp97,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e3d518e3-abec-4d34-be04-6f0340b7a9df,},Annotations:map[string]string{io.kubernetes.container.hash: 6361f7db,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:716370fa6ca1ba41d9fa95fd920747c901f7fce0c39bd84430da9f862b87ec37,PodSandboxId:cc64f2f39d7fa3d83604d26cd71eb937c19ddaefa6003412c3866dabef912ca5,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:171862
1172733298106,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-phsmj,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 744b82c4-03d4-4e46-b250-37034c66f93a,},Annotations:map[string]string{io.kubernetes.container.hash: b436fb08,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d42b67d09bfc2e86be9a45094248a7a443132f92284cad0d34cff31f3978698,PodSandboxId:05df74ef20c0961cfaf19a0f1c656ae3348050a1a1e6a6621b322e26c05f75c7,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1718621163940744340,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-n7wsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cffe86c-6fa6-4955-a42c-234714e1bd11,},Annotations:map[string]string{io.kubernetes.container.hash: 83c55851,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2ffe2c0522573c6fb44e03297f5ade6ae49c1b346b92c335d0179921042fc45,PodSandboxId:b0190413947277d227cf0dcde0ba284345311e7eb8b3fd12d0d175745f57507d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f561734
2c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718621135216023012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 732fd3d9-47fc-45cb-a823-c926365c9ea0,},Annotations:map[string]string{io.kubernetes.container.hash: d5f76ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad34891558241a97f15e5950a6c122e58aaff1510e294c94dfd85978567a13c,PodSandboxId:9f901812e713fc1bfb057868942601f39882a33dc2afe8187835638a168546f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c0079
7ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718621132850000633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mdcv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a081c8c-6add-484d-8269-47fd5e1bfad4,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7acf46,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8182630f40dc3077251c143e1d0843b74fc2f903db0c6bb7de61a5003651ce42,PodSandb
oxId:2d37693a5c8de462b0bb438e1c00ced09f46009526fd55cbbda4e539453ad676,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718621130022674714,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v55ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc268acf-6fc2-47f0-8a27-3909125a82fc,},Annotations:map[string]string{io.kubernetes.container.hash: ee7efe78,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32aaf27877c21f1872f89199888d6e46c7c128e1968884607b91b1ba82c84a09,PodSandboxId:8d190137ae1f051d09c68252dfa4
b34d9f116032a0b1310c2acaf1ae81d93be3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718621107821861178,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-465706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16241773609d976ed01822e798b3e93e,},Annotations:map[string]string{io.kubernetes.container.hash: d7f020bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbbcc46101fca247086c67e958f8de3c1a294b6b24e57f2589442f78e8f1ea91,PodSandboxId:7b8b9405bb9d11bcfbac74d380678286bcd67c39321794eec7e9806ba87034e7,Metadata:&
ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718621107880412179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-465706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4380ac408e6ad735f7d32063d2d6cf11,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6981a9b7f93a47089761b31a184fb2e378c9384b3ff6a8fd6b36c028808740f0,PodSandboxId:cc2d5e9dd72320dac79fd5374f234bcbb66571bc5212b0ceb64d08c37fd9953c,Metadata:&ContainerMetadata
{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718621107802101954,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-465706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0726e8bca9e46b8d63d78deadac8845c,},Annotations:map[string]string{io.kubernetes.container.hash: 8ca32538,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2d1cd8b31398e19d08dd55347ee59581d9b378824a6b55badfacfb07bd3e6a3,PodSandboxId:401af38e9ed0d6d613bd2f84e74232be5388d0ffc635f8e8bdf4509a0a33d6c5,Metadata:&ContainerMetadata{Name:kube-contro
ller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718621107809349904,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-465706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e257f017a334b4976466298131eb526,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9672db21-4963-4a1d-a008-02a7dff6ca37 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 10:52:48 addons-465706 crio[683]: time="2024-06-17 10:52:48.835355714Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=82f168bc-9b16-46f8-a9c7-31c54017b7e9 name=/runtime.v1.RuntimeService/Version
	Jun 17 10:52:48 addons-465706 crio[683]: time="2024-06-17 10:52:48.835487749Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=82f168bc-9b16-46f8-a9c7-31c54017b7e9 name=/runtime.v1.RuntimeService/Version
	Jun 17 10:52:48 addons-465706 crio[683]: time="2024-06-17 10:52:48.836694981Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6eff0d09-f7a0-46fb-8840-e9e6761f460e name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 10:52:48 addons-465706 crio[683]: time="2024-06-17 10:52:48.837944800Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718621568837915834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584717,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6eff0d09-f7a0-46fb-8840-e9e6761f460e name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 10:52:48 addons-465706 crio[683]: time="2024-06-17 10:52:48.838692117Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=04471d9b-23ca-4ac9-bc3b-defe38206d06 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 10:52:48 addons-465706 crio[683]: time="2024-06-17 10:52:48.838759529Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=04471d9b-23ca-4ac9-bc3b-defe38206d06 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 10:52:48 addons-465706 crio[683]: time="2024-06-17 10:52:48.839027769Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db7a40997f28deb0b9d10a18d6e0c7e688e0554d4a98815ae3feb8d5bb5af3cc,PodSandboxId:4f8b12de8ce5e47fea9ac59517ae2b82d235a5fa5a76daa6c220b3a0ea2da03c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1718621370929279385,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-xb8zr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c5e753cb-3461-4aa7-bf40-adb3f9b66766,},Annotations:map[string]string{io.kubernetes.container.hash: 2b750b2,io.kubernetes.container
.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6693897cd633c5e476e3fd54f5e9b7f9f1269b19498f5326850dca97491457e,PodSandboxId:4777dab526a939281ee0b1b52bbdb623bfb0aa653f230ac78432661fd7fde11d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1718621233103741289,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 83bd573f-7cbc-4b39-a885-d2024b2fb1f1,},Annotations:map[string]string{io.kuberne
tes.container.hash: e078ea50,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:842ae954918aa02a862aab751b1f0640b768c714cea915e49f47098fe8a23a19,PodSandboxId:9f883f3d665349c1ab9bffa09b7876d500563d48d88cd56b7f8c444bc170b3c0,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6dec009152279527b62e3fac947a2e40f6f99bff29259974b995f0606a9213e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2cfebb9f82f21165fc736638311c2d6b6961fa0226a8164a753cbb589f6b1e43,State:CONTAINER_RUNNING,CreatedAt:1718621230720726905,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7fc69f7444-b25bd,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 426684ff-406b-40d7-a06f-5aab3179e257,},Annotations:map[string]string{io.kubernetes.container.hash: ca1e2563,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebb02f1a32711f02bfa7db92ba295caa4c8d9d29515048c64d2de9e327609872,PodSandboxId:95b80d384248f070c9810fbe50f625238bf4c791081e65f75c436cac01df0981,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1718621210183187820,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-5dp97,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e3d518e3-abec-4d34-be04-6f0340b7a9df,},Annotations:map[string]string{io.kubernetes.container.hash: 6361f7db,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:716370fa6ca1ba41d9fa95fd920747c901f7fce0c39bd84430da9f862b87ec37,PodSandboxId:cc64f2f39d7fa3d83604d26cd71eb937c19ddaefa6003412c3866dabef912ca5,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:171862
1172733298106,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-phsmj,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 744b82c4-03d4-4e46-b250-37034c66f93a,},Annotations:map[string]string{io.kubernetes.container.hash: b436fb08,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d42b67d09bfc2e86be9a45094248a7a443132f92284cad0d34cff31f3978698,PodSandboxId:05df74ef20c0961cfaf19a0f1c656ae3348050a1a1e6a6621b322e26c05f75c7,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1718621163940744340,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-n7wsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cffe86c-6fa6-4955-a42c-234714e1bd11,},Annotations:map[string]string{io.kubernetes.container.hash: 83c55851,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2ffe2c0522573c6fb44e03297f5ade6ae49c1b346b92c335d0179921042fc45,PodSandboxId:b0190413947277d227cf0dcde0ba284345311e7eb8b3fd12d0d175745f57507d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f561734
2c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718621135216023012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 732fd3d9-47fc-45cb-a823-c926365c9ea0,},Annotations:map[string]string{io.kubernetes.container.hash: d5f76ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad34891558241a97f15e5950a6c122e58aaff1510e294c94dfd85978567a13c,PodSandboxId:9f901812e713fc1bfb057868942601f39882a33dc2afe8187835638a168546f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c0079
7ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718621132850000633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mdcv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a081c8c-6add-484d-8269-47fd5e1bfad4,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7acf46,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8182630f40dc3077251c143e1d0843b74fc2f903db0c6bb7de61a5003651ce42,PodSandb
oxId:2d37693a5c8de462b0bb438e1c00ced09f46009526fd55cbbda4e539453ad676,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718621130022674714,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v55ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc268acf-6fc2-47f0-8a27-3909125a82fc,},Annotations:map[string]string{io.kubernetes.container.hash: ee7efe78,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32aaf27877c21f1872f89199888d6e46c7c128e1968884607b91b1ba82c84a09,PodSandboxId:8d190137ae1f051d09c68252dfa4
b34d9f116032a0b1310c2acaf1ae81d93be3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718621107821861178,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-465706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16241773609d976ed01822e798b3e93e,},Annotations:map[string]string{io.kubernetes.container.hash: d7f020bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbbcc46101fca247086c67e958f8de3c1a294b6b24e57f2589442f78e8f1ea91,PodSandboxId:7b8b9405bb9d11bcfbac74d380678286bcd67c39321794eec7e9806ba87034e7,Metadata:&
ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718621107880412179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-465706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4380ac408e6ad735f7d32063d2d6cf11,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6981a9b7f93a47089761b31a184fb2e378c9384b3ff6a8fd6b36c028808740f0,PodSandboxId:cc2d5e9dd72320dac79fd5374f234bcbb66571bc5212b0ceb64d08c37fd9953c,Metadata:&ContainerMetadata
{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718621107802101954,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-465706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0726e8bca9e46b8d63d78deadac8845c,},Annotations:map[string]string{io.kubernetes.container.hash: 8ca32538,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2d1cd8b31398e19d08dd55347ee59581d9b378824a6b55badfacfb07bd3e6a3,PodSandboxId:401af38e9ed0d6d613bd2f84e74232be5388d0ffc635f8e8bdf4509a0a33d6c5,Metadata:&ContainerMetadata{Name:kube-contro
ller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718621107809349904,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-465706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e257f017a334b4976466298131eb526,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=04471d9b-23ca-4ac9-bc3b-defe38206d06 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 10:52:48 addons-465706 crio[683]: time="2024-06-17 10:52:48.876404028Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bbf34d6c-ea4b-4cf1-8498-74d271977ac7 name=/runtime.v1.RuntimeService/Version
	Jun 17 10:52:48 addons-465706 crio[683]: time="2024-06-17 10:52:48.876543680Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bbf34d6c-ea4b-4cf1-8498-74d271977ac7 name=/runtime.v1.RuntimeService/Version
	Jun 17 10:52:48 addons-465706 crio[683]: time="2024-06-17 10:52:48.877886786Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=86eedf5c-6a29-4212-b75b-445772848b19 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 10:52:48 addons-465706 crio[683]: time="2024-06-17 10:52:48.879219276Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718621568879195555,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584717,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=86eedf5c-6a29-4212-b75b-445772848b19 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 10:52:48 addons-465706 crio[683]: time="2024-06-17 10:52:48.879755829Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4b8ad83e-adb2-48c0-9bb5-af1a5a077f9c name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 10:52:48 addons-465706 crio[683]: time="2024-06-17 10:52:48.879836674Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4b8ad83e-adb2-48c0-9bb5-af1a5a077f9c name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 10:52:48 addons-465706 crio[683]: time="2024-06-17 10:52:48.880121677Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db7a40997f28deb0b9d10a18d6e0c7e688e0554d4a98815ae3feb8d5bb5af3cc,PodSandboxId:4f8b12de8ce5e47fea9ac59517ae2b82d235a5fa5a76daa6c220b3a0ea2da03c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1718621370929279385,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-xb8zr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c5e753cb-3461-4aa7-bf40-adb3f9b66766,},Annotations:map[string]string{io.kubernetes.container.hash: 2b750b2,io.kubernetes.container
.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6693897cd633c5e476e3fd54f5e9b7f9f1269b19498f5326850dca97491457e,PodSandboxId:4777dab526a939281ee0b1b52bbdb623bfb0aa653f230ac78432661fd7fde11d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1718621233103741289,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 83bd573f-7cbc-4b39-a885-d2024b2fb1f1,},Annotations:map[string]string{io.kuberne
tes.container.hash: e078ea50,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:842ae954918aa02a862aab751b1f0640b768c714cea915e49f47098fe8a23a19,PodSandboxId:9f883f3d665349c1ab9bffa09b7876d500563d48d88cd56b7f8c444bc170b3c0,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6dec009152279527b62e3fac947a2e40f6f99bff29259974b995f0606a9213e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2cfebb9f82f21165fc736638311c2d6b6961fa0226a8164a753cbb589f6b1e43,State:CONTAINER_RUNNING,CreatedAt:1718621230720726905,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7fc69f7444-b25bd,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 426684ff-406b-40d7-a06f-5aab3179e257,},Annotations:map[string]string{io.kubernetes.container.hash: ca1e2563,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebb02f1a32711f02bfa7db92ba295caa4c8d9d29515048c64d2de9e327609872,PodSandboxId:95b80d384248f070c9810fbe50f625238bf4c791081e65f75c436cac01df0981,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1718621210183187820,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-5dp97,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e3d518e3-abec-4d34-be04-6f0340b7a9df,},Annotations:map[string]string{io.kubernetes.container.hash: 6361f7db,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:716370fa6ca1ba41d9fa95fd920747c901f7fce0c39bd84430da9f862b87ec37,PodSandboxId:cc64f2f39d7fa3d83604d26cd71eb937c19ddaefa6003412c3866dabef912ca5,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:171862
1172733298106,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-phsmj,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 744b82c4-03d4-4e46-b250-37034c66f93a,},Annotations:map[string]string{io.kubernetes.container.hash: b436fb08,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d42b67d09bfc2e86be9a45094248a7a443132f92284cad0d34cff31f3978698,PodSandboxId:05df74ef20c0961cfaf19a0f1c656ae3348050a1a1e6a6621b322e26c05f75c7,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1718621163940744340,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-n7wsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cffe86c-6fa6-4955-a42c-234714e1bd11,},Annotations:map[string]string{io.kubernetes.container.hash: 83c55851,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2ffe2c0522573c6fb44e03297f5ade6ae49c1b346b92c335d0179921042fc45,PodSandboxId:b0190413947277d227cf0dcde0ba284345311e7eb8b3fd12d0d175745f57507d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f561734
2c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718621135216023012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 732fd3d9-47fc-45cb-a823-c926365c9ea0,},Annotations:map[string]string{io.kubernetes.container.hash: d5f76ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad34891558241a97f15e5950a6c122e58aaff1510e294c94dfd85978567a13c,PodSandboxId:9f901812e713fc1bfb057868942601f39882a33dc2afe8187835638a168546f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c0079
7ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718621132850000633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mdcv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a081c8c-6add-484d-8269-47fd5e1bfad4,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7acf46,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8182630f40dc3077251c143e1d0843b74fc2f903db0c6bb7de61a5003651ce42,PodSandb
oxId:2d37693a5c8de462b0bb438e1c00ced09f46009526fd55cbbda4e539453ad676,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718621130022674714,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v55ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc268acf-6fc2-47f0-8a27-3909125a82fc,},Annotations:map[string]string{io.kubernetes.container.hash: ee7efe78,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32aaf27877c21f1872f89199888d6e46c7c128e1968884607b91b1ba82c84a09,PodSandboxId:8d190137ae1f051d09c68252dfa4
b34d9f116032a0b1310c2acaf1ae81d93be3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718621107821861178,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-465706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16241773609d976ed01822e798b3e93e,},Annotations:map[string]string{io.kubernetes.container.hash: d7f020bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbbcc46101fca247086c67e958f8de3c1a294b6b24e57f2589442f78e8f1ea91,PodSandboxId:7b8b9405bb9d11bcfbac74d380678286bcd67c39321794eec7e9806ba87034e7,Metadata:&
ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718621107880412179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-465706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4380ac408e6ad735f7d32063d2d6cf11,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6981a9b7f93a47089761b31a184fb2e378c9384b3ff6a8fd6b36c028808740f0,PodSandboxId:cc2d5e9dd72320dac79fd5374f234bcbb66571bc5212b0ceb64d08c37fd9953c,Metadata:&ContainerMetadata
{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718621107802101954,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-465706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0726e8bca9e46b8d63d78deadac8845c,},Annotations:map[string]string{io.kubernetes.container.hash: 8ca32538,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2d1cd8b31398e19d08dd55347ee59581d9b378824a6b55badfacfb07bd3e6a3,PodSandboxId:401af38e9ed0d6d613bd2f84e74232be5388d0ffc635f8e8bdf4509a0a33d6c5,Metadata:&ContainerMetadata{Name:kube-contro
ller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718621107809349904,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-465706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e257f017a334b4976466298131eb526,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4b8ad83e-adb2-48c0-9bb5-af1a5a077f9c name=/runtime.v1.RuntimeService/ListContainers
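
	The block of debug entries above is CRI-O answering the kubelet's periodic polling of the CRI endpoints (RuntimeService/Version, ImageService/ImageFsInfo, RuntimeService/ListContainers); the same container list is logged once per request id. As an illustrative sketch (not part of the test run), the same endpoints can be queried by hand from inside the node, assuming crictl is available in the minikube VM and using the cri-socket path recorded in the node annotations further below:

	    $ out/minikube-linux-amd64 -p addons-465706 ssh
	    $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	    $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo
	    $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a -o json

	The ps output should correspond to the ListContainersResponse payloads logged above, container IDs included.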
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	db7a40997f28d       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                 3 minutes ago       Running             hello-world-app           0                   4f8b12de8ce5e       hello-world-app-86c47465fc-xb8zr
	a6693897cd633       docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa                         5 minutes ago       Running             nginx                     0                   4777dab526a93       nginx
	842ae954918aa       ghcr.io/headlamp-k8s/headlamp@sha256:6dec009152279527b62e3fac947a2e40f6f99bff29259974b995f0606a9213e5                   5 minutes ago       Running             headlamp                  0                   9f883f3d66534       headlamp-7fc69f7444-b25bd
	ebb02f1a32711       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            5 minutes ago       Running             gcp-auth                  0                   95b80d384248f       gcp-auth-5db96cd9b4-5dp97
	716370fa6ca1b       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                         6 minutes ago       Running             yakd                      0                   cc64f2f39d7fa       yakd-dashboard-5ddbf7d777-phsmj
	3d42b67d09bfc       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   6 minutes ago       Running             metrics-server            0                   05df74ef20c09       metrics-server-c59844bb4-n7wsl
	d2ffe2c052257       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   b019041394727       storage-provisioner
	2ad3489155824       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        7 minutes ago       Running             coredns                   0                   9f901812e713f       coredns-7db6d8ff4d-mdcv2
	8182630f40dc3       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                                        7 minutes ago       Running             kube-proxy                0                   2d37693a5c8de       kube-proxy-v55ch
	bbbcc46101fca       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                                        7 minutes ago       Running             kube-scheduler            0                   7b8b9405bb9d1       kube-scheduler-addons-465706
	32aaf27877c21       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        7 minutes ago       Running             etcd                      0                   8d190137ae1f0       etcd-addons-465706
	a2d1cd8b31398       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                                        7 minutes ago       Running             kube-controller-manager   0                   401af38e9ed0d       kube-controller-manager-addons-465706
	6981a9b7f93a4       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                                        7 minutes ago       Running             kube-apiserver            0                   cc2d5e9dd7232       kube-apiserver-addons-465706
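
	This table is the condensed crictl-style view of the same container list. To tie a row back to its Kubernetes object, the kubectl side can be cross-checked; the commands below are an illustrative sketch, not part of the test run:

	    $ kubectl --context addons-465706 get pods -A -o wide
	    $ kubectl --context addons-465706 get pod hello-world-app-86c47465fc-xb8zr \
	        -o jsonpath='{.status.containerStatuses[0].containerID}'

	The jsonpath query should print cri-o://db7a40997f28d…, matching the first row above.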
	
	
	==> coredns [2ad34891558241a97f15e5950a6c122e58aaff1510e294c94dfd85978567a13c] <==
	[INFO] 10.244.0.7:57939 - 65457 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000186556s
	[INFO] 10.244.0.7:59607 - 5002 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000099123s
	[INFO] 10.244.0.7:59607 - 13192 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000097252s
	[INFO] 10.244.0.7:50990 - 52499 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000939s
	[INFO] 10.244.0.7:50990 - 11293 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00014566s
	[INFO] 10.244.0.7:59121 - 52178 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000149737s
	[INFO] 10.244.0.7:59121 - 7916 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00018693s
	[INFO] 10.244.0.7:54503 - 54261 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000072943s
	[INFO] 10.244.0.7:54503 - 43248 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000047584s
	[INFO] 10.244.0.7:59875 - 43432 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000056005s
	[INFO] 10.244.0.7:59875 - 1962 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000026421s
	[INFO] 10.244.0.7:53981 - 4808 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000097339s
	[INFO] 10.244.0.7:53981 - 42191 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000044342s
	[INFO] 10.244.0.7:43141 - 45928 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000088526s
	[INFO] 10.244.0.7:43141 - 5739 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000037809s
	[INFO] 10.244.0.22:36445 - 45319 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000493033s
	[INFO] 10.244.0.22:49831 - 2691 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000676654s
	[INFO] 10.244.0.22:36321 - 27743 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000109519s
	[INFO] 10.244.0.22:52203 - 3039 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000194331s
	[INFO] 10.244.0.22:34232 - 5470 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000085146s
	[INFO] 10.244.0.22:48517 - 13797 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000165313s
	[INFO] 10.244.0.22:54554 - 26482 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000386098s
	[INFO] 10.244.0.22:52675 - 15920 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.000654891s
	[INFO] 10.244.0.25:58470 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000332679s
	[INFO] 10.244.0.25:32810 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000145927s
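
	These CoreDNS entries show ordinary resolv.conf search-path expansion: the name is first tried with each search suffix appended (the NXDOMAIN answers), and only the fully qualified service name resolves with NOERROR. A minimal way to reproduce this from inside the cluster, assuming a throwaway busybox pod (the pod name and image tag here are arbitrary choices):

	    $ kubectl --context addons-465706 run dns-probe --rm -it --restart=Never \
	        --image=busybox:1.36 -- nslookup registry.kube-system.svc.cluster.local

	Appending a trailing dot to the query name skips the search-path expansion entirely.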
	
	
	==> describe nodes <==
	Name:               addons-465706
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-465706
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6
	                    minikube.k8s.io/name=addons-465706
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_17T10_45_13_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-465706
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jun 2024 10:45:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-465706
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jun 2024 10:52:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jun 2024 10:49:48 +0000   Mon, 17 Jun 2024 10:45:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jun 2024 10:49:48 +0000   Mon, 17 Jun 2024 10:45:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jun 2024 10:49:48 +0000   Mon, 17 Jun 2024 10:45:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jun 2024 10:49:48 +0000   Mon, 17 Jun 2024 10:45:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.165
	  Hostname:    addons-465706
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 bb267b5b0fce4947a99307aa5b63540f
	  System UUID:                bb267b5b-0fce-4947-a993-07aa5b63540f
	  Boot ID:                    2808e5a8-39d2-42d5-a6bb-91485b8144f2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace       Name                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------       ----                                   ------------  ----------  ---------------  -------------  ---
	  default         hello-world-app-86c47465fc-xb8zr       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	  default         nginx                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	  gcp-auth        gcp-auth-5db96cd9b4-5dp97              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m9s
	  headlamp        headlamp-7fc69f7444-b25bd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m46s
	  kube-system     coredns-7db6d8ff4d-mdcv2               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m21s
	  kube-system     etcd-addons-465706                     100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m36s
	  kube-system     kube-apiserver-addons-465706           250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m36s
	  kube-system     kube-controller-manager-addons-465706  200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m37s
	  kube-system     kube-proxy-v55ch                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m21s
	  kube-system     kube-scheduler-addons-465706           100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m36s
	  kube-system     metrics-server-c59844bb4-n7wsl         100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m16s
	  kube-system     storage-provisioner                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m17s
	  yakd-dashboard  yakd-dashboard-5ddbf7d777-phsmj        0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     7m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m16s                  kube-proxy       
	  Normal  Starting                 7m36s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m36s (x2 over 7m36s)  kubelet          Node addons-465706 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m36s (x2 over 7m36s)  kubelet          Node addons-465706 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m36s (x2 over 7m36s)  kubelet          Node addons-465706 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m35s                  kubelet          Node addons-465706 status is now: NodeReady
	  Normal  RegisteredNode           7m22s                  node-controller  Node addons-465706 event: Registered Node addons-465706 in Controller
	
	
	==> dmesg <==
	[  +0.075347] kauditd_printk_skb: 69 callbacks suppressed
	[ +14.716820] systemd-fstab-generator[1493]: Ignoring "noauto" option for root device
	[  +0.158317] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.027028] kauditd_printk_skb: 92 callbacks suppressed
	[  +5.454368] kauditd_printk_skb: 115 callbacks suppressed
	[  +5.061087] kauditd_printk_skb: 110 callbacks suppressed
	[ +10.552428] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.616392] kauditd_printk_skb: 2 callbacks suppressed
	[Jun17 10:46] kauditd_printk_skb: 11 callbacks suppressed
	[ +13.190506] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.377689] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.069615] kauditd_printk_skb: 53 callbacks suppressed
	[  +5.073777] kauditd_printk_skb: 49 callbacks suppressed
	[ +12.863963] kauditd_printk_skb: 3 callbacks suppressed
	[ +11.432781] kauditd_printk_skb: 52 callbacks suppressed
	[Jun17 10:47] kauditd_printk_skb: 58 callbacks suppressed
	[  +5.052601] kauditd_printk_skb: 74 callbacks suppressed
	[  +5.448552] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.301075] kauditd_printk_skb: 30 callbacks suppressed
	[ +23.987639] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.597970] kauditd_printk_skb: 13 callbacks suppressed
	[Jun17 10:48] kauditd_printk_skb: 9 callbacks suppressed
	[  +8.318320] kauditd_printk_skb: 33 callbacks suppressed
	[Jun17 10:49] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.009145] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [32aaf27877c21f1872f89199888d6e46c7c128e1968884607b91b1ba82c84a09] <==
	{"level":"warn","ts":"2024-06-17T10:46:47.904865Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"385.442204ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15705899242528180052 > lease_revoke:<id:59f69025ccf8069e>","response":"size:28"}
	{"level":"info","ts":"2024-06-17T10:46:47.904939Z","caller":"traceutil/trace.go:171","msg":"trace[559885333] linearizableReadLoop","detail":"{readStateIndex:1183; appliedIndex:1182; }","duration":"327.904555ms","start":"2024-06-17T10:46:47.577023Z","end":"2024-06-17T10:46:47.904928Z","steps":["trace[559885333] 'read index received'  (duration: 27.261µs)","trace[559885333] 'applied index is now lower than readState.Index'  (duration: 327.876325ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-17T10:46:47.905075Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"328.040401ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-06-17T10:46:47.905111Z","caller":"traceutil/trace.go:171","msg":"trace[1092474629] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1147; }","duration":"328.104667ms","start":"2024-06-17T10:46:47.577Z","end":"2024-06-17T10:46:47.905105Z","steps":["trace[1092474629] 'agreement among raft nodes before linearized reading'  (duration: 327.985775ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-17T10:46:47.905132Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-17T10:46:47.576984Z","time spent":"328.142744ms","remote":"127.0.0.1:50784","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":11476,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-06-17T10:46:47.905323Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.571997ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-06-17T10:46:47.90536Z","caller":"traceutil/trace.go:171","msg":"trace[99291614] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1147; }","duration":"171.62818ms","start":"2024-06-17T10:46:47.733727Z","end":"2024-06-17T10:46:47.905355Z","steps":["trace[99291614] 'agreement among raft nodes before linearized reading'  (duration: 171.531591ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T10:47:04.666521Z","caller":"traceutil/trace.go:171","msg":"trace[657713612] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1324; }","duration":"127.186757ms","start":"2024-06-17T10:47:04.53932Z","end":"2024-06-17T10:47:04.666507Z","steps":["trace[657713612] 'process raft request'  (duration: 127.031173ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T10:47:08.397759Z","caller":"traceutil/trace.go:171","msg":"trace[896851152] transaction","detail":"{read_only:false; response_revision:1374; number_of_response:1; }","duration":"134.548533ms","start":"2024-06-17T10:47:08.263196Z","end":"2024-06-17T10:47:08.397744Z","steps":["trace[896851152] 'process raft request'  (duration: 134.292671ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T10:47:10.612731Z","caller":"traceutil/trace.go:171","msg":"trace[1229839338] linearizableReadLoop","detail":"{readStateIndex:1426; appliedIndex:1425; }","duration":"385.036544ms","start":"2024-06-17T10:47:10.22768Z","end":"2024-06-17T10:47:10.612716Z","steps":["trace[1229839338] 'read index received'  (duration: 383.451884ms)","trace[1229839338] 'applied index is now lower than readState.Index'  (duration: 1.584143ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-17T10:47:10.612996Z","caller":"traceutil/trace.go:171","msg":"trace[2028357398] transaction","detail":"{read_only:false; response_revision:1382; number_of_response:1; }","duration":"386.263163ms","start":"2024-06-17T10:47:10.226723Z","end":"2024-06-17T10:47:10.612986Z","steps":["trace[2028357398] 'process raft request'  (duration: 384.447259ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-17T10:47:10.613103Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-17T10:47:10.226709Z","time spent":"386.336938ms","remote":"127.0.0.1:50676","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":782,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/ingress-nginx/ingress-nginx-controller-768f948f8f-624bl.17d9c4e277f93d76\" mod_revision:1170 > success:<request_put:<key:\"/registry/events/ingress-nginx/ingress-nginx-controller-768f948f8f-624bl.17d9c4e277f93d76\" value_size:675 lease:6482527205673403666 >> failure:<request_range:<key:\"/registry/events/ingress-nginx/ingress-nginx-controller-768f948f8f-624bl.17d9c4e277f93d76\" > >"}
	{"level":"warn","ts":"2024-06-17T10:47:10.613362Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"385.673555ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/\" range_end:\"/registry/csinodes0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-06-17T10:47:10.613419Z","caller":"traceutil/trace.go:171","msg":"trace[1513413840] range","detail":"{range_begin:/registry/csinodes/; range_end:/registry/csinodes0; response_count:0; response_revision:1382; }","duration":"385.750025ms","start":"2024-06-17T10:47:10.227662Z","end":"2024-06-17T10:47:10.613412Z","steps":["trace[1513413840] 'agreement among raft nodes before linearized reading'  (duration: 385.668605ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-17T10:47:10.613536Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-17T10:47:10.227652Z","time spent":"385.876004ms","remote":"127.0.0.1:50994","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":1,"response size":30,"request content":"key:\"/registry/csinodes/\" range_end:\"/registry/csinodes0\" count_only:true "}
	{"level":"warn","ts":"2024-06-17T10:47:10.613704Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"293.892105ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3966"}
	{"level":"info","ts":"2024-06-17T10:47:10.613742Z","caller":"traceutil/trace.go:171","msg":"trace[1081912122] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1382; }","duration":"294.007757ms","start":"2024-06-17T10:47:10.319728Z","end":"2024-06-17T10:47:10.613735Z","steps":["trace[1081912122] 'agreement among raft nodes before linearized reading'  (duration: 293.917336ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-17T10:47:10.613918Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.859643ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6370"}
	{"level":"info","ts":"2024-06-17T10:47:10.613956Z","caller":"traceutil/trace.go:171","msg":"trace[237034393] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1382; }","duration":"108.918493ms","start":"2024-06-17T10:47:10.505032Z","end":"2024-06-17T10:47:10.613951Z","steps":["trace[237034393] 'agreement among raft nodes before linearized reading'  (duration: 108.837542ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-17T10:47:20.812309Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.144198ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15705899242528180915 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/local-path-storage/helper-pod-create-pvc-f296beee-9e3b-4086-a049-00efb1334af0.17d9c4e4df90c375\" mod_revision:1242 > success:<request_delete_range:<key:\"/registry/events/local-path-storage/helper-pod-create-pvc-f296beee-9e3b-4086-a049-00efb1334af0.17d9c4e4df90c375\" > > failure:<request_range:<key:\"/registry/events/local-path-storage/helper-pod-create-pvc-f296beee-9e3b-4086-a049-00efb1334af0.17d9c4e4df90c375\" > >>","response":"size:18"}
	{"level":"info","ts":"2024-06-17T10:47:20.812832Z","caller":"traceutil/trace.go:171","msg":"trace[1618511398] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1455; }","duration":"194.030768ms","start":"2024-06-17T10:47:20.618785Z","end":"2024-06-17T10:47:20.812816Z","steps":["trace[1618511398] 'process raft request'  (duration: 86.100799ms)","trace[1618511398] 'compare'  (duration: 106.912298ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-17T10:47:46.844204Z","caller":"traceutil/trace.go:171","msg":"trace[977501368] transaction","detail":"{read_only:false; response_revision:1522; number_of_response:1; }","duration":"330.583817ms","start":"2024-06-17T10:47:46.513583Z","end":"2024-06-17T10:47:46.844167Z","steps":["trace[977501368] 'process raft request'  (duration: 330.466371ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-17T10:47:46.844417Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-17T10:47:46.513569Z","time spent":"330.740164ms","remote":"127.0.0.1:50872","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1509 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2024-06-17T10:47:46.848894Z","caller":"traceutil/trace.go:171","msg":"trace[480039458] transaction","detail":"{read_only:false; response_revision:1523; number_of_response:1; }","duration":"182.123885ms","start":"2024-06-17T10:47:46.666756Z","end":"2024-06-17T10:47:46.84888Z","steps":["trace[480039458] 'process raft request'  (duration: 182.060188ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T10:47:51.923721Z","caller":"traceutil/trace.go:171","msg":"trace[1659140217] transaction","detail":"{read_only:false; response_revision:1538; number_of_response:1; }","duration":"143.46256ms","start":"2024-06-17T10:47:51.780242Z","end":"2024-06-17T10:47:51.923704Z","steps":["trace[1659140217] 'process raft request'  (duration: 143.267563ms)"],"step_count":1}
	
	
	==> gcp-auth [ebb02f1a32711f02bfa7db92ba295caa4c8d9d29515048c64d2de9e327609872] <==
	2024/06/17 10:46:50 GCP Auth Webhook started!
	2024/06/17 10:46:57 Ready to marshal response ...
	2024/06/17 10:46:57 Ready to write response ...
	2024/06/17 10:46:57 Ready to marshal response ...
	2024/06/17 10:46:57 Ready to write response ...
	2024/06/17 10:46:57 Ready to marshal response ...
	2024/06/17 10:46:57 Ready to write response ...
	2024/06/17 10:47:02 Ready to marshal response ...
	2024/06/17 10:47:02 Ready to write response ...
	2024/06/17 10:47:03 Ready to marshal response ...
	2024/06/17 10:47:03 Ready to write response ...
	2024/06/17 10:47:03 Ready to marshal response ...
	2024/06/17 10:47:03 Ready to write response ...
	2024/06/17 10:47:03 Ready to marshal response ...
	2024/06/17 10:47:03 Ready to write response ...
	2024/06/17 10:47:07 Ready to marshal response ...
	2024/06/17 10:47:07 Ready to write response ...
	2024/06/17 10:47:14 Ready to marshal response ...
	2024/06/17 10:47:14 Ready to write response ...
	2024/06/17 10:47:42 Ready to marshal response ...
	2024/06/17 10:47:42 Ready to write response ...
	2024/06/17 10:48:06 Ready to marshal response ...
	2024/06/17 10:48:06 Ready to write response ...
	2024/06/17 10:49:28 Ready to marshal response ...
	2024/06/17 10:49:28 Ready to write response ...
	
	
	==> kernel <==
	 10:52:49 up 8 min,  0 users,  load average: 0.14, 0.60, 0.45
	Linux addons-465706 5.10.207 #1 SMP Tue Jun 11 00:16:05 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6981a9b7f93a47089761b31a184fb2e378c9384b3ff6a8fd6b36c028808740f0] <==
	E0617 10:47:13.699273       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.93.131:443/apis/metrics.k8s.io/v1beta1: Get "https://10.103.93.131:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.103.93.131:443: connect: connection refused
	W0617 10:47:13.703824       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 10:47:13.704065       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0617 10:47:13.706878       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.93.131:443/apis/metrics.k8s.io/v1beta1: Get "https://10.103.93.131:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.103.93.131:443: connect: connection refused
	E0617 10:47:13.708194       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.93.131:443/apis/metrics.k8s.io/v1beta1: Get "https://10.103.93.131:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.103.93.131:443: connect: connection refused
	E0617 10:47:13.718754       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.93.131:443/apis/metrics.k8s.io/v1beta1: Get "https://10.103.93.131:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.103.93.131:443: connect: connection refused
	I0617 10:47:13.830660       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0617 10:47:30.950676       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0617 10:47:53.197343       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0617 10:48:04.047009       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0617 10:48:05.076506       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0617 10:48:22.722027       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0617 10:48:22.722087       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0617 10:48:22.798403       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0617 10:48:22.798588       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0617 10:48:22.839107       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0617 10:48:22.839206       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0617 10:48:22.874817       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0617 10:48:22.874868       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0617 10:48:23.821026       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0617 10:48:23.875207       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0617 10:48:23.879906       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0617 10:49:28.647296       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.157.242"}
	E0617 10:49:31.679293       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [a2d1cd8b31398e19d08dd55347ee59581d9b378824a6b55badfacfb07bd3e6a3] <==
	W0617 10:50:56.662589       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0617 10:50:56.662628       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0617 10:51:04.013124       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0617 10:51:04.013272       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0617 10:51:04.160349       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0617 10:51:04.160533       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0617 10:51:16.269338       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0617 10:51:16.269508       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0617 10:51:40.447785       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0617 10:51:40.448064       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0617 10:51:47.068608       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0617 10:51:47.068681       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0617 10:51:51.468378       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0617 10:51:51.468469       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0617 10:52:11.396512       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0617 10:52:11.396601       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0617 10:52:15.604856       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0617 10:52:15.604989       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0617 10:52:37.413899       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0617 10:52:37.413928       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0617 10:52:44.148137       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0617 10:52:44.148276       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0617 10:52:47.008736       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0617 10:52:47.008853       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0617 10:52:47.836916       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="11.17µs"
	
	
	==> kube-proxy [8182630f40dc3077251c143e1d0843b74fc2f903db0c6bb7de61a5003651ce42] <==
	I0617 10:45:31.263957       1 server_linux.go:69] "Using iptables proxy"
	I0617 10:45:31.301600       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.165"]
	I0617 10:45:32.137792       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0617 10:45:32.137854       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0617 10:45:32.137892       1 server_linux.go:165] "Using iptables Proxier"
	I0617 10:45:32.219710       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0617 10:45:32.219930       1 server.go:872] "Version info" version="v1.30.1"
	I0617 10:45:32.219946       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0617 10:45:32.222800       1 config.go:192] "Starting service config controller"
	I0617 10:45:32.222831       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0617 10:45:32.222853       1 config.go:101] "Starting endpoint slice config controller"
	I0617 10:45:32.222857       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0617 10:45:32.224794       1 config.go:319] "Starting node config controller"
	I0617 10:45:32.224826       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0617 10:45:32.323275       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0617 10:45:32.323340       1 shared_informer.go:320] Caches are synced for service config
	I0617 10:45:32.325561       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [bbbcc46101fca247086c67e958f8de3c1a294b6b24e57f2589442f78e8f1ea91] <==
	W0617 10:45:10.537775       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0617 10:45:10.537825       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0617 10:45:11.412156       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0617 10:45:11.412204       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0617 10:45:11.445783       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0617 10:45:11.445830       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0617 10:45:11.491296       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0617 10:45:11.491343       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0617 10:45:11.508083       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0617 10:45:11.508218       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0617 10:45:11.533380       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0617 10:45:11.533520       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0617 10:45:11.543182       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0617 10:45:11.543253       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0617 10:45:11.584971       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0617 10:45:11.585123       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0617 10:45:11.603102       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0617 10:45:11.604190       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0617 10:45:11.642145       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0617 10:45:11.642190       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0617 10:45:11.715680       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0617 10:45:11.715728       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0617 10:45:11.767491       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0617 10:45:11.767536       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0617 10:45:13.931004       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 17 10:49:35 addons-465706 kubelet[1274]: I0617 10:49:35.111807    1274 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3c3fd4a-57bb-4d89-b6b9-57f6991a9c06" path="/var/lib/kubelet/pods/a3c3fd4a-57bb-4d89-b6b9-57f6991a9c06/volumes"
	Jun 17 10:50:13 addons-465706 kubelet[1274]: E0617 10:50:13.130048    1274 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 17 10:50:13 addons-465706 kubelet[1274]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 17 10:50:13 addons-465706 kubelet[1274]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 17 10:50:13 addons-465706 kubelet[1274]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 10:50:13 addons-465706 kubelet[1274]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 17 10:50:13 addons-465706 kubelet[1274]: I0617 10:50:13.996719    1274 scope.go:117] "RemoveContainer" containerID="ae66f61d519ad92c227ed1b1c7188404acf2183222e584e0da4aa8bf02cba66e"
	Jun 17 10:50:14 addons-465706 kubelet[1274]: I0617 10:50:14.025016    1274 scope.go:117] "RemoveContainer" containerID="3ed31360eee8f1ce4a6a26e862ae9d57a7be3e1813fd2124ed07b9983809c786"
	Jun 17 10:51:13 addons-465706 kubelet[1274]: E0617 10:51:13.128645    1274 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 17 10:51:13 addons-465706 kubelet[1274]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 17 10:51:13 addons-465706 kubelet[1274]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 17 10:51:13 addons-465706 kubelet[1274]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 10:51:13 addons-465706 kubelet[1274]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 17 10:52:13 addons-465706 kubelet[1274]: E0617 10:52:13.129794    1274 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 17 10:52:13 addons-465706 kubelet[1274]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 17 10:52:13 addons-465706 kubelet[1274]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 17 10:52:13 addons-465706 kubelet[1274]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 10:52:13 addons-465706 kubelet[1274]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 17 10:52:47 addons-465706 kubelet[1274]: I0617 10:52:47.872244    1274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-86c47465fc-xb8zr" podStartSLOduration=198.014005813 podStartE2EDuration="3m19.872215719s" podCreationTimestamp="2024-06-17 10:49:28 +0000 UTC" firstStartedPulling="2024-06-17 10:49:29.046892768 +0000 UTC m=+256.058235493" lastFinishedPulling="2024-06-17 10:49:30.905102664 +0000 UTC m=+257.916445399" observedRunningTime="2024-06-17 10:49:31.866938106 +0000 UTC m=+258.878280851" watchObservedRunningTime="2024-06-17 10:52:47.872215719 +0000 UTC m=+454.883558464"
	Jun 17 10:52:49 addons-465706 kubelet[1274]: I0617 10:52:49.348999    1274 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2hpt\" (UniqueName: \"kubernetes.io/projected/9cffe86c-6fa6-4955-a42c-234714e1bd11-kube-api-access-r2hpt\") pod \"9cffe86c-6fa6-4955-a42c-234714e1bd11\" (UID: \"9cffe86c-6fa6-4955-a42c-234714e1bd11\") "
	Jun 17 10:52:49 addons-465706 kubelet[1274]: I0617 10:52:49.349079    1274 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9cffe86c-6fa6-4955-a42c-234714e1bd11-tmp-dir\") pod \"9cffe86c-6fa6-4955-a42c-234714e1bd11\" (UID: \"9cffe86c-6fa6-4955-a42c-234714e1bd11\") "
	Jun 17 10:52:49 addons-465706 kubelet[1274]: I0617 10:52:49.349504    1274 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cffe86c-6fa6-4955-a42c-234714e1bd11-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "9cffe86c-6fa6-4955-a42c-234714e1bd11" (UID: "9cffe86c-6fa6-4955-a42c-234714e1bd11"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jun 17 10:52:49 addons-465706 kubelet[1274]: I0617 10:52:49.353626    1274 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cffe86c-6fa6-4955-a42c-234714e1bd11-kube-api-access-r2hpt" (OuterVolumeSpecName: "kube-api-access-r2hpt") pod "9cffe86c-6fa6-4955-a42c-234714e1bd11" (UID: "9cffe86c-6fa6-4955-a42c-234714e1bd11"). InnerVolumeSpecName "kube-api-access-r2hpt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 17 10:52:49 addons-465706 kubelet[1274]: I0617 10:52:49.450102    1274 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-r2hpt\" (UniqueName: \"kubernetes.io/projected/9cffe86c-6fa6-4955-a42c-234714e1bd11-kube-api-access-r2hpt\") on node \"addons-465706\" DevicePath \"\""
	Jun 17 10:52:49 addons-465706 kubelet[1274]: I0617 10:52:49.450138    1274 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9cffe86c-6fa6-4955-a42c-234714e1bd11-tmp-dir\") on node \"addons-465706\" DevicePath \"\""
	
	
	==> storage-provisioner [d2ffe2c0522573c6fb44e03297f5ade6ae49c1b346b92c335d0179921042fc45] <==
	I0617 10:45:36.654583       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0617 10:45:36.697233       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0617 10:45:36.697372       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0617 10:45:36.735672       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0617 10:45:36.745870       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6b81ea56-8f03-416e-952e-e31581071fc3", APIVersion:"v1", ResourceVersion:"718", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-465706_ea7e3310-827f-4f80-9650-4d454d888578 became leader
	I0617 10:45:36.747608       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-465706_ea7e3310-827f-4f80-9650-4d454d888578!
	I0617 10:45:36.850940       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-465706_ea7e3310-827f-4f80-9650-4d454d888578!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-465706 -n addons-465706
helpers_test.go:261: (dbg) Run:  kubectl --context addons-465706 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (332.77s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.37s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-465706
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-465706: exit status 82 (2m0.454840112s)

                                                
                                                
-- stdout --
	* Stopping node "addons-465706"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-465706" : exit status 82
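The GUEST_STOP_TIMEOUT above means minikube gave up after the two-minute stop window with the kvm2 VM still reported as "Running". As a minimal follow-up sketch (assuming the addons-465706 profile still exists on the build host), the diagnostics the error box asks for could be collected with:

	# Capture minikube's own logs for this profile, as the error box suggests
	out/minikube-linux-amd64 -p addons-465706 logs --file=logs.txt
	# Confirm what minikube currently believes the VM state to be
	out/minikube-linux-amd64 -p addons-465706 status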
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-465706
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-465706: exit status 11 (21.630881451s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.165:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-465706" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-465706
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-465706: exit status 11 (6.144189243s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.165:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-465706" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-465706
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-465706: exit status 11 (6.143936846s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.165:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-465706" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.37s)
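All three addon commands above fail the same way: the pre-flight "check paused" step cannot open an SSH session to 192.168.39.165:22 ("no route to host"), so the VM left behind by the timed-out stop is unreachable. A hedged reachability check, assuming shell access to the build host, might look like:

	# Is the VM's SSH port reachable at all?
	nc -vz 192.168.39.165 22
	# Let minikube attempt the same SSH session the addon commands need
	out/minikube-linux-amd64 -p addons-465706 ssh -- true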

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-303428 ssh pgrep buildkitd: exit status 1 (177.042444ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 image build -t localhost/my-image:functional-303428 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-303428 image build -t localhost/my-image:functional-303428 testdata/build --alsologtostderr: (2.393677825s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-303428 image build -t localhost/my-image:functional-303428 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 65fed7e97f9
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-303428
--> a78bb1f237f
Successfully tagged localhost/my-image:functional-303428
a78bb1f237fe47d614edf0f1957dd00496d74322c8fe2fa1f49b373a8cb037c2
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-303428 image build -t localhost/my-image:functional-303428 testdata/build --alsologtostderr:
I0617 10:59:31.461517  130022 out.go:291] Setting OutFile to fd 1 ...
I0617 10:59:31.461818  130022 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 10:59:31.461830  130022 out.go:304] Setting ErrFile to fd 2...
I0617 10:59:31.461834  130022 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 10:59:31.462021  130022 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
I0617 10:59:31.462568  130022 config.go:182] Loaded profile config "functional-303428": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0617 10:59:31.463088  130022 config.go:182] Loaded profile config "functional-303428": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0617 10:59:31.463468  130022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0617 10:59:31.463522  130022 main.go:141] libmachine: Launching plugin server for driver kvm2
I0617 10:59:31.478306  130022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36447
I0617 10:59:31.478823  130022 main.go:141] libmachine: () Calling .GetVersion
I0617 10:59:31.479354  130022 main.go:141] libmachine: Using API Version  1
I0617 10:59:31.479375  130022 main.go:141] libmachine: () Calling .SetConfigRaw
I0617 10:59:31.479711  130022 main.go:141] libmachine: () Calling .GetMachineName
I0617 10:59:31.479948  130022 main.go:141] libmachine: (functional-303428) Calling .GetState
I0617 10:59:31.481781  130022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0617 10:59:31.481821  130022 main.go:141] libmachine: Launching plugin server for driver kvm2
I0617 10:59:31.496012  130022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33837
I0617 10:59:31.496415  130022 main.go:141] libmachine: () Calling .GetVersion
I0617 10:59:31.496982  130022 main.go:141] libmachine: Using API Version  1
I0617 10:59:31.497012  130022 main.go:141] libmachine: () Calling .SetConfigRaw
I0617 10:59:31.497330  130022 main.go:141] libmachine: () Calling .GetMachineName
I0617 10:59:31.497568  130022 main.go:141] libmachine: (functional-303428) Calling .DriverName
I0617 10:59:31.497756  130022 ssh_runner.go:195] Run: systemctl --version
I0617 10:59:31.497787  130022 main.go:141] libmachine: (functional-303428) Calling .GetSSHHostname
I0617 10:59:31.500830  130022 main.go:141] libmachine: (functional-303428) DBG | domain functional-303428 has defined MAC address 52:54:00:b7:2d:33 in network mk-functional-303428
I0617 10:59:31.501273  130022 main.go:141] libmachine: (functional-303428) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:2d:33", ip: ""} in network mk-functional-303428: {Iface:virbr1 ExpiryTime:2024-06-17 11:56:50 +0000 UTC Type:0 Mac:52:54:00:b7:2d:33 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:functional-303428 Clientid:01:52:54:00:b7:2d:33}
I0617 10:59:31.501295  130022 main.go:141] libmachine: (functional-303428) DBG | domain functional-303428 has defined IP address 192.168.39.25 and MAC address 52:54:00:b7:2d:33 in network mk-functional-303428
I0617 10:59:31.501486  130022 main.go:141] libmachine: (functional-303428) Calling .GetSSHPort
I0617 10:59:31.501667  130022 main.go:141] libmachine: (functional-303428) Calling .GetSSHKeyPath
I0617 10:59:31.501854  130022 main.go:141] libmachine: (functional-303428) Calling .GetSSHUsername
I0617 10:59:31.502011  130022 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/functional-303428/id_rsa Username:docker}
I0617 10:59:31.586983  130022 build_images.go:161] Building image from path: /tmp/build.3876877591.tar
I0617 10:59:31.587077  130022 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0617 10:59:31.606995  130022 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3876877591.tar
I0617 10:59:31.614688  130022 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3876877591.tar: stat -c "%s %y" /var/lib/minikube/build/build.3876877591.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3876877591.tar': No such file or directory
I0617 10:59:31.614717  130022 ssh_runner.go:362] scp /tmp/build.3876877591.tar --> /var/lib/minikube/build/build.3876877591.tar (3072 bytes)
I0617 10:59:31.640143  130022 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3876877591
I0617 10:59:31.649983  130022 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3876877591 -xf /var/lib/minikube/build/build.3876877591.tar
I0617 10:59:31.664222  130022 crio.go:315] Building image: /var/lib/minikube/build/build.3876877591
I0617 10:59:31.664300  130022 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-303428 /var/lib/minikube/build/build.3876877591 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0617 10:59:33.753320  130022 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-303428 /var/lib/minikube/build/build.3876877591 --cgroup-manager=cgroupfs: (2.08897963s)
I0617 10:59:33.753394  130022 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3876877591
I0617 10:59:33.779523  130022 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3876877591.tar
I0617 10:59:33.804646  130022 build_images.go:217] Built localhost/my-image:functional-303428 from /tmp/build.3876877591.tar
I0617 10:59:33.804682  130022 build_images.go:133] succeeded building to: functional-303428
I0617 10:59:33.804687  130022 build_images.go:134] failed building to: 
I0617 10:59:33.804711  130022 main.go:141] libmachine: Making call to close driver server
I0617 10:59:33.804722  130022 main.go:141] libmachine: (functional-303428) Calling .Close
I0617 10:59:33.805070  130022 main.go:141] libmachine: (functional-303428) DBG | Closing plugin on server side
I0617 10:59:33.805108  130022 main.go:141] libmachine: Successfully made call to close driver server
I0617 10:59:33.805121  130022 main.go:141] libmachine: Making call to close connection to plugin binary
I0617 10:59:33.805131  130022 main.go:141] libmachine: Making call to close driver server
I0617 10:59:33.805153  130022 main.go:141] libmachine: (functional-303428) Calling .Close
I0617 10:59:33.805397  130022 main.go:141] libmachine: (functional-303428) DBG | Closing plugin on server side
I0617 10:59:33.805470  130022 main.go:141] libmachine: Successfully made call to close driver server
I0617 10:59:33.805495  130022 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 image ls
E0617 10:59:35.015561  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-303428 image ls: (2.31928408s)
functional_test.go:442: expected "localhost/my-image:functional-303428" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (4.89s)
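For reference, the failing assertion above (functional_test.go:447/442) boils down to running `out/minikube-linux-amd64 -p functional-303428 image ls` after the podman build completes and checking that the freshly built tag shows up in the listing. Below is a minimal standalone sketch of that verification step, using the binary path, profile and tag seen in this log; the helper name checkImageLoaded is illustrative and is not part of the minikube test code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// checkImageLoaded runs `minikube -p <profile> image ls` and reports whether
// the expected tag appears in the output, mirroring the check that failed above.
func checkImageLoaded(minikubeBin, profile, image string) (bool, error) {
	out, err := exec.Command(minikubeBin, "-p", profile, "image", "ls").CombinedOutput()
	if err != nil {
		return false, fmt.Errorf("image ls failed: %v\n%s", err, out)
	}
	return strings.Contains(string(out), image), nil
}

func main() {
	ok, err := checkImageLoaded("out/minikube-linux-amd64", "functional-303428",
		"localhost/my-image:functional-303428")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("image present:", ok)
}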

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (10.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 image load --daemon gcr.io/google-containers/addon-resizer:functional-303428 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-303428 image load --daemon gcr.io/google-containers/addon-resizer:functional-303428 --alsologtostderr: (8.573659427s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-303428 image ls: (2.233205129s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-303428" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (10.81s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 node stop m02 -v=7 --alsologtostderr
E0617 11:04:38.361965  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
E0617 11:05:19.322497  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-064080 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.460949644s)

                                                
                                                
-- stdout --
	* Stopping node "ha-064080-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 11:04:18.803010  134509 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:04:18.803161  134509 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:04:18.803172  134509 out.go:304] Setting ErrFile to fd 2...
	I0617 11:04:18.803177  134509 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:04:18.803327  134509 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:04:18.803580  134509 mustload.go:65] Loading cluster: ha-064080
	I0617 11:04:18.804039  134509 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:04:18.804060  134509 stop.go:39] StopHost: ha-064080-m02
	I0617 11:04:18.804448  134509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:04:18.804503  134509 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:04:18.820316  134509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35311
	I0617 11:04:18.820749  134509 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:04:18.821241  134509 main.go:141] libmachine: Using API Version  1
	I0617 11:04:18.821264  134509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:04:18.821641  134509 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:04:18.823967  134509 out.go:177] * Stopping node "ha-064080-m02"  ...
	I0617 11:04:18.825188  134509 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0617 11:04:18.825212  134509 main.go:141] libmachine: (ha-064080-m02) Calling .DriverName
	I0617 11:04:18.825450  134509 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0617 11:04:18.825490  134509 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHHostname
	I0617 11:04:18.828106  134509 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:04:18.828552  134509 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:04:18.828581  134509 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:04:18.828716  134509 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHPort
	I0617 11:04:18.828917  134509 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:04:18.829151  134509 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHUsername
	I0617 11:04:18.829325  134509 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02/id_rsa Username:docker}
	I0617 11:04:18.912715  134509 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0617 11:04:18.966004  134509 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0617 11:04:19.023029  134509 main.go:141] libmachine: Stopping "ha-064080-m02"...
	I0617 11:04:19.023073  134509 main.go:141] libmachine: (ha-064080-m02) Calling .GetState
	I0617 11:04:19.024864  134509 main.go:141] libmachine: (ha-064080-m02) Calling .Stop
	I0617 11:04:19.028531  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 0/120
	I0617 11:04:20.030446  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 1/120
	I0617 11:04:21.032142  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 2/120
	I0617 11:04:22.033565  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 3/120
	I0617 11:04:23.034807  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 4/120
	I0617 11:04:24.036670  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 5/120
	I0617 11:04:25.039050  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 6/120
	I0617 11:04:26.040264  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 7/120
	I0617 11:04:27.041523  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 8/120
	I0617 11:04:28.042920  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 9/120
	I0617 11:04:29.044876  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 10/120
	I0617 11:04:30.046275  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 11/120
	I0617 11:04:31.047771  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 12/120
	I0617 11:04:32.049964  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 13/120
	I0617 11:04:33.051247  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 14/120
	I0617 11:04:34.053205  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 15/120
	I0617 11:04:35.054699  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 16/120
	I0617 11:04:36.056043  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 17/120
	I0617 11:04:37.058240  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 18/120
	I0617 11:04:38.060384  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 19/120
	I0617 11:04:39.062489  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 20/120
	I0617 11:04:40.064086  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 21/120
	I0617 11:04:41.065964  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 22/120
	I0617 11:04:42.067234  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 23/120
	I0617 11:04:43.068455  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 24/120
	I0617 11:04:44.070171  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 25/120
	I0617 11:04:45.071592  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 26/120
	I0617 11:04:46.072809  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 27/120
	I0617 11:04:47.074227  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 28/120
	I0617 11:04:48.075597  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 29/120
	I0617 11:04:49.077651  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 30/120
	I0617 11:04:50.079336  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 31/120
	I0617 11:04:51.080525  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 32/120
	I0617 11:04:52.081996  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 33/120
	I0617 11:04:53.083487  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 34/120
	I0617 11:04:54.085499  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 35/120
	I0617 11:04:55.086949  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 36/120
	I0617 11:04:56.088144  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 37/120
	I0617 11:04:57.089511  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 38/120
	I0617 11:04:58.091453  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 39/120
	I0617 11:04:59.093241  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 40/120
	I0617 11:05:00.094396  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 41/120
	I0617 11:05:01.095847  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 42/120
	I0617 11:05:02.097142  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 43/120
	I0617 11:05:03.098525  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 44/120
	I0617 11:05:04.100666  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 45/120
	I0617 11:05:05.102042  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 46/120
	I0617 11:05:06.103286  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 47/120
	I0617 11:05:07.105382  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 48/120
	I0617 11:05:08.106656  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 49/120
	I0617 11:05:09.107956  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 50/120
	I0617 11:05:10.109342  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 51/120
	I0617 11:05:11.111694  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 52/120
	I0617 11:05:12.113343  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 53/120
	I0617 11:05:13.115007  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 54/120
	I0617 11:05:14.116529  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 55/120
	I0617 11:05:15.117901  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 56/120
	I0617 11:05:16.119977  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 57/120
	I0617 11:05:17.121844  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 58/120
	I0617 11:05:18.123443  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 59/120
	I0617 11:05:19.124984  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 60/120
	I0617 11:05:20.126431  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 61/120
	I0617 11:05:21.128513  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 62/120
	I0617 11:05:22.130008  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 63/120
	I0617 11:05:23.131545  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 64/120
	I0617 11:05:24.133604  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 65/120
	I0617 11:05:25.135856  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 66/120
	I0617 11:05:26.137203  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 67/120
	I0617 11:05:27.138738  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 68/120
	I0617 11:05:28.140115  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 69/120
	I0617 11:05:29.142244  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 70/120
	I0617 11:05:30.143365  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 71/120
	I0617 11:05:31.144636  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 72/120
	I0617 11:05:32.145948  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 73/120
	I0617 11:05:33.147991  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 74/120
	I0617 11:05:34.150062  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 75/120
	I0617 11:05:35.151589  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 76/120
	I0617 11:05:36.152909  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 77/120
	I0617 11:05:37.154237  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 78/120
	I0617 11:05:38.155540  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 79/120
	I0617 11:05:39.156883  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 80/120
	I0617 11:05:40.158781  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 81/120
	I0617 11:05:41.160549  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 82/120
	I0617 11:05:42.162325  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 83/120
	I0617 11:05:43.163762  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 84/120
	I0617 11:05:44.165606  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 85/120
	I0617 11:05:45.167062  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 86/120
	I0617 11:05:46.168299  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 87/120
	I0617 11:05:47.169964  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 88/120
	I0617 11:05:48.171275  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 89/120
	I0617 11:05:49.173214  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 90/120
	I0617 11:05:50.174608  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 91/120
	I0617 11:05:51.175829  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 92/120
	I0617 11:05:52.178030  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 93/120
	I0617 11:05:53.179224  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 94/120
	I0617 11:05:54.181146  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 95/120
	I0617 11:05:55.182554  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 96/120
	I0617 11:05:56.184015  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 97/120
	I0617 11:05:57.185246  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 98/120
	I0617 11:05:58.186628  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 99/120
	I0617 11:05:59.188665  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 100/120
	I0617 11:06:00.190084  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 101/120
	I0617 11:06:01.191510  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 102/120
	I0617 11:06:02.192952  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 103/120
	I0617 11:06:03.194480  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 104/120
	I0617 11:06:04.196593  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 105/120
	I0617 11:06:05.197950  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 106/120
	I0617 11:06:06.199163  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 107/120
	I0617 11:06:07.200475  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 108/120
	I0617 11:06:08.201727  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 109/120
	I0617 11:06:09.203875  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 110/120
	I0617 11:06:10.205092  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 111/120
	I0617 11:06:11.206324  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 112/120
	I0617 11:06:12.207637  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 113/120
	I0617 11:06:13.209838  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 114/120
	I0617 11:06:14.211692  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 115/120
	I0617 11:06:15.213952  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 116/120
	I0617 11:06:16.215201  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 117/120
	I0617 11:06:17.216520  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 118/120
	I0617 11:06:18.218147  134509 main.go:141] libmachine: (ha-064080-m02) Waiting for machine to stop 119/120
	I0617 11:06:19.219211  134509 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0617 11:06:19.219366  134509 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-064080 node stop m02 -v=7 --alsologtostderr": exit status 30
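The stderr above shows the shape of the failure: libmachine issues a single .Stop call, then polls .GetState once per second ("Waiting for machine to stop 0/120" through "119/120") and gives up with `unable to stop vm, current state "Running"` when the VM never leaves the Running state. A minimal sketch of that bounded-wait pattern follows; getState is a hypothetical stand-in for the driver's GetState call, not the real libmachine API.

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStopped polls the VM state once per second for up to maxAttempts,
// mirroring the "Waiting for machine to stop N/120" loop in the log above.
func waitForStopped(getState func() (string, error), maxAttempts int) error {
	for i := 0; i < maxAttempts; i++ {
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		state, err := getState()
		if err != nil {
			return err
		}
		if state == "Stopped" {
			return nil
		}
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Example: a VM that never stops, as in the failure above (3 attempts for brevity).
	err := waitForStopped(func() (string, error) { return "Running", nil }, 3)
	fmt.Println("stop err:", err)
}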
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-064080 status -v=7 --alsologtostderr: exit status 3 (19.163397702s)

                                                
                                                
-- stdout --
	ha-064080
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-064080-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-064080-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-064080-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 11:06:19.265038  134965 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:06:19.265322  134965 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:06:19.265331  134965 out.go:304] Setting ErrFile to fd 2...
	I0617 11:06:19.265336  134965 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:06:19.265524  134965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:06:19.265725  134965 out.go:298] Setting JSON to false
	I0617 11:06:19.265752  134965 mustload.go:65] Loading cluster: ha-064080
	I0617 11:06:19.265886  134965 notify.go:220] Checking for updates...
	I0617 11:06:19.266274  134965 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:06:19.266296  134965 status.go:255] checking status of ha-064080 ...
	I0617 11:06:19.266899  134965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:19.266952  134965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:19.284699  134965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35607
	I0617 11:06:19.285104  134965 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:19.285608  134965 main.go:141] libmachine: Using API Version  1
	I0617 11:06:19.285628  134965 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:19.286068  134965 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:19.286304  134965 main.go:141] libmachine: (ha-064080) Calling .GetState
	I0617 11:06:19.287982  134965 status.go:330] ha-064080 host status = "Running" (err=<nil>)
	I0617 11:06:19.288000  134965 host.go:66] Checking if "ha-064080" exists ...
	I0617 11:06:19.288322  134965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:19.288371  134965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:19.303201  134965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44541
	I0617 11:06:19.303635  134965 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:19.304074  134965 main.go:141] libmachine: Using API Version  1
	I0617 11:06:19.304093  134965 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:19.304403  134965 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:19.304580  134965 main.go:141] libmachine: (ha-064080) Calling .GetIP
	I0617 11:06:19.307387  134965 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:06:19.307846  134965 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:06:19.307877  134965 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:06:19.307993  134965 host.go:66] Checking if "ha-064080" exists ...
	I0617 11:06:19.308341  134965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:19.308386  134965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:19.323658  134965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46219
	I0617 11:06:19.323985  134965 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:19.324385  134965 main.go:141] libmachine: Using API Version  1
	I0617 11:06:19.324406  134965 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:19.324673  134965 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:19.324880  134965 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:06:19.325051  134965 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:06:19.325084  134965 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:06:19.327438  134965 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:06:19.327872  134965 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:06:19.327888  134965 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:06:19.328057  134965 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:06:19.328230  134965 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:06:19.328368  134965 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:06:19.328504  134965 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:06:19.413085  134965 ssh_runner.go:195] Run: systemctl --version
	I0617 11:06:19.420362  134965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:06:19.439338  134965 kubeconfig.go:125] found "ha-064080" server: "https://192.168.39.254:8443"
	I0617 11:06:19.439381  134965 api_server.go:166] Checking apiserver status ...
	I0617 11:06:19.439418  134965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 11:06:19.456780  134965 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1212/cgroup
	W0617 11:06:19.466747  134965 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1212/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0617 11:06:19.466788  134965 ssh_runner.go:195] Run: ls
	I0617 11:06:19.471252  134965 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0617 11:06:19.475165  134965 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0617 11:06:19.475188  134965 status.go:422] ha-064080 apiserver status = Running (err=<nil>)
	I0617 11:06:19.475198  134965 status.go:257] ha-064080 status: &{Name:ha-064080 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0617 11:06:19.475223  134965 status.go:255] checking status of ha-064080-m02 ...
	I0617 11:06:19.475526  134965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:19.475566  134965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:19.491673  134965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37303
	I0617 11:06:19.492077  134965 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:19.492577  134965 main.go:141] libmachine: Using API Version  1
	I0617 11:06:19.492596  134965 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:19.492982  134965 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:19.493160  134965 main.go:141] libmachine: (ha-064080-m02) Calling .GetState
	I0617 11:06:19.494572  134965 status.go:330] ha-064080-m02 host status = "Running" (err=<nil>)
	I0617 11:06:19.494587  134965 host.go:66] Checking if "ha-064080-m02" exists ...
	I0617 11:06:19.494880  134965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:19.494921  134965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:19.509394  134965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36505
	I0617 11:06:19.509789  134965 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:19.510217  134965 main.go:141] libmachine: Using API Version  1
	I0617 11:06:19.510236  134965 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:19.510585  134965 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:19.510782  134965 main.go:141] libmachine: (ha-064080-m02) Calling .GetIP
	I0617 11:06:19.513230  134965 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:06:19.513644  134965 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:06:19.513672  134965 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:06:19.513775  134965 host.go:66] Checking if "ha-064080-m02" exists ...
	I0617 11:06:19.514061  134965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:19.514102  134965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:19.528778  134965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35007
	I0617 11:06:19.529125  134965 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:19.529583  134965 main.go:141] libmachine: Using API Version  1
	I0617 11:06:19.529603  134965 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:19.529914  134965 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:19.530120  134965 main.go:141] libmachine: (ha-064080-m02) Calling .DriverName
	I0617 11:06:19.530282  134965 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:06:19.530300  134965 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHHostname
	I0617 11:06:19.532821  134965 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:06:19.533220  134965 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:06:19.533247  134965 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:06:19.533379  134965 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHPort
	I0617 11:06:19.533526  134965 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:06:19.533664  134965 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHUsername
	I0617 11:06:19.533839  134965 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02/id_rsa Username:docker}
	W0617 11:06:38.023660  134965 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.104:22: connect: no route to host
	W0617 11:06:38.023756  134965 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	E0617 11:06:38.023770  134965 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	I0617 11:06:38.023777  134965 status.go:257] ha-064080-m02 status: &{Name:ha-064080-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0617 11:06:38.023796  134965 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	I0617 11:06:38.023803  134965 status.go:255] checking status of ha-064080-m03 ...
	I0617 11:06:38.024112  134965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:38.024162  134965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:38.038899  134965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40663
	I0617 11:06:38.039382  134965 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:38.039835  134965 main.go:141] libmachine: Using API Version  1
	I0617 11:06:38.039855  134965 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:38.040170  134965 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:38.040391  134965 main.go:141] libmachine: (ha-064080-m03) Calling .GetState
	I0617 11:06:38.041869  134965 status.go:330] ha-064080-m03 host status = "Running" (err=<nil>)
	I0617 11:06:38.041889  134965 host.go:66] Checking if "ha-064080-m03" exists ...
	I0617 11:06:38.042180  134965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:38.042231  134965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:38.056282  134965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34415
	I0617 11:06:38.056620  134965 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:38.057116  134965 main.go:141] libmachine: Using API Version  1
	I0617 11:06:38.057142  134965 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:38.057452  134965 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:38.057665  134965 main.go:141] libmachine: (ha-064080-m03) Calling .GetIP
	I0617 11:06:38.060134  134965 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:06:38.060480  134965 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:06:38.060500  134965 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:06:38.060657  134965 host.go:66] Checking if "ha-064080-m03" exists ...
	I0617 11:06:38.060942  134965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:38.060974  134965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:38.075509  134965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38389
	I0617 11:06:38.075932  134965 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:38.076380  134965 main.go:141] libmachine: Using API Version  1
	I0617 11:06:38.076403  134965 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:38.076707  134965 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:38.076906  134965 main.go:141] libmachine: (ha-064080-m03) Calling .DriverName
	I0617 11:06:38.077094  134965 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:06:38.077118  134965 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHHostname
	I0617 11:06:38.080216  134965 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:06:38.080750  134965 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:06:38.080778  134965 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:06:38.080945  134965 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHPort
	I0617 11:06:38.081138  134965 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:06:38.081310  134965 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHUsername
	I0617 11:06:38.081466  134965 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03/id_rsa Username:docker}
	I0617 11:06:38.165386  134965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:06:38.183857  134965 kubeconfig.go:125] found "ha-064080" server: "https://192.168.39.254:8443"
	I0617 11:06:38.183884  134965 api_server.go:166] Checking apiserver status ...
	I0617 11:06:38.183922  134965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 11:06:38.198282  134965 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup
	W0617 11:06:38.206993  134965 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0617 11:06:38.207038  134965 ssh_runner.go:195] Run: ls
	I0617 11:06:38.211424  134965 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0617 11:06:38.215497  134965 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0617 11:06:38.215526  134965 status.go:422] ha-064080-m03 apiserver status = Running (err=<nil>)
	I0617 11:06:38.215534  134965 status.go:257] ha-064080-m03 status: &{Name:ha-064080-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0617 11:06:38.215548  134965 status.go:255] checking status of ha-064080-m04 ...
	I0617 11:06:38.215830  134965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:38.215870  134965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:38.231838  134965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35317
	I0617 11:06:38.232268  134965 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:38.232684  134965 main.go:141] libmachine: Using API Version  1
	I0617 11:06:38.232703  134965 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:38.233013  134965 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:38.233254  134965 main.go:141] libmachine: (ha-064080-m04) Calling .GetState
	I0617 11:06:38.234867  134965 status.go:330] ha-064080-m04 host status = "Running" (err=<nil>)
	I0617 11:06:38.234886  134965 host.go:66] Checking if "ha-064080-m04" exists ...
	I0617 11:06:38.235163  134965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:38.235206  134965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:38.251131  134965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42945
	I0617 11:06:38.251559  134965 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:38.252072  134965 main.go:141] libmachine: Using API Version  1
	I0617 11:06:38.252096  134965 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:38.252441  134965 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:38.252620  134965 main.go:141] libmachine: (ha-064080-m04) Calling .GetIP
	I0617 11:06:38.255640  134965 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:06:38.256103  134965 main.go:141] libmachine: (ha-064080-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:60:46", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:03:36 +0000 UTC Type:0 Mac:52:54:00:51:60:46 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-064080-m04 Clientid:01:52:54:00:51:60:46}
	I0617 11:06:38.256138  134965 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined IP address 192.168.39.167 and MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:06:38.256233  134965 host.go:66] Checking if "ha-064080-m04" exists ...
	I0617 11:06:38.256534  134965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:38.256569  134965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:38.272107  134965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41173
	I0617 11:06:38.272474  134965 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:38.272888  134965 main.go:141] libmachine: Using API Version  1
	I0617 11:06:38.272909  134965 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:38.273224  134965 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:38.273412  134965 main.go:141] libmachine: (ha-064080-m04) Calling .DriverName
	I0617 11:06:38.273581  134965 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:06:38.273602  134965 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHHostname
	I0617 11:06:38.276132  134965 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:06:38.276495  134965 main.go:141] libmachine: (ha-064080-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:60:46", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:03:36 +0000 UTC Type:0 Mac:52:54:00:51:60:46 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-064080-m04 Clientid:01:52:54:00:51:60:46}
	I0617 11:06:38.276524  134965 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined IP address 192.168.39.167 and MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:06:38.276675  134965 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHPort
	I0617 11:06:38.276851  134965 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHKeyPath
	I0617 11:06:38.276973  134965 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHUsername
	I0617 11:06:38.277103  134965 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m04/id_rsa Username:docker}
	I0617 11:06:38.365146  134965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:06:38.382373  134965 status.go:257] ha-064080-m04 status: &{Name:ha-064080-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-064080 status -v=7 --alsologtostderr" : exit status 3
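For context, the status probe in the stderr above checks each control-plane node in three steps: disk usage via `df -h /var` over SSH, kubelet via `systemctl is-active`, and the apiserver via an HTTPS GET on /healthz (which returned 200 "ok" for ha-064080 and ha-064080-m03, while m02 failed earlier at the SSH dial with "no route to host"). Below is a minimal sketch of the healthz step only; TLS verification is skipped here purely for illustration, whereas minikube itself validates against the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz performs the same kind of check logged above
// ("Checking apiserver healthz at https://192.168.39.254:8443/healthz ...")
// and treats a 200 response with body "ok" as healthy.
func probeHealthz(url string) (bool, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		// Illustration only: skip certificate verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := probeHealthz("https://192.168.39.254:8443/healthz")
	fmt.Println("apiserver healthy:", ok, "err:", err)
}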
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-064080 -n ha-064080
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-064080 logs -n 25: (1.431715834s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-064080 cp ha-064080-m03:/home/docker/cp-test.txt                              | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4010822866/001/cp-test_ha-064080-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n                                                                 | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-064080 cp ha-064080-m03:/home/docker/cp-test.txt                              | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080:/home/docker/cp-test_ha-064080-m03_ha-064080.txt                       |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n                                                                 | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n ha-064080 sudo cat                                              | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | /home/docker/cp-test_ha-064080-m03_ha-064080.txt                                 |           |         |         |                     |                     |
	| cp      | ha-064080 cp ha-064080-m03:/home/docker/cp-test.txt                              | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m02:/home/docker/cp-test_ha-064080-m03_ha-064080-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n                                                                 | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n ha-064080-m02 sudo cat                                          | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | /home/docker/cp-test_ha-064080-m03_ha-064080-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-064080 cp ha-064080-m03:/home/docker/cp-test.txt                              | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m04:/home/docker/cp-test_ha-064080-m03_ha-064080-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n                                                                 | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n ha-064080-m04 sudo cat                                          | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | /home/docker/cp-test_ha-064080-m03_ha-064080-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-064080 cp testdata/cp-test.txt                                                | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n                                                                 | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-064080 cp ha-064080-m04:/home/docker/cp-test.txt                              | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4010822866/001/cp-test_ha-064080-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n                                                                 | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-064080 cp ha-064080-m04:/home/docker/cp-test.txt                              | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080:/home/docker/cp-test_ha-064080-m04_ha-064080.txt                       |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n                                                                 | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n ha-064080 sudo cat                                              | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | /home/docker/cp-test_ha-064080-m04_ha-064080.txt                                 |           |         |         |                     |                     |
	| cp      | ha-064080 cp ha-064080-m04:/home/docker/cp-test.txt                              | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m02:/home/docker/cp-test_ha-064080-m04_ha-064080-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n                                                                 | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n ha-064080-m02 sudo cat                                          | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | /home/docker/cp-test_ha-064080-m04_ha-064080-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-064080 cp ha-064080-m04:/home/docker/cp-test.txt                              | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m03:/home/docker/cp-test_ha-064080-m04_ha-064080-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n                                                                 | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n ha-064080-m03 sudo cat                                          | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | /home/docker/cp-test_ha-064080-m04_ha-064080-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-064080 node stop m02 -v=7                                                     | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
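The audit rows above exercise minikube's multi-node copy-and-verify pattern: a file is copied between nodes with "cp" and read back over "ssh -n <node>". Run by hand against the same profile, the pair looks roughly like the sketch below; the -p flag placement and the paths are taken from the rows above, nothing else is assumed:

    # copy from node m04 to node m02
    minikube -p ha-064080 cp ha-064080-m04:/home/docker/cp-test.txt ha-064080-m02:/home/docker/cp-test_ha-064080-m04_ha-064080-m02.txt

    # read it back on m02 to confirm the transfer
    minikube -p ha-064080 ssh -n ha-064080-m02 sudo cat /home/docker/cp-test_ha-064080-m04_ha-064080-m02.txt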
	
	
	==> Last Start <==
	Log file created at: 2024/06/17 10:59:52
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0617 10:59:52.528854  130544 out.go:291] Setting OutFile to fd 1 ...
	I0617 10:59:52.529112  130544 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 10:59:52.529122  130544 out.go:304] Setting ErrFile to fd 2...
	I0617 10:59:52.529126  130544 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 10:59:52.529289  130544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 10:59:52.529863  130544 out.go:298] Setting JSON to false
	I0617 10:59:52.530769  130544 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2540,"bootTime":1718619453,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0617 10:59:52.530826  130544 start.go:139] virtualization: kvm guest
	I0617 10:59:52.532858  130544 out.go:177] * [ha-064080] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0617 10:59:52.534259  130544 out.go:177]   - MINIKUBE_LOCATION=19084
	I0617 10:59:52.534318  130544 notify.go:220] Checking for updates...
	I0617 10:59:52.535480  130544 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 10:59:52.536966  130544 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 10:59:52.538645  130544 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 10:59:52.539950  130544 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0617 10:59:52.541126  130544 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 10:59:52.542395  130544 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 10:59:52.577077  130544 out.go:177] * Using the kvm2 driver based on user configuration
	I0617 10:59:52.578302  130544 start.go:297] selected driver: kvm2
	I0617 10:59:52.578318  130544 start.go:901] validating driver "kvm2" against <nil>
	I0617 10:59:52.578333  130544 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 10:59:52.579044  130544 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 10:59:52.579144  130544 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19084-112967/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0617 10:59:52.595008  130544 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0617 10:59:52.595079  130544 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 10:59:52.595275  130544 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 10:59:52.595343  130544 cni.go:84] Creating CNI manager for ""
	I0617 10:59:52.595359  130544 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0617 10:59:52.595367  130544 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0617 10:59:52.595447  130544 start.go:340] cluster config:
	{Name:ha-064080 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-064080 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 10:59:52.595640  130544 iso.go:125] acquiring lock: {Name:mk4a199ad46ed9ee04de7b54caf7cc64218fe80c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 10:59:52.597416  130544 out.go:177] * Starting "ha-064080" primary control-plane node in "ha-064080" cluster
	I0617 10:59:52.598543  130544 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 10:59:52.598574  130544 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0617 10:59:52.598584  130544 cache.go:56] Caching tarball of preloaded images
	I0617 10:59:52.598670  130544 preload.go:173] Found /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0617 10:59:52.598684  130544 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0617 10:59:52.598987  130544 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/config.json ...
	I0617 10:59:52.599010  130544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/config.json: {Name:mk551857841548380a629a0aa2b54bb72637dca2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 10:59:52.599155  130544 start.go:360] acquireMachinesLock for ha-064080: {Name:mk519b8956d160a9d2b042f25b899a5ee0efa72e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 10:59:52.599191  130544 start.go:364] duration metric: took 19.793µs to acquireMachinesLock for "ha-064080"
	I0617 10:59:52.599213  130544 start.go:93] Provisioning new machine with config: &{Name:ha-064080 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-064080 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 10:59:52.599267  130544 start.go:125] createHost starting for "" (driver="kvm2")
	I0617 10:59:52.600691  130544 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0617 10:59:52.600829  130544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:59:52.600876  130544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:59:52.614981  130544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39541
	I0617 10:59:52.615398  130544 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:59:52.615959  130544 main.go:141] libmachine: Using API Version  1
	I0617 10:59:52.615985  130544 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:59:52.616334  130544 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:59:52.616509  130544 main.go:141] libmachine: (ha-064080) Calling .GetMachineName
	I0617 10:59:52.616668  130544 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 10:59:52.616808  130544 start.go:159] libmachine.API.Create for "ha-064080" (driver="kvm2")
	I0617 10:59:52.616838  130544 client.go:168] LocalClient.Create starting
	I0617 10:59:52.616870  130544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem
	I0617 10:59:52.616902  130544 main.go:141] libmachine: Decoding PEM data...
	I0617 10:59:52.616917  130544 main.go:141] libmachine: Parsing certificate...
	I0617 10:59:52.616977  130544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem
	I0617 10:59:52.616994  130544 main.go:141] libmachine: Decoding PEM data...
	I0617 10:59:52.617007  130544 main.go:141] libmachine: Parsing certificate...
	I0617 10:59:52.617023  130544 main.go:141] libmachine: Running pre-create checks...
	I0617 10:59:52.617038  130544 main.go:141] libmachine: (ha-064080) Calling .PreCreateCheck
	I0617 10:59:52.617327  130544 main.go:141] libmachine: (ha-064080) Calling .GetConfigRaw
	I0617 10:59:52.617679  130544 main.go:141] libmachine: Creating machine...
	I0617 10:59:52.617692  130544 main.go:141] libmachine: (ha-064080) Calling .Create
	I0617 10:59:52.617804  130544 main.go:141] libmachine: (ha-064080) Creating KVM machine...
	I0617 10:59:52.618960  130544 main.go:141] libmachine: (ha-064080) DBG | found existing default KVM network
	I0617 10:59:52.619681  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 10:59:52.619532  130567 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015470}
	I0617 10:59:52.619699  130544 main.go:141] libmachine: (ha-064080) DBG | created network xml: 
	I0617 10:59:52.619708  130544 main.go:141] libmachine: (ha-064080) DBG | <network>
	I0617 10:59:52.619716  130544 main.go:141] libmachine: (ha-064080) DBG |   <name>mk-ha-064080</name>
	I0617 10:59:52.619725  130544 main.go:141] libmachine: (ha-064080) DBG |   <dns enable='no'/>
	I0617 10:59:52.619738  130544 main.go:141] libmachine: (ha-064080) DBG |   
	I0617 10:59:52.619753  130544 main.go:141] libmachine: (ha-064080) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0617 10:59:52.619764  130544 main.go:141] libmachine: (ha-064080) DBG |     <dhcp>
	I0617 10:59:52.619776  130544 main.go:141] libmachine: (ha-064080) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0617 10:59:52.619785  130544 main.go:141] libmachine: (ha-064080) DBG |     </dhcp>
	I0617 10:59:52.619802  130544 main.go:141] libmachine: (ha-064080) DBG |   </ip>
	I0617 10:59:52.619810  130544 main.go:141] libmachine: (ha-064080) DBG |   
	I0617 10:59:52.619823  130544 main.go:141] libmachine: (ha-064080) DBG | </network>
	I0617 10:59:52.619832  130544 main.go:141] libmachine: (ha-064080) DBG | 
	I0617 10:59:52.624606  130544 main.go:141] libmachine: (ha-064080) DBG | trying to create private KVM network mk-ha-064080 192.168.39.0/24...
	I0617 10:59:52.686986  130544 main.go:141] libmachine: (ha-064080) DBG | private KVM network mk-ha-064080 192.168.39.0/24 created
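The kvm2 driver has just defined the private libvirt network for this profile. Assuming virsh is available on the host (it is not shown in the log), the generated definition can be read back directly; the name mk-ha-064080 and the 192.168.39.0/24 range are the ones printed above:

    # list libvirt networks; mk-ha-064080 should show up as active
    virsh net-list --all

    # dump the XML the driver generated (DNS disabled, DHCP range 192.168.39.2-253)
    virsh net-dumpxml mk-ha-064080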
	I0617 10:59:52.687084  130544 main.go:141] libmachine: (ha-064080) Setting up store path in /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080 ...
	I0617 10:59:52.687126  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 10:59:52.686940  130567 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 10:59:52.687145  130544 main.go:141] libmachine: (ha-064080) Building disk image from file:///home/jenkins/minikube-integration/19084-112967/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso
	I0617 10:59:52.687173  130544 main.go:141] libmachine: (ha-064080) Downloading /home/jenkins/minikube-integration/19084-112967/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19084-112967/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso...
	I0617 10:59:52.941345  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 10:59:52.941201  130567 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa...
	I0617 10:59:53.118166  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 10:59:53.118039  130567 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/ha-064080.rawdisk...
	I0617 10:59:53.118196  130544 main.go:141] libmachine: (ha-064080) DBG | Writing magic tar header
	I0617 10:59:53.118209  130544 main.go:141] libmachine: (ha-064080) DBG | Writing SSH key tar header
	I0617 10:59:53.118217  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 10:59:53.118171  130567 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080 ...
	I0617 10:59:53.118297  130544 main.go:141] libmachine: (ha-064080) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080
	I0617 10:59:53.118335  130544 main.go:141] libmachine: (ha-064080) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080 (perms=drwx------)
	I0617 10:59:53.118347  130544 main.go:141] libmachine: (ha-064080) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube/machines
	I0617 10:59:53.118356  130544 main.go:141] libmachine: (ha-064080) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 10:59:53.118381  130544 main.go:141] libmachine: (ha-064080) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967
	I0617 10:59:53.118392  130544 main.go:141] libmachine: (ha-064080) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube/machines (perms=drwxr-xr-x)
	I0617 10:59:53.118405  130544 main.go:141] libmachine: (ha-064080) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube (perms=drwxr-xr-x)
	I0617 10:59:53.118416  130544 main.go:141] libmachine: (ha-064080) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0617 10:59:53.118427  130544 main.go:141] libmachine: (ha-064080) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967 (perms=drwxrwxr-x)
	I0617 10:59:53.118441  130544 main.go:141] libmachine: (ha-064080) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0617 10:59:53.118450  130544 main.go:141] libmachine: (ha-064080) DBG | Checking permissions on dir: /home/jenkins
	I0617 10:59:53.118455  130544 main.go:141] libmachine: (ha-064080) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0617 10:59:53.118465  130544 main.go:141] libmachine: (ha-064080) Creating domain...
	I0617 10:59:53.118474  130544 main.go:141] libmachine: (ha-064080) DBG | Checking permissions on dir: /home
	I0617 10:59:53.118497  130544 main.go:141] libmachine: (ha-064080) DBG | Skipping /home - not owner
	I0617 10:59:53.119533  130544 main.go:141] libmachine: (ha-064080) define libvirt domain using xml: 
	I0617 10:59:53.119557  130544 main.go:141] libmachine: (ha-064080) <domain type='kvm'>
	I0617 10:59:53.119564  130544 main.go:141] libmachine: (ha-064080)   <name>ha-064080</name>
	I0617 10:59:53.119569  130544 main.go:141] libmachine: (ha-064080)   <memory unit='MiB'>2200</memory>
	I0617 10:59:53.119574  130544 main.go:141] libmachine: (ha-064080)   <vcpu>2</vcpu>
	I0617 10:59:53.119584  130544 main.go:141] libmachine: (ha-064080)   <features>
	I0617 10:59:53.119614  130544 main.go:141] libmachine: (ha-064080)     <acpi/>
	I0617 10:59:53.119638  130544 main.go:141] libmachine: (ha-064080)     <apic/>
	I0617 10:59:53.119646  130544 main.go:141] libmachine: (ha-064080)     <pae/>
	I0617 10:59:53.119658  130544 main.go:141] libmachine: (ha-064080)     
	I0617 10:59:53.119664  130544 main.go:141] libmachine: (ha-064080)   </features>
	I0617 10:59:53.119673  130544 main.go:141] libmachine: (ha-064080)   <cpu mode='host-passthrough'>
	I0617 10:59:53.119680  130544 main.go:141] libmachine: (ha-064080)   
	I0617 10:59:53.119691  130544 main.go:141] libmachine: (ha-064080)   </cpu>
	I0617 10:59:53.119699  130544 main.go:141] libmachine: (ha-064080)   <os>
	I0617 10:59:53.119711  130544 main.go:141] libmachine: (ha-064080)     <type>hvm</type>
	I0617 10:59:53.119733  130544 main.go:141] libmachine: (ha-064080)     <boot dev='cdrom'/>
	I0617 10:59:53.119750  130544 main.go:141] libmachine: (ha-064080)     <boot dev='hd'/>
	I0617 10:59:53.119756  130544 main.go:141] libmachine: (ha-064080)     <bootmenu enable='no'/>
	I0617 10:59:53.119760  130544 main.go:141] libmachine: (ha-064080)   </os>
	I0617 10:59:53.119766  130544 main.go:141] libmachine: (ha-064080)   <devices>
	I0617 10:59:53.119772  130544 main.go:141] libmachine: (ha-064080)     <disk type='file' device='cdrom'>
	I0617 10:59:53.119780  130544 main.go:141] libmachine: (ha-064080)       <source file='/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/boot2docker.iso'/>
	I0617 10:59:53.119787  130544 main.go:141] libmachine: (ha-064080)       <target dev='hdc' bus='scsi'/>
	I0617 10:59:53.119793  130544 main.go:141] libmachine: (ha-064080)       <readonly/>
	I0617 10:59:53.119797  130544 main.go:141] libmachine: (ha-064080)     </disk>
	I0617 10:59:53.119803  130544 main.go:141] libmachine: (ha-064080)     <disk type='file' device='disk'>
	I0617 10:59:53.119809  130544 main.go:141] libmachine: (ha-064080)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0617 10:59:53.119820  130544 main.go:141] libmachine: (ha-064080)       <source file='/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/ha-064080.rawdisk'/>
	I0617 10:59:53.119825  130544 main.go:141] libmachine: (ha-064080)       <target dev='hda' bus='virtio'/>
	I0617 10:59:53.119839  130544 main.go:141] libmachine: (ha-064080)     </disk>
	I0617 10:59:53.119846  130544 main.go:141] libmachine: (ha-064080)     <interface type='network'>
	I0617 10:59:53.119852  130544 main.go:141] libmachine: (ha-064080)       <source network='mk-ha-064080'/>
	I0617 10:59:53.119862  130544 main.go:141] libmachine: (ha-064080)       <model type='virtio'/>
	I0617 10:59:53.119888  130544 main.go:141] libmachine: (ha-064080)     </interface>
	I0617 10:59:53.119907  130544 main.go:141] libmachine: (ha-064080)     <interface type='network'>
	I0617 10:59:53.119926  130544 main.go:141] libmachine: (ha-064080)       <source network='default'/>
	I0617 10:59:53.119937  130544 main.go:141] libmachine: (ha-064080)       <model type='virtio'/>
	I0617 10:59:53.119948  130544 main.go:141] libmachine: (ha-064080)     </interface>
	I0617 10:59:53.119955  130544 main.go:141] libmachine: (ha-064080)     <serial type='pty'>
	I0617 10:59:53.119966  130544 main.go:141] libmachine: (ha-064080)       <target port='0'/>
	I0617 10:59:53.119974  130544 main.go:141] libmachine: (ha-064080)     </serial>
	I0617 10:59:53.119981  130544 main.go:141] libmachine: (ha-064080)     <console type='pty'>
	I0617 10:59:53.119995  130544 main.go:141] libmachine: (ha-064080)       <target type='serial' port='0'/>
	I0617 10:59:53.120007  130544 main.go:141] libmachine: (ha-064080)     </console>
	I0617 10:59:53.120014  130544 main.go:141] libmachine: (ha-064080)     <rng model='virtio'>
	I0617 10:59:53.120027  130544 main.go:141] libmachine: (ha-064080)       <backend model='random'>/dev/random</backend>
	I0617 10:59:53.120034  130544 main.go:141] libmachine: (ha-064080)     </rng>
	I0617 10:59:53.120043  130544 main.go:141] libmachine: (ha-064080)     
	I0617 10:59:53.120052  130544 main.go:141] libmachine: (ha-064080)     
	I0617 10:59:53.120060  130544 main.go:141] libmachine: (ha-064080)   </devices>
	I0617 10:59:53.120076  130544 main.go:141] libmachine: (ha-064080) </domain>
	I0617 10:59:53.120089  130544 main.go:141] libmachine: (ha-064080) 
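The domain XML above describes the VM the driver is about to create: 2 vCPUs, 2200 MiB of memory, the boot2docker ISO attached as a CD-ROM, the raw disk image, and two virtio NICs (one on mk-ha-064080, one on the default network). Once defined, the stored definition can be checked with virsh; a sketch, again assuming virsh on the host:

    # show the definition libvirt stored for the new VM
    virsh dumpxml ha-064080

    # confirm the domain exists and, later, that it is running
    virsh list --all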
	I0617 10:59:53.124410  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:78:87:13 in network default
	I0617 10:59:53.125015  130544 main.go:141] libmachine: (ha-064080) Ensuring networks are active...
	I0617 10:59:53.125038  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 10:59:53.125653  130544 main.go:141] libmachine: (ha-064080) Ensuring network default is active
	I0617 10:59:53.125926  130544 main.go:141] libmachine: (ha-064080) Ensuring network mk-ha-064080 is active
	I0617 10:59:53.126492  130544 main.go:141] libmachine: (ha-064080) Getting domain xml...
	I0617 10:59:53.127260  130544 main.go:141] libmachine: (ha-064080) Creating domain...
	I0617 10:59:54.293658  130544 main.go:141] libmachine: (ha-064080) Waiting to get IP...
	I0617 10:59:54.294561  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 10:59:54.294974  130544 main.go:141] libmachine: (ha-064080) DBG | unable to find current IP address of domain ha-064080 in network mk-ha-064080
	I0617 10:59:54.294999  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 10:59:54.294944  130567 retry.go:31] will retry after 218.859983ms: waiting for machine to come up
	I0617 10:59:54.515338  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 10:59:54.515862  130544 main.go:141] libmachine: (ha-064080) DBG | unable to find current IP address of domain ha-064080 in network mk-ha-064080
	I0617 10:59:54.515889  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 10:59:54.515829  130567 retry.go:31] will retry after 357.850554ms: waiting for machine to come up
	I0617 10:59:54.875426  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 10:59:54.875890  130544 main.go:141] libmachine: (ha-064080) DBG | unable to find current IP address of domain ha-064080 in network mk-ha-064080
	I0617 10:59:54.875913  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 10:59:54.875847  130567 retry.go:31] will retry after 313.568669ms: waiting for machine to come up
	I0617 10:59:55.191438  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 10:59:55.191919  130544 main.go:141] libmachine: (ha-064080) DBG | unable to find current IP address of domain ha-064080 in network mk-ha-064080
	I0617 10:59:55.191943  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 10:59:55.191873  130567 retry.go:31] will retry after 580.32994ms: waiting for machine to come up
	I0617 10:59:55.773570  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 10:59:55.774015  130544 main.go:141] libmachine: (ha-064080) DBG | unable to find current IP address of domain ha-064080 in network mk-ha-064080
	I0617 10:59:55.774040  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 10:59:55.773980  130567 retry.go:31] will retry after 642.58108ms: waiting for machine to come up
	I0617 10:59:56.417740  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 10:59:56.418140  130544 main.go:141] libmachine: (ha-064080) DBG | unable to find current IP address of domain ha-064080 in network mk-ha-064080
	I0617 10:59:56.418161  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 10:59:56.418094  130567 retry.go:31] will retry after 951.787863ms: waiting for machine to come up
	I0617 10:59:57.371206  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 10:59:57.371638  130544 main.go:141] libmachine: (ha-064080) DBG | unable to find current IP address of domain ha-064080 in network mk-ha-064080
	I0617 10:59:57.371682  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 10:59:57.371577  130567 retry.go:31] will retry after 1.042883837s: waiting for machine to come up
	I0617 10:59:58.416292  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 10:59:58.416658  130544 main.go:141] libmachine: (ha-064080) DBG | unable to find current IP address of domain ha-064080 in network mk-ha-064080
	I0617 10:59:58.416682  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 10:59:58.416625  130567 retry.go:31] will retry after 1.181180972s: waiting for machine to come up
	I0617 10:59:59.599938  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 10:59:59.600398  130544 main.go:141] libmachine: (ha-064080) DBG | unable to find current IP address of domain ha-064080 in network mk-ha-064080
	I0617 10:59:59.600428  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 10:59:59.600344  130567 retry.go:31] will retry after 1.538902549s: waiting for machine to come up
	I0617 11:00:01.141116  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:01.141638  130544 main.go:141] libmachine: (ha-064080) DBG | unable to find current IP address of domain ha-064080 in network mk-ha-064080
	I0617 11:00:01.141659  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 11:00:01.141589  130567 retry.go:31] will retry after 2.04090153s: waiting for machine to come up
	I0617 11:00:03.183660  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:03.184074  130544 main.go:141] libmachine: (ha-064080) DBG | unable to find current IP address of domain ha-064080 in network mk-ha-064080
	I0617 11:00:03.184096  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 11:00:03.184026  130567 retry.go:31] will retry after 2.563650396s: waiting for machine to come up
	I0617 11:00:05.748935  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:05.749403  130544 main.go:141] libmachine: (ha-064080) DBG | unable to find current IP address of domain ha-064080 in network mk-ha-064080
	I0617 11:00:05.749448  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 11:00:05.749353  130567 retry.go:31] will retry after 2.769265978s: waiting for machine to come up
	I0617 11:00:08.519638  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:08.520051  130544 main.go:141] libmachine: (ha-064080) DBG | unable to find current IP address of domain ha-064080 in network mk-ha-064080
	I0617 11:00:08.520089  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 11:00:08.520014  130567 retry.go:31] will retry after 4.435386884s: waiting for machine to come up
	I0617 11:00:12.957378  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:12.957863  130544 main.go:141] libmachine: (ha-064080) Found IP for machine: 192.168.39.134
	I0617 11:00:12.957887  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has current primary IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
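The "Waiting to get IP" retries above are the driver polling libvirt until a DHCP lease for MAC 52:54:00:bd:48:a9 appears on mk-ha-064080; 192.168.39.134 is the address it eventually finds. The same lookup can be done manually (assuming virsh on the host):

    # leases handed out on the private network, i.e. the MAC-to-IP mapping being polled
    virsh net-dhcp-leases mk-ha-064080

    # the same information reported per-domain from the lease database
    virsh domifaddr ha-064080 --source lease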
	I0617 11:00:12.957912  130544 main.go:141] libmachine: (ha-064080) Reserving static IP address...
	I0617 11:00:12.958238  130544 main.go:141] libmachine: (ha-064080) DBG | unable to find host DHCP lease matching {name: "ha-064080", mac: "52:54:00:bd:48:a9", ip: "192.168.39.134"} in network mk-ha-064080
	I0617 11:00:13.031149  130544 main.go:141] libmachine: (ha-064080) DBG | Getting to WaitForSSH function...
	I0617 11:00:13.031179  130544 main.go:141] libmachine: (ha-064080) Reserved static IP address: 192.168.39.134
	I0617 11:00:13.031191  130544 main.go:141] libmachine: (ha-064080) Waiting for SSH to be available...
	I0617 11:00:13.033670  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.034026  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:13.034054  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.034177  130544 main.go:141] libmachine: (ha-064080) DBG | Using SSH client type: external
	I0617 11:00:13.034206  130544 main.go:141] libmachine: (ha-064080) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa (-rw-------)
	I0617 11:00:13.034270  130544 main.go:141] libmachine: (ha-064080) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.134 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 11:00:13.034297  130544 main.go:141] libmachine: (ha-064080) DBG | About to run SSH command:
	I0617 11:00:13.034311  130544 main.go:141] libmachine: (ha-064080) DBG | exit 0
	I0617 11:00:13.155280  130544 main.go:141] libmachine: (ha-064080) DBG | SSH cmd err, output: <nil>: 
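The WaitForSSH step above simply runs "exit 0" over the freshly generated key. A manual equivalent uses the key path, user and address printed in the log, plus a couple of the same host-key options:

    # returns status 0 as soon as sshd inside the guest accepts the key
    ssh -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa \
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        docker@192.168.39.134 'exit 0'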
	I0617 11:00:13.155555  130544 main.go:141] libmachine: (ha-064080) KVM machine creation complete!
	I0617 11:00:13.155862  130544 main.go:141] libmachine: (ha-064080) Calling .GetConfigRaw
	I0617 11:00:13.156474  130544 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:00:13.156701  130544 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:00:13.156880  130544 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0617 11:00:13.156893  130544 main.go:141] libmachine: (ha-064080) Calling .GetState
	I0617 11:00:13.158051  130544 main.go:141] libmachine: Detecting operating system of created instance...
	I0617 11:00:13.158065  130544 main.go:141] libmachine: Waiting for SSH to be available...
	I0617 11:00:13.158076  130544 main.go:141] libmachine: Getting to WaitForSSH function...
	I0617 11:00:13.158085  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:00:13.160281  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.160597  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:13.160619  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.160798  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:00:13.160981  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:13.161151  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:13.161309  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:00:13.161453  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:00:13.161673  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0617 11:00:13.161688  130544 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0617 11:00:13.258476  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 11:00:13.258500  130544 main.go:141] libmachine: Detecting the provisioner...
	I0617 11:00:13.258512  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:00:13.261174  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.261524  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:13.261548  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.261726  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:00:13.261928  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:13.262094  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:13.262253  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:00:13.262477  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:00:13.262664  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0617 11:00:13.262678  130544 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0617 11:00:13.359893  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0617 11:00:13.359971  130544 main.go:141] libmachine: found compatible host: buildroot
	I0617 11:00:13.359984  130544 main.go:141] libmachine: Provisioning with buildroot...
	I0617 11:00:13.359995  130544 main.go:141] libmachine: (ha-064080) Calling .GetMachineName
	I0617 11:00:13.360286  130544 buildroot.go:166] provisioning hostname "ha-064080"
	I0617 11:00:13.360311  130544 main.go:141] libmachine: (ha-064080) Calling .GetMachineName
	I0617 11:00:13.360509  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:00:13.363230  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.363608  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:13.363634  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.363765  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:00:13.363963  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:13.364125  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:13.364285  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:00:13.364430  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:00:13.364623  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0617 11:00:13.364642  130544 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-064080 && echo "ha-064080" | sudo tee /etc/hostname
	I0617 11:00:13.473612  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-064080
	
	I0617 11:00:13.473642  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:00:13.476514  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.476832  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:13.476860  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.476984  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:00:13.477179  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:13.477339  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:13.477476  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:00:13.477665  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:00:13.477894  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0617 11:00:13.477922  130544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-064080' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-064080/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-064080' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 11:00:13.583510  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
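The provisioning script above sets the guest hostname and rewrites the 127.0.1.1 entry in /etc/hosts. A quick check that both took effect, using the ssh-with-command form minikube supports (this command is an illustration, not something the test itself runs):

    minikube -p ha-064080 ssh "hostname && grep ha-064080 /etc/hosts"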
	I0617 11:00:13.583542  130544 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 11:00:13.583567  130544 buildroot.go:174] setting up certificates
	I0617 11:00:13.583582  130544 provision.go:84] configureAuth start
	I0617 11:00:13.583594  130544 main.go:141] libmachine: (ha-064080) Calling .GetMachineName
	I0617 11:00:13.583925  130544 main.go:141] libmachine: (ha-064080) Calling .GetIP
	I0617 11:00:13.586430  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.586719  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:13.586751  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.586923  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:00:13.589132  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.589469  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:13.589492  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.589588  130544 provision.go:143] copyHostCerts
	I0617 11:00:13.589618  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 11:00:13.589667  130544 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 11:00:13.589679  130544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 11:00:13.589754  130544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 11:00:13.589865  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 11:00:13.589901  130544 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 11:00:13.589909  130544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 11:00:13.589952  130544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 11:00:13.590013  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 11:00:13.590037  130544 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 11:00:13.590046  130544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 11:00:13.590079  130544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 11:00:13.590148  130544 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.ha-064080 san=[127.0.0.1 192.168.39.134 ha-064080 localhost minikube]
	I0617 11:00:13.791780  130544 provision.go:177] copyRemoteCerts
	I0617 11:00:13.791852  130544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 11:00:13.791882  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:00:13.794250  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.794696  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:13.794727  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.794936  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:00:13.795138  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:13.795286  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:00:13.795412  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:00:13.873208  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0617 11:00:13.873276  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 11:00:13.896893  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0617 11:00:13.896955  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0617 11:00:13.919537  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0617 11:00:13.919597  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0617 11:00:13.946128  130544 provision.go:87] duration metric: took 362.536623ms to configureAuth
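configureAuth above generated a server certificate with SANs for 127.0.0.1, 192.168.39.134, ha-064080, localhost and minikube, then copied it to /etc/docker on the guest. The SAN list can be confirmed against the host-side copy named in the log, assuming an openssl client is installed on the Jenkins host:

    openssl x509 -in /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem \
        -noout -text | grep -A1 'Subject Alternative Name'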
	I0617 11:00:13.946155  130544 buildroot.go:189] setting minikube options for container-runtime
	I0617 11:00:13.946339  130544 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:00:13.946431  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:00:13.949013  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.949339  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:13.949375  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.949562  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:00:13.949769  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:13.949944  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:13.950096  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:00:13.950258  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:00:13.950456  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0617 11:00:13.950478  130544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 11:00:14.198305  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
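The step above drops an --insecure-registry override into /etc/sysconfig/crio.minikube and restarts CRI-O, and the echoed output confirms the file contents. Whether the service came back up can be checked on the node; a sketch using the same ssh-with-command form as elsewhere in this report:

    minikube -p ha-064080 ssh "sudo cat /etc/sysconfig/crio.minikube && sudo systemctl is-active crio"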
	
	I0617 11:00:14.198334  130544 main.go:141] libmachine: Checking connection to Docker...
	I0617 11:00:14.198346  130544 main.go:141] libmachine: (ha-064080) Calling .GetURL
	I0617 11:00:14.199776  130544 main.go:141] libmachine: (ha-064080) DBG | Using libvirt version 6000000
	I0617 11:00:14.202002  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:14.202321  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:14.202350  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:14.202499  130544 main.go:141] libmachine: Docker is up and running!
	I0617 11:00:14.202520  130544 main.go:141] libmachine: Reticulating splines...
	I0617 11:00:14.202528  130544 client.go:171] duration metric: took 21.585680233s to LocalClient.Create
	I0617 11:00:14.202554  130544 start.go:167] duration metric: took 21.58574405s to libmachine.API.Create "ha-064080"
	I0617 11:00:14.202568  130544 start.go:293] postStartSetup for "ha-064080" (driver="kvm2")
	I0617 11:00:14.202584  130544 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 11:00:14.202605  130544 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:00:14.202851  130544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 11:00:14.202880  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:00:14.204727  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:14.204994  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:14.205030  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:14.205109  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:00:14.205278  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:14.205465  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:00:14.205627  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:00:14.285511  130544 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 11:00:14.289676  130544 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 11:00:14.289694  130544 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 11:00:14.289743  130544 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 11:00:14.289821  130544 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 11:00:14.289839  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> /etc/ssl/certs/1201742.pem
	I0617 11:00:14.289934  130544 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 11:00:14.299105  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 11:00:14.323422  130544 start.go:296] duration metric: took 120.839325ms for postStartSetup
	I0617 11:00:14.323506  130544 main.go:141] libmachine: (ha-064080) Calling .GetConfigRaw
	I0617 11:00:14.324016  130544 main.go:141] libmachine: (ha-064080) Calling .GetIP
	I0617 11:00:14.326609  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:14.326944  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:14.326977  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:14.327221  130544 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/config.json ...
	I0617 11:00:14.327420  130544 start.go:128] duration metric: took 21.728142334s to createHost
	I0617 11:00:14.327478  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:00:14.329348  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:14.329643  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:14.329668  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:14.329772  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:00:14.329953  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:14.330100  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:14.330220  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:00:14.330338  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:00:14.330519  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0617 11:00:14.330530  130544 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0617 11:00:14.427834  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718622014.396175350
	
	I0617 11:00:14.427857  130544 fix.go:216] guest clock: 1718622014.396175350
	I0617 11:00:14.427866  130544 fix.go:229] Guest: 2024-06-17 11:00:14.39617535 +0000 UTC Remote: 2024-06-17 11:00:14.327433545 +0000 UTC m=+21.834352548 (delta=68.741805ms)
	I0617 11:00:14.427907  130544 fix.go:200] guest clock delta is within tolerance: 68.741805ms
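	The tolerance check above compares the guest clock (read over SSH with date +%s.%N) against the local clock. A minimal shell sketch of the same comparison, using the SSH key and guest IP from this run and assuming bc is available on the host:
	  guest=$(ssh -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa docker@192.168.39.134 'date +%s.%N')
	  host=$(date +%s.%N)
	  echo "clock delta: $(echo "$host - $guest" | bc) s"   # compared against the skew tolerance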
	I0617 11:00:14.427914  130544 start.go:83] releasing machines lock for "ha-064080", held for 21.828711146s
	I0617 11:00:14.427937  130544 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:00:14.428182  130544 main.go:141] libmachine: (ha-064080) Calling .GetIP
	I0617 11:00:14.430657  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:14.431015  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:14.431041  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:14.431234  130544 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:00:14.431678  130544 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:00:14.431853  130544 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:00:14.431931  130544 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 11:00:14.431982  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:00:14.432038  130544 ssh_runner.go:195] Run: cat /version.json
	I0617 11:00:14.432054  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:00:14.434678  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:14.434705  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:14.435070  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:14.435090  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:14.435107  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:14.435179  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:14.435287  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:00:14.435428  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:00:14.435501  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:14.435622  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:14.435665  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:00:14.435707  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:00:14.435792  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:00:14.435852  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:00:14.536390  130544 ssh_runner.go:195] Run: systemctl --version
	I0617 11:00:14.542037  130544 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 11:00:14.701371  130544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 11:00:14.707793  130544 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 11:00:14.707860  130544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 11:00:14.724194  130544 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 11:00:14.724218  130544 start.go:494] detecting cgroup driver to use...
	I0617 11:00:14.724283  130544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 11:00:14.740006  130544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 11:00:14.753023  130544 docker.go:217] disabling cri-docker service (if available) ...
	I0617 11:00:14.753081  130544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 11:00:14.766015  130544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 11:00:14.779269  130544 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 11:00:14.889108  130544 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 11:00:15.049159  130544 docker.go:233] disabling docker service ...
	I0617 11:00:15.049238  130544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 11:00:15.063310  130544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 11:00:15.076558  130544 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 11:00:15.201162  130544 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 11:00:15.327111  130544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 11:00:15.340358  130544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 11:00:15.357998  130544 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0617 11:00:15.358058  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:00:15.367989  130544 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 11:00:15.368042  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:00:15.378663  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:00:15.388920  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:00:15.399252  130544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 11:00:15.409785  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:00:15.420214  130544 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:00:15.436592  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
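	The crictl endpoint and the CRI-O drop-in configured above can be inspected directly on the guest once CRI-O has been restarted (which happens a few lines below). A minimal sketch:
	  cat /etc/crictl.yaml
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	The grep should show pause_image = "registry.k8s.io/pause:3.9", cgroup_manager = "cgroupfs", conmon_cgroup = "pod" and a default_sysctls entry enabling net.ipv4.ip_unprivileged_port_start=0, matching the sed edits above.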
	I0617 11:00:15.446616  130544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 11:00:15.455878  130544 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 11:00:15.455928  130544 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 11:00:15.469051  130544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 11:00:15.478622  130544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:00:15.604029  130544 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0617 11:00:15.733069  130544 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 11:00:15.733146  130544 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 11:00:15.737701  130544 start.go:562] Will wait 60s for crictl version
	I0617 11:00:15.737744  130544 ssh_runner.go:195] Run: which crictl
	I0617 11:00:15.741699  130544 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 11:00:15.786509  130544 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 11:00:15.786584  130544 ssh_runner.go:195] Run: crio --version
	I0617 11:00:15.815294  130544 ssh_runner.go:195] Run: crio --version
	I0617 11:00:15.846062  130544 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0617 11:00:15.847285  130544 main.go:141] libmachine: (ha-064080) Calling .GetIP
	I0617 11:00:15.850123  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:15.850409  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:15.850435  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:15.850703  130544 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0617 11:00:15.854686  130544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
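	The guarded /etc/hosts rewrite above drops any stale host.minikube.internal line before appending the current gateway IP, so it is safe to re-run. A quick way to confirm the entry on the guest:
	  getent hosts host.minikube.internal   # should resolve to 192.168.39.1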
	I0617 11:00:15.867493  130544 kubeadm.go:877] updating cluster {Name:ha-064080 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-064080 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 11:00:15.867611  130544 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 11:00:15.867674  130544 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 11:00:15.900267  130544 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0617 11:00:15.900336  130544 ssh_runner.go:195] Run: which lz4
	I0617 11:00:15.904085  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0617 11:00:15.904172  130544 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0617 11:00:15.908327  130544 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0617 11:00:15.908351  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0617 11:00:17.280633  130544 crio.go:462] duration metric: took 1.376487827s to copy over tarball
	I0617 11:00:17.280705  130544 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0617 11:00:19.342719  130544 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.061986213s)
	I0617 11:00:19.342745  130544 crio.go:469] duration metric: took 2.062082096s to extract the tarball
	I0617 11:00:19.342754  130544 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0617 11:00:19.380180  130544 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 11:00:19.427157  130544 crio.go:514] all images are preloaded for cri-o runtime.
	I0617 11:00:19.427183  130544 cache_images.go:84] Images are preloaded, skipping loading
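	With the preload extracted, the same image check can be repeated by hand; the image name below is the one the earlier preload probe looked for:
	  sudo crictl images | grep kube-apiserver   # should list registry.k8s.io/kube-apiserver:v1.30.1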
	I0617 11:00:19.427191  130544 kubeadm.go:928] updating node { 192.168.39.134 8443 v1.30.1 crio true true} ...
	I0617 11:00:19.427309  130544 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-064080 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-064080 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 11:00:19.427377  130544 ssh_runner.go:195] Run: crio config
	I0617 11:00:19.474982  130544 cni.go:84] Creating CNI manager for ""
	I0617 11:00:19.475005  130544 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0617 11:00:19.475013  130544 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 11:00:19.475037  130544 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.134 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-064080 NodeName:ha-064080 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.134"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0617 11:00:19.475169  130544 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.134
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-064080"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.134
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.134"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0617 11:00:19.475194  130544 kube-vip.go:115] generating kube-vip config ...
	I0617 11:00:19.475236  130544 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0617 11:00:19.492093  130544 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0617 11:00:19.492199  130544 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
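	Once the kube-vip static pod above is running, the control-plane VIP from its env (192.168.39.254 on eth0) should be bound on the primary node; a minimal check:
	  ip addr show eth0 | grep 192.168.39.254
	  sudo crictl ps --name kube-vip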
	I0617 11:00:19.492255  130544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0617 11:00:19.502761  130544 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 11:00:19.502844  130544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0617 11:00:19.512878  130544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0617 11:00:19.529967  130544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 11:00:19.547192  130544 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0617 11:00:19.564583  130544 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0617 11:00:19.581601  130544 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0617 11:00:19.585576  130544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 11:00:19.598182  130544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:00:19.718357  130544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 11:00:19.736665  130544 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080 for IP: 192.168.39.134
	I0617 11:00:19.736690  130544 certs.go:194] generating shared ca certs ...
	I0617 11:00:19.736705  130544 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:00:19.736861  130544 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 11:00:19.736897  130544 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 11:00:19.736904  130544 certs.go:256] generating profile certs ...
	I0617 11:00:19.736966  130544 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/client.key
	I0617 11:00:19.736980  130544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/client.crt with IP's: []
	I0617 11:00:19.798369  130544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/client.crt ...
	I0617 11:00:19.798400  130544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/client.crt: {Name:mk750201a7aa370c01c81c107eedf9ca2c411f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:00:19.798599  130544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/client.key ...
	I0617 11:00:19.798616  130544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/client.key: {Name:mk0346023acf5db2af27e34311b2764dba2a9d75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:00:19.798704  130544 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key.0218256d
	I0617 11:00:19.798723  130544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt.0218256d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.134 192.168.39.254]
	I0617 11:00:19.945551  130544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt.0218256d ...
	I0617 11:00:19.945585  130544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt.0218256d: {Name:mk42b1a801de6f8d9ad4890f002b4c0a7613c512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:00:19.945744  130544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key.0218256d ...
	I0617 11:00:19.945757  130544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key.0218256d: {Name:mk3721bc4c71e5ca11dd9b219e77fb6f8b99982c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:00:19.945830  130544 certs.go:381] copying /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt.0218256d -> /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt
	I0617 11:00:19.945922  130544 certs.go:385] copying /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key.0218256d -> /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key
	I0617 11:00:19.945979  130544 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.key
	I0617 11:00:19.945998  130544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.crt with IP's: []
	I0617 11:00:20.081905  130544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.crt ...
	I0617 11:00:20.081937  130544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.crt: {Name:mk9b37d6c9d0db0266803d48f0885ded54b27bd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:00:20.082105  130544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.key ...
	I0617 11:00:20.082116  130544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.key: {Name:mkaf3993839ea939fd426b9486305c8a43e19b83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:00:20.082192  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0617 11:00:20.082208  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0617 11:00:20.082219  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0617 11:00:20.082231  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0617 11:00:20.082244  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0617 11:00:20.082253  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0617 11:00:20.082265  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0617 11:00:20.082276  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0617 11:00:20.082323  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 11:00:20.082360  130544 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 11:00:20.082373  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 11:00:20.082399  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 11:00:20.082421  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 11:00:20.082443  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 11:00:20.082478  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 11:00:20.082503  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> /usr/share/ca-certificates/1201742.pem
	I0617 11:00:20.082516  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:00:20.082528  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem -> /usr/share/ca-certificates/120174.pem
	I0617 11:00:20.083080  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 11:00:20.109804  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 11:00:20.134161  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 11:00:20.158133  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 11:00:20.182146  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0617 11:00:20.205947  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 11:00:20.230427  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 11:00:20.253949  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0617 11:00:20.278117  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 11:00:20.302374  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 11:00:20.325844  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 11:00:20.349600  130544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
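	The apiserver certificate copied above was generated for the IPs 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.134 and the HA VIP 192.168.39.254; the SANs can be confirmed on the guest with:
	  sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'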
	I0617 11:00:20.366226  130544 ssh_runner.go:195] Run: openssl version
	I0617 11:00:20.371980  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 11:00:20.382844  130544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 11:00:20.387478  130544 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 11:00:20.387532  130544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 11:00:20.393404  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 11:00:20.404325  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 11:00:20.415333  130544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:00:20.420040  130544 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:00:20.420098  130544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:00:20.425813  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 11:00:20.436916  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 11:00:20.447945  130544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 11:00:20.452480  130544 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 11:00:20.452534  130544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 11:00:20.458293  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
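	The 3ec20f2e.0, b5213941.0 and 51391683.0 symlink names above follow OpenSSL's subject-hash convention: the link name is the output of openssl x509 -hash for the certificate, plus a .0 suffix. A sketch of the same step done by hand for the minikube CA:
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 for this CA
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"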
	I0617 11:00:20.469064  130544 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 11:00:20.473497  130544 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0617 11:00:20.473553  130544 kubeadm.go:391] StartCluster: {Name:ha-064080 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-064080 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:00:20.473627  130544 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 11:00:20.473689  130544 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 11:00:20.516425  130544 cri.go:89] found id: ""
	I0617 11:00:20.516492  130544 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0617 11:00:20.530239  130544 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 11:00:20.549972  130544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 11:00:20.561360  130544 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 11:00:20.561377  130544 kubeadm.go:156] found existing configuration files:
	
	I0617 11:00:20.561424  130544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 11:00:20.571038  130544 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 11:00:20.571105  130544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 11:00:20.580915  130544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 11:00:20.590177  130544 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 11:00:20.590256  130544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 11:00:20.603031  130544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 11:00:20.612822  130544 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 11:00:20.612883  130544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 11:00:20.622808  130544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 11:00:20.632378  130544 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 11:00:20.632452  130544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 11:00:20.642439  130544 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0617 11:00:20.877231  130544 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0617 11:00:32.558734  130544 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0617 11:00:32.558836  130544 kubeadm.go:309] [preflight] Running pre-flight checks
	I0617 11:00:32.558965  130544 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0617 11:00:32.559112  130544 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0617 11:00:32.559261  130544 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0617 11:00:32.559359  130544 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0617 11:00:32.560814  130544 out.go:204]   - Generating certificates and keys ...
	I0617 11:00:32.560880  130544 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0617 11:00:32.560938  130544 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0617 11:00:32.560997  130544 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0617 11:00:32.561048  130544 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0617 11:00:32.561154  130544 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0617 11:00:32.561208  130544 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0617 11:00:32.561265  130544 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0617 11:00:32.561379  130544 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-064080 localhost] and IPs [192.168.39.134 127.0.0.1 ::1]
	I0617 11:00:32.561432  130544 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0617 11:00:32.561587  130544 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-064080 localhost] and IPs [192.168.39.134 127.0.0.1 ::1]
	I0617 11:00:32.561680  130544 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0617 11:00:32.561779  130544 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0617 11:00:32.561833  130544 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0617 11:00:32.561881  130544 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0617 11:00:32.561928  130544 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0617 11:00:32.561976  130544 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0617 11:00:32.562021  130544 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0617 11:00:32.562081  130544 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0617 11:00:32.562157  130544 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0617 11:00:32.562229  130544 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0617 11:00:32.562285  130544 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0617 11:00:32.563439  130544 out.go:204]   - Booting up control plane ...
	I0617 11:00:32.563569  130544 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0617 11:00:32.563676  130544 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0617 11:00:32.563778  130544 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0617 11:00:32.563901  130544 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0617 11:00:32.564034  130544 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0617 11:00:32.564097  130544 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0617 11:00:32.564252  130544 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0617 11:00:32.564347  130544 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0617 11:00:32.564430  130544 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.527316ms
	I0617 11:00:32.564528  130544 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0617 11:00:32.564619  130544 kubeadm.go:309] [api-check] The API server is healthy after 6.122928863s
	I0617 11:00:32.564755  130544 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0617 11:00:32.564911  130544 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0617 11:00:32.564993  130544 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0617 11:00:32.565173  130544 kubeadm.go:309] [mark-control-plane] Marking the node ha-064080 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0617 11:00:32.565252  130544 kubeadm.go:309] [bootstrap-token] Using token: wxs5l2.6ag2rr3bbqveig7f
	I0617 11:00:32.566527  130544 out.go:204]   - Configuring RBAC rules ...
	I0617 11:00:32.566637  130544 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0617 11:00:32.566731  130544 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0617 11:00:32.566881  130544 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0617 11:00:32.567052  130544 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0617 11:00:32.567181  130544 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0617 11:00:32.567288  130544 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0617 11:00:32.567433  130544 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0617 11:00:32.567511  130544 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0617 11:00:32.567588  130544 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0617 11:00:32.567603  130544 kubeadm.go:309] 
	I0617 11:00:32.567684  130544 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0617 11:00:32.567694  130544 kubeadm.go:309] 
	I0617 11:00:32.567827  130544 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0617 11:00:32.567844  130544 kubeadm.go:309] 
	I0617 11:00:32.567892  130544 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0617 11:00:32.567980  130544 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0617 11:00:32.568059  130544 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0617 11:00:32.568068  130544 kubeadm.go:309] 
	I0617 11:00:32.568142  130544 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0617 11:00:32.568152  130544 kubeadm.go:309] 
	I0617 11:00:32.568217  130544 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0617 11:00:32.568226  130544 kubeadm.go:309] 
	I0617 11:00:32.568309  130544 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0617 11:00:32.568406  130544 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0617 11:00:32.568495  130544 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0617 11:00:32.568502  130544 kubeadm.go:309] 
	I0617 11:00:32.568610  130544 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0617 11:00:32.568680  130544 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0617 11:00:32.568689  130544 kubeadm.go:309] 
	I0617 11:00:32.568761  130544 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token wxs5l2.6ag2rr3bbqveig7f \
	I0617 11:00:32.568857  130544 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a750c130b3df91ed6d57229f5a5d5a2ee0acd56a757f499599f368bc07dbf207 \
	I0617 11:00:32.568876  130544 kubeadm.go:309] 	--control-plane 
	I0617 11:00:32.568882  130544 kubeadm.go:309] 
	I0617 11:00:32.568949  130544 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0617 11:00:32.568955  130544 kubeadm.go:309] 
	I0617 11:00:32.569021  130544 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token wxs5l2.6ag2rr3bbqveig7f \
	I0617 11:00:32.569121  130544 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a750c130b3df91ed6d57229f5a5d5a2ee0acd56a757f499599f368bc07dbf207 
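	The bootstrap token printed above has a 24h ttl (set in the InitConfiguration); if it expires before the secondary control-plane nodes join, an equivalent join command can be regenerated on this node, for example:
	  sudo /var/lib/minikube/binaries/v1.30.1/kubeadm token create --print-join-command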
	I0617 11:00:32.569141  130544 cni.go:84] Creating CNI manager for ""
	I0617 11:00:32.569148  130544 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0617 11:00:32.570410  130544 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0617 11:00:32.571516  130544 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0617 11:00:32.577463  130544 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0617 11:00:32.577481  130544 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0617 11:00:32.597035  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
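	The CNI manifest applied above is the kindnet configuration minikube recommends for multinode clusters; assuming it creates a DaemonSet named kindnet in kube-system, its rollout can be checked with:
	  sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get ds kindnet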
	I0617 11:00:32.980805  130544 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0617 11:00:32.980891  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:32.980934  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-064080 minikube.k8s.io/updated_at=2024_06_17T11_00_32_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6 minikube.k8s.io/name=ha-064080 minikube.k8s.io/primary=true
	I0617 11:00:33.110457  130544 ops.go:34] apiserver oom_adj: -16
	I0617 11:00:33.125594  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:33.626142  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:34.125677  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:34.626603  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:35.126092  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:35.626262  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:36.126436  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:36.626174  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:37.125982  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:37.626388  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:38.125755  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:38.626674  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:39.126486  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:39.625651  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:40.126254  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:40.626057  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:41.126189  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:41.625956  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:42.126148  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:42.626217  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:43.125903  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:43.626135  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:44.126473  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:44.626010  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:44.729152  130544 kubeadm.go:1107] duration metric: took 11.748330999s to wait for elevateKubeSystemPrivileges
	W0617 11:00:44.729203  130544 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0617 11:00:44.729215  130544 kubeadm.go:393] duration metric: took 24.255665076s to StartCluster
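	(The run of identical "kubectl get sa default" calls above, 11:00:33 through 11:00:44, is the wait loop behind the elevateKubeSystemPrivileges timing reported at 11:00:44.729152: after creating the minikube-rbac clusterrolebinding, minikube keeps polling until the default ServiceAccount exists. A rough shell equivalent of that loop, offered only as an illustrative sketch — the 0.5s interval is an assumption, not taken from the minikube source:

	    until sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5   # retry until the controller manager has created the default ServiceAccount
	    done
	)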
	I0617 11:00:44.729238  130544 settings.go:142] acquiring lock: {Name:mkf6da6d5dcdf32cef469c2b75da17d11fa1e39e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:00:44.729318  130544 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 11:00:44.730242  130544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/kubeconfig: {Name:mkf81bd1831c0194f784e5c176b265c5061bea5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:00:44.730435  130544 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 11:00:44.730458  130544 start.go:240] waiting for startup goroutines ...
	I0617 11:00:44.730459  130544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0617 11:00:44.730473  130544 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0617 11:00:44.730532  130544 addons.go:69] Setting storage-provisioner=true in profile "ha-064080"
	I0617 11:00:44.730569  130544 addons.go:234] Setting addon storage-provisioner=true in "ha-064080"
	I0617 11:00:44.730577  130544 addons.go:69] Setting default-storageclass=true in profile "ha-064080"
	I0617 11:00:44.730603  130544 host.go:66] Checking if "ha-064080" exists ...
	I0617 11:00:44.730623  130544 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-064080"
	I0617 11:00:44.730666  130544 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:00:44.730968  130544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:00:44.730991  130544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:00:44.731010  130544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:00:44.731013  130544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:00:44.745761  130544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45613
	I0617 11:00:44.745763  130544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43521
	I0617 11:00:44.746265  130544 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:00:44.746323  130544 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:00:44.746903  130544 main.go:141] libmachine: Using API Version  1
	I0617 11:00:44.746934  130544 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:00:44.747036  130544 main.go:141] libmachine: Using API Version  1
	I0617 11:00:44.747065  130544 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:00:44.747282  130544 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:00:44.747490  130544 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:00:44.747658  130544 main.go:141] libmachine: (ha-064080) Calling .GetState
	I0617 11:00:44.747881  130544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:00:44.747933  130544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:00:44.749785  130544 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 11:00:44.750048  130544 kapi.go:59] client config for ha-064080: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/client.crt", KeyFile:"/home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/client.key", CAFile:"/home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfaf80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0617 11:00:44.750526  130544 cert_rotation.go:137] Starting client certificate rotation controller
	I0617 11:00:44.750725  130544 addons.go:234] Setting addon default-storageclass=true in "ha-064080"
	I0617 11:00:44.750761  130544 host.go:66] Checking if "ha-064080" exists ...
	I0617 11:00:44.751035  130544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:00:44.751075  130544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:00:44.762874  130544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46015
	I0617 11:00:44.763321  130544 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:00:44.763884  130544 main.go:141] libmachine: Using API Version  1
	I0617 11:00:44.763909  130544 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:00:44.764325  130544 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:00:44.764743  130544 main.go:141] libmachine: (ha-064080) Calling .GetState
	I0617 11:00:44.765008  130544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39529
	I0617 11:00:44.765558  130544 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:00:44.766117  130544 main.go:141] libmachine: Using API Version  1
	I0617 11:00:44.766145  130544 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:00:44.766417  130544 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:00:44.766555  130544 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:00:44.768596  130544 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 11:00:44.767060  130544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:00:44.769781  130544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:00:44.769861  130544 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 11:00:44.769880  130544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0617 11:00:44.769894  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:00:44.772698  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:44.773096  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:44.773123  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:44.773374  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:00:44.773564  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:44.773751  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:00:44.773927  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:00:44.784884  130544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42551
	I0617 11:00:44.785277  130544 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:00:44.785703  130544 main.go:141] libmachine: Using API Version  1
	I0617 11:00:44.785729  130544 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:00:44.786106  130544 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:00:44.786273  130544 main.go:141] libmachine: (ha-064080) Calling .GetState
	I0617 11:00:44.787624  130544 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:00:44.787831  130544 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0617 11:00:44.787848  130544 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0617 11:00:44.787868  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:00:44.790558  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:44.790960  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:44.790986  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:44.791159  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:00:44.791348  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:44.791524  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:00:44.791670  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:00:44.852414  130544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0617 11:00:44.911145  130544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0617 11:00:44.926112  130544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 11:00:45.306029  130544 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
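	(The sed pipeline at 11:00:44.852414 above is what produced this "host record injected" message: it rewrites the coredns ConfigMap in kube-system so the Corefile gains a hosts block ahead of the forward plugin, plus a log directive ahead of errors. Reconstructed from that command alone, the inserted fragment is:

	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }

	which is what lets pods resolve host.minikube.internal to the host-side address 192.168.39.1.)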
	I0617 11:00:45.306123  130544 main.go:141] libmachine: Making call to close driver server
	I0617 11:00:45.306143  130544 main.go:141] libmachine: (ha-064080) Calling .Close
	I0617 11:00:45.306413  130544 main.go:141] libmachine: Successfully made call to close driver server
	I0617 11:00:45.306432  130544 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 11:00:45.306448  130544 main.go:141] libmachine: Making call to close driver server
	I0617 11:00:45.306457  130544 main.go:141] libmachine: (ha-064080) Calling .Close
	I0617 11:00:45.306659  130544 main.go:141] libmachine: Successfully made call to close driver server
	I0617 11:00:45.306674  130544 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 11:00:45.306694  130544 main.go:141] libmachine: (ha-064080) DBG | Closing plugin on server side
	I0617 11:00:45.306819  130544 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0617 11:00:45.306835  130544 round_trippers.go:469] Request Headers:
	I0617 11:00:45.306847  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:00:45.306852  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:00:45.318232  130544 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0617 11:00:45.318811  130544 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0617 11:00:45.318824  130544 round_trippers.go:469] Request Headers:
	I0617 11:00:45.318832  130544 round_trippers.go:473]     Content-Type: application/json
	I0617 11:00:45.318836  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:00:45.318839  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:00:45.331975  130544 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0617 11:00:45.332472  130544 main.go:141] libmachine: Making call to close driver server
	I0617 11:00:45.332486  130544 main.go:141] libmachine: (ha-064080) Calling .Close
	I0617 11:00:45.332782  130544 main.go:141] libmachine: Successfully made call to close driver server
	I0617 11:00:45.332800  130544 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 11:00:45.332820  130544 main.go:141] libmachine: (ha-064080) DBG | Closing plugin on server side
	I0617 11:00:45.502038  130544 main.go:141] libmachine: Making call to close driver server
	I0617 11:00:45.502069  130544 main.go:141] libmachine: (ha-064080) Calling .Close
	I0617 11:00:45.502384  130544 main.go:141] libmachine: Successfully made call to close driver server
	I0617 11:00:45.502403  130544 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 11:00:45.502418  130544 main.go:141] libmachine: Making call to close driver server
	I0617 11:00:45.502426  130544 main.go:141] libmachine: (ha-064080) Calling .Close
	I0617 11:00:45.502692  130544 main.go:141] libmachine: Successfully made call to close driver server
	I0617 11:00:45.502708  130544 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 11:00:45.502727  130544 main.go:141] libmachine: (ha-064080) DBG | Closing plugin on server side
	I0617 11:00:45.505543  130544 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0617 11:00:45.506915  130544 addons.go:510] duration metric: took 776.432264ms for enable addons: enabled=[default-storageclass storage-provisioner]
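	(With both addons reported as enabled, the installed objects can be spot-checked from any kubeconfig that points at the cluster. A hypothetical check, not part of this test run — the context name ha-064080 and the pod name storage-provisioner are minikube's usual defaults and are assumed here; the storageclass name standard matches the PUT at 11:00:45.318811:

	    kubectl --context ha-064080 get storageclass standard
	    kubectl --context ha-064080 -n kube-system get pod storage-provisioner
	)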
	I0617 11:00:45.506950  130544 start.go:245] waiting for cluster config update ...
	I0617 11:00:45.506963  130544 start.go:254] writing updated cluster config ...
	I0617 11:00:45.508748  130544 out.go:177] 
	I0617 11:00:45.510038  130544 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:00:45.510124  130544 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/config.json ...
	I0617 11:00:45.511639  130544 out.go:177] * Starting "ha-064080-m02" control-plane node in "ha-064080" cluster
	I0617 11:00:45.512603  130544 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 11:00:45.512621  130544 cache.go:56] Caching tarball of preloaded images
	I0617 11:00:45.512721  130544 preload.go:173] Found /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0617 11:00:45.512733  130544 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0617 11:00:45.512807  130544 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/config.json ...
	I0617 11:00:45.512966  130544 start.go:360] acquireMachinesLock for ha-064080-m02: {Name:mk519b8956d160a9d2b042f25b899a5ee0efa72e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 11:00:45.513006  130544 start.go:364] duration metric: took 21.895µs to acquireMachinesLock for "ha-064080-m02"
	I0617 11:00:45.513028  130544 start.go:93] Provisioning new machine with config: &{Name:ha-064080 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-064080 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 11:00:45.513122  130544 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0617 11:00:45.514311  130544 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0617 11:00:45.514388  130544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:00:45.514413  130544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:00:45.529044  130544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34831
	I0617 11:00:45.529542  130544 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:00:45.530105  130544 main.go:141] libmachine: Using API Version  1
	I0617 11:00:45.530141  130544 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:00:45.530419  130544 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:00:45.530594  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetMachineName
	I0617 11:00:45.530776  130544 main.go:141] libmachine: (ha-064080-m02) Calling .DriverName
	I0617 11:00:45.530933  130544 start.go:159] libmachine.API.Create for "ha-064080" (driver="kvm2")
	I0617 11:00:45.530960  130544 client.go:168] LocalClient.Create starting
	I0617 11:00:45.531001  130544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem
	I0617 11:00:45.531059  130544 main.go:141] libmachine: Decoding PEM data...
	I0617 11:00:45.531090  130544 main.go:141] libmachine: Parsing certificate...
	I0617 11:00:45.531165  130544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem
	I0617 11:00:45.531193  130544 main.go:141] libmachine: Decoding PEM data...
	I0617 11:00:45.531207  130544 main.go:141] libmachine: Parsing certificate...
	I0617 11:00:45.531241  130544 main.go:141] libmachine: Running pre-create checks...
	I0617 11:00:45.531253  130544 main.go:141] libmachine: (ha-064080-m02) Calling .PreCreateCheck
	I0617 11:00:45.531411  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetConfigRaw
	I0617 11:00:45.531837  130544 main.go:141] libmachine: Creating machine...
	I0617 11:00:45.531855  130544 main.go:141] libmachine: (ha-064080-m02) Calling .Create
	I0617 11:00:45.531970  130544 main.go:141] libmachine: (ha-064080-m02) Creating KVM machine...
	I0617 11:00:45.533206  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found existing default KVM network
	I0617 11:00:45.533346  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found existing private KVM network mk-ha-064080
	I0617 11:00:45.533505  130544 main.go:141] libmachine: (ha-064080-m02) Setting up store path in /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02 ...
	I0617 11:00:45.533530  130544 main.go:141] libmachine: (ha-064080-m02) Building disk image from file:///home/jenkins/minikube-integration/19084-112967/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso
	I0617 11:00:45.533630  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:00:45.533507  130923 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 11:00:45.533700  130544 main.go:141] libmachine: (ha-064080-m02) Downloading /home/jenkins/minikube-integration/19084-112967/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19084-112967/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso...
	I0617 11:00:45.779663  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:00:45.779433  130923 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02/id_rsa...
	I0617 11:00:46.125617  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:00:46.125475  130923 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02/ha-064080-m02.rawdisk...
	I0617 11:00:46.125654  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Writing magic tar header
	I0617 11:00:46.125669  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Writing SSH key tar header
	I0617 11:00:46.125682  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:00:46.125589  130923 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02 ...
	I0617 11:00:46.125703  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02
	I0617 11:00:46.125776  130544 main.go:141] libmachine: (ha-064080-m02) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02 (perms=drwx------)
	I0617 11:00:46.125794  130544 main.go:141] libmachine: (ha-064080-m02) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube/machines (perms=drwxr-xr-x)
	I0617 11:00:46.125815  130544 main.go:141] libmachine: (ha-064080-m02) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube (perms=drwxr-xr-x)
	I0617 11:00:46.125827  130544 main.go:141] libmachine: (ha-064080-m02) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967 (perms=drwxrwxr-x)
	I0617 11:00:46.125839  130544 main.go:141] libmachine: (ha-064080-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0617 11:00:46.125848  130544 main.go:141] libmachine: (ha-064080-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0617 11:00:46.125855  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube/machines
	I0617 11:00:46.125865  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 11:00:46.125871  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967
	I0617 11:00:46.125877  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0617 11:00:46.125883  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Checking permissions on dir: /home/jenkins
	I0617 11:00:46.125888  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Checking permissions on dir: /home
	I0617 11:00:46.125895  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Skipping /home - not owner
	I0617 11:00:46.125905  130544 main.go:141] libmachine: (ha-064080-m02) Creating domain...
	I0617 11:00:46.126831  130544 main.go:141] libmachine: (ha-064080-m02) define libvirt domain using xml: 
	I0617 11:00:46.126856  130544 main.go:141] libmachine: (ha-064080-m02) <domain type='kvm'>
	I0617 11:00:46.126866  130544 main.go:141] libmachine: (ha-064080-m02)   <name>ha-064080-m02</name>
	I0617 11:00:46.126879  130544 main.go:141] libmachine: (ha-064080-m02)   <memory unit='MiB'>2200</memory>
	I0617 11:00:46.126886  130544 main.go:141] libmachine: (ha-064080-m02)   <vcpu>2</vcpu>
	I0617 11:00:46.126898  130544 main.go:141] libmachine: (ha-064080-m02)   <features>
	I0617 11:00:46.126920  130544 main.go:141] libmachine: (ha-064080-m02)     <acpi/>
	I0617 11:00:46.126931  130544 main.go:141] libmachine: (ha-064080-m02)     <apic/>
	I0617 11:00:46.126939  130544 main.go:141] libmachine: (ha-064080-m02)     <pae/>
	I0617 11:00:46.126946  130544 main.go:141] libmachine: (ha-064080-m02)     
	I0617 11:00:46.126955  130544 main.go:141] libmachine: (ha-064080-m02)   </features>
	I0617 11:00:46.126961  130544 main.go:141] libmachine: (ha-064080-m02)   <cpu mode='host-passthrough'>
	I0617 11:00:46.126973  130544 main.go:141] libmachine: (ha-064080-m02)   
	I0617 11:00:46.126983  130544 main.go:141] libmachine: (ha-064080-m02)   </cpu>
	I0617 11:00:46.126992  130544 main.go:141] libmachine: (ha-064080-m02)   <os>
	I0617 11:00:46.127003  130544 main.go:141] libmachine: (ha-064080-m02)     <type>hvm</type>
	I0617 11:00:46.127012  130544 main.go:141] libmachine: (ha-064080-m02)     <boot dev='cdrom'/>
	I0617 11:00:46.127026  130544 main.go:141] libmachine: (ha-064080-m02)     <boot dev='hd'/>
	I0617 11:00:46.127038  130544 main.go:141] libmachine: (ha-064080-m02)     <bootmenu enable='no'/>
	I0617 11:00:46.127047  130544 main.go:141] libmachine: (ha-064080-m02)   </os>
	I0617 11:00:46.127054  130544 main.go:141] libmachine: (ha-064080-m02)   <devices>
	I0617 11:00:46.127059  130544 main.go:141] libmachine: (ha-064080-m02)     <disk type='file' device='cdrom'>
	I0617 11:00:46.127071  130544 main.go:141] libmachine: (ha-064080-m02)       <source file='/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02/boot2docker.iso'/>
	I0617 11:00:46.127083  130544 main.go:141] libmachine: (ha-064080-m02)       <target dev='hdc' bus='scsi'/>
	I0617 11:00:46.127093  130544 main.go:141] libmachine: (ha-064080-m02)       <readonly/>
	I0617 11:00:46.127107  130544 main.go:141] libmachine: (ha-064080-m02)     </disk>
	I0617 11:00:46.127136  130544 main.go:141] libmachine: (ha-064080-m02)     <disk type='file' device='disk'>
	I0617 11:00:46.127163  130544 main.go:141] libmachine: (ha-064080-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0617 11:00:46.127193  130544 main.go:141] libmachine: (ha-064080-m02)       <source file='/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02/ha-064080-m02.rawdisk'/>
	I0617 11:00:46.127214  130544 main.go:141] libmachine: (ha-064080-m02)       <target dev='hda' bus='virtio'/>
	I0617 11:00:46.127227  130544 main.go:141] libmachine: (ha-064080-m02)     </disk>
	I0617 11:00:46.127238  130544 main.go:141] libmachine: (ha-064080-m02)     <interface type='network'>
	I0617 11:00:46.127251  130544 main.go:141] libmachine: (ha-064080-m02)       <source network='mk-ha-064080'/>
	I0617 11:00:46.127259  130544 main.go:141] libmachine: (ha-064080-m02)       <model type='virtio'/>
	I0617 11:00:46.127270  130544 main.go:141] libmachine: (ha-064080-m02)     </interface>
	I0617 11:00:46.127281  130544 main.go:141] libmachine: (ha-064080-m02)     <interface type='network'>
	I0617 11:00:46.127296  130544 main.go:141] libmachine: (ha-064080-m02)       <source network='default'/>
	I0617 11:00:46.127309  130544 main.go:141] libmachine: (ha-064080-m02)       <model type='virtio'/>
	I0617 11:00:46.127321  130544 main.go:141] libmachine: (ha-064080-m02)     </interface>
	I0617 11:00:46.127331  130544 main.go:141] libmachine: (ha-064080-m02)     <serial type='pty'>
	I0617 11:00:46.127340  130544 main.go:141] libmachine: (ha-064080-m02)       <target port='0'/>
	I0617 11:00:46.127350  130544 main.go:141] libmachine: (ha-064080-m02)     </serial>
	I0617 11:00:46.127366  130544 main.go:141] libmachine: (ha-064080-m02)     <console type='pty'>
	I0617 11:00:46.127381  130544 main.go:141] libmachine: (ha-064080-m02)       <target type='serial' port='0'/>
	I0617 11:00:46.127393  130544 main.go:141] libmachine: (ha-064080-m02)     </console>
	I0617 11:00:46.127403  130544 main.go:141] libmachine: (ha-064080-m02)     <rng model='virtio'>
	I0617 11:00:46.127416  130544 main.go:141] libmachine: (ha-064080-m02)       <backend model='random'>/dev/random</backend>
	I0617 11:00:46.127425  130544 main.go:141] libmachine: (ha-064080-m02)     </rng>
	I0617 11:00:46.127433  130544 main.go:141] libmachine: (ha-064080-m02)     
	I0617 11:00:46.127436  130544 main.go:141] libmachine: (ha-064080-m02)     
	I0617 11:00:46.127441  130544 main.go:141] libmachine: (ha-064080-m02)   </devices>
	I0617 11:00:46.127446  130544 main.go:141] libmachine: (ha-064080-m02) </domain>
	I0617 11:00:46.127475  130544 main.go:141] libmachine: (ha-064080-m02) 
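	(The XML above is the libvirt domain definition minikube submits over the qemu:///system connection for the second control-plane VM, and the "Waiting to get IP" retries that follow appear to be lookups in libvirt's DHCP lease table for the mk-ha-064080 network. Illustrative commands for inspecting the same state by hand on the host — not run by the test:

	    virsh -c qemu:///system dumpxml ha-064080-m02
	    virsh -c qemu:///system net-dhcp-leases mk-ha-064080
	)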
	I0617 11:00:46.134010  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:9a:bd:a4 in network default
	I0617 11:00:46.134480  130544 main.go:141] libmachine: (ha-064080-m02) Ensuring networks are active...
	I0617 11:00:46.134504  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:00:46.135144  130544 main.go:141] libmachine: (ha-064080-m02) Ensuring network default is active
	I0617 11:00:46.135468  130544 main.go:141] libmachine: (ha-064080-m02) Ensuring network mk-ha-064080 is active
	I0617 11:00:46.135833  130544 main.go:141] libmachine: (ha-064080-m02) Getting domain xml...
	I0617 11:00:46.136486  130544 main.go:141] libmachine: (ha-064080-m02) Creating domain...
	I0617 11:00:47.343471  130544 main.go:141] libmachine: (ha-064080-m02) Waiting to get IP...
	I0617 11:00:47.344979  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:00:47.345425  130544 main.go:141] libmachine: (ha-064080-m02) DBG | unable to find current IP address of domain ha-064080-m02 in network mk-ha-064080
	I0617 11:00:47.345457  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:00:47.345408  130923 retry.go:31] will retry after 211.785298ms: waiting for machine to come up
	I0617 11:00:47.559080  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:00:47.559629  130544 main.go:141] libmachine: (ha-064080-m02) DBG | unable to find current IP address of domain ha-064080-m02 in network mk-ha-064080
	I0617 11:00:47.559680  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:00:47.559594  130923 retry.go:31] will retry after 332.900963ms: waiting for machine to come up
	I0617 11:00:47.894147  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:00:47.894585  130544 main.go:141] libmachine: (ha-064080-m02) DBG | unable to find current IP address of domain ha-064080-m02 in network mk-ha-064080
	I0617 11:00:47.894612  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:00:47.894541  130923 retry.go:31] will retry after 315.785832ms: waiting for machine to come up
	I0617 11:00:48.212185  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:00:48.212649  130544 main.go:141] libmachine: (ha-064080-m02) DBG | unable to find current IP address of domain ha-064080-m02 in network mk-ha-064080
	I0617 11:00:48.212680  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:00:48.212600  130923 retry.go:31] will retry after 544.793078ms: waiting for machine to come up
	I0617 11:00:48.759569  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:00:48.760109  130544 main.go:141] libmachine: (ha-064080-m02) DBG | unable to find current IP address of domain ha-064080-m02 in network mk-ha-064080
	I0617 11:00:48.760162  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:00:48.760072  130923 retry.go:31] will retry after 602.98657ms: waiting for machine to come up
	I0617 11:00:49.365213  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:00:49.365714  130544 main.go:141] libmachine: (ha-064080-m02) DBG | unable to find current IP address of domain ha-064080-m02 in network mk-ha-064080
	I0617 11:00:49.365748  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:00:49.365660  130923 retry.go:31] will retry after 709.551079ms: waiting for machine to come up
	I0617 11:00:50.076458  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:00:50.076926  130544 main.go:141] libmachine: (ha-064080-m02) DBG | unable to find current IP address of domain ha-064080-m02 in network mk-ha-064080
	I0617 11:00:50.076954  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:00:50.076887  130923 retry.go:31] will retry after 830.396763ms: waiting for machine to come up
	I0617 11:00:50.909275  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:00:50.909649  130544 main.go:141] libmachine: (ha-064080-m02) DBG | unable to find current IP address of domain ha-064080-m02 in network mk-ha-064080
	I0617 11:00:50.909682  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:00:50.909593  130923 retry.go:31] will retry after 1.135405761s: waiting for machine to come up
	I0617 11:00:52.046935  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:00:52.047270  130544 main.go:141] libmachine: (ha-064080-m02) DBG | unable to find current IP address of domain ha-064080-m02 in network mk-ha-064080
	I0617 11:00:52.047309  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:00:52.047221  130923 retry.go:31] will retry after 1.708159376s: waiting for machine to come up
	I0617 11:00:53.757441  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:00:53.757884  130544 main.go:141] libmachine: (ha-064080-m02) DBG | unable to find current IP address of domain ha-064080-m02 in network mk-ha-064080
	I0617 11:00:53.757908  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:00:53.757833  130923 retry.go:31] will retry after 1.480812383s: waiting for machine to come up
	I0617 11:00:55.240499  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:00:55.240972  130544 main.go:141] libmachine: (ha-064080-m02) DBG | unable to find current IP address of domain ha-064080-m02 in network mk-ha-064080
	I0617 11:00:55.241002  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:00:55.240947  130923 retry.go:31] will retry after 2.538066125s: waiting for machine to come up
	I0617 11:00:57.781065  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:00:57.781429  130544 main.go:141] libmachine: (ha-064080-m02) DBG | unable to find current IP address of domain ha-064080-m02 in network mk-ha-064080
	I0617 11:00:57.781456  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:00:57.781378  130923 retry.go:31] will retry after 2.954010378s: waiting for machine to come up
	I0617 11:01:00.736714  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:00.737128  130544 main.go:141] libmachine: (ha-064080-m02) DBG | unable to find current IP address of domain ha-064080-m02 in network mk-ha-064080
	I0617 11:01:00.737150  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:01:00.737100  130923 retry.go:31] will retry after 4.208220574s: waiting for machine to come up
	I0617 11:01:04.950383  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:04.950775  130544 main.go:141] libmachine: (ha-064080-m02) DBG | unable to find current IP address of domain ha-064080-m02 in network mk-ha-064080
	I0617 11:01:04.950800  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:01:04.950741  130923 retry.go:31] will retry after 3.676530568s: waiting for machine to come up
	I0617 11:01:08.628596  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:08.629128  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has current primary IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:08.629167  130544 main.go:141] libmachine: (ha-064080-m02) Found IP for machine: 192.168.39.104
	I0617 11:01:08.629184  130544 main.go:141] libmachine: (ha-064080-m02) Reserving static IP address...
	I0617 11:01:08.629493  130544 main.go:141] libmachine: (ha-064080-m02) DBG | unable to find host DHCP lease matching {name: "ha-064080-m02", mac: "52:54:00:75:79:30", ip: "192.168.39.104"} in network mk-ha-064080
	I0617 11:01:08.699534  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Getting to WaitForSSH function...
	I0617 11:01:08.699569  130544 main.go:141] libmachine: (ha-064080-m02) Reserved static IP address: 192.168.39.104
	I0617 11:01:08.699589  130544 main.go:141] libmachine: (ha-064080-m02) Waiting for SSH to be available...
	I0617 11:01:08.702286  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:08.702662  130544 main.go:141] libmachine: (ha-064080-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080
	I0617 11:01:08.702693  130544 main.go:141] libmachine: (ha-064080-m02) DBG | unable to find defined IP address of network mk-ha-064080 interface with MAC address 52:54:00:75:79:30
	I0617 11:01:08.702838  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Using SSH client type: external
	I0617 11:01:08.702869  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02/id_rsa (-rw-------)
	I0617 11:01:08.702903  130544 main.go:141] libmachine: (ha-064080-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 11:01:08.702915  130544 main.go:141] libmachine: (ha-064080-m02) DBG | About to run SSH command:
	I0617 11:01:08.702930  130544 main.go:141] libmachine: (ha-064080-m02) DBG | exit 0
	I0617 11:01:08.706245  130544 main.go:141] libmachine: (ha-064080-m02) DBG | SSH cmd err, output: exit status 255: 
	I0617 11:01:08.706268  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0617 11:01:08.706295  130544 main.go:141] libmachine: (ha-064080-m02) DBG | command : exit 0
	I0617 11:01:08.706305  130544 main.go:141] libmachine: (ha-064080-m02) DBG | err     : exit status 255
	I0617 11:01:08.706312  130544 main.go:141] libmachine: (ha-064080-m02) DBG | output  : 
	I0617 11:01:11.707159  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Getting to WaitForSSH function...
	I0617 11:01:11.709544  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:11.710037  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:01:11.710057  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:11.710174  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Using SSH client type: external
	I0617 11:01:11.710200  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02/id_rsa (-rw-------)
	I0617 11:01:11.710231  130544 main.go:141] libmachine: (ha-064080-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 11:01:11.710240  130544 main.go:141] libmachine: (ha-064080-m02) DBG | About to run SSH command:
	I0617 11:01:11.710248  130544 main.go:141] libmachine: (ha-064080-m02) DBG | exit 0
	I0617 11:01:11.831344  130544 main.go:141] libmachine: (ha-064080-m02) DBG | SSH cmd err, output: <nil>: 
	I0617 11:01:11.831592  130544 main.go:141] libmachine: (ha-064080-m02) KVM machine creation complete!
	I0617 11:01:11.831974  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetConfigRaw
	I0617 11:01:11.832615  130544 main.go:141] libmachine: (ha-064080-m02) Calling .DriverName
	I0617 11:01:11.832818  130544 main.go:141] libmachine: (ha-064080-m02) Calling .DriverName
	I0617 11:01:11.832975  130544 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0617 11:01:11.833027  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetState
	I0617 11:01:11.834431  130544 main.go:141] libmachine: Detecting operating system of created instance...
	I0617 11:01:11.834446  130544 main.go:141] libmachine: Waiting for SSH to be available...
	I0617 11:01:11.834452  130544 main.go:141] libmachine: Getting to WaitForSSH function...
	I0617 11:01:11.834459  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHHostname
	I0617 11:01:11.836666  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:11.837128  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:01:11.837161  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:11.837305  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHPort
	I0617 11:01:11.837476  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:01:11.837635  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:01:11.837782  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHUsername
	I0617 11:01:11.837945  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:01:11.838206  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0617 11:01:11.838221  130544 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0617 11:01:11.934443  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 11:01:11.934468  130544 main.go:141] libmachine: Detecting the provisioner...
	I0617 11:01:11.934476  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHHostname
	I0617 11:01:11.937070  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:11.937416  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:01:11.937450  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:11.937604  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHPort
	I0617 11:01:11.937825  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:01:11.937985  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:01:11.938153  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHUsername
	I0617 11:01:11.938344  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:01:11.938538  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0617 11:01:11.938554  130544 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0617 11:01:12.035996  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0617 11:01:12.036087  130544 main.go:141] libmachine: found compatible host: buildroot
	I0617 11:01:12.036098  130544 main.go:141] libmachine: Provisioning with buildroot...
	I0617 11:01:12.036106  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetMachineName
	I0617 11:01:12.036331  130544 buildroot.go:166] provisioning hostname "ha-064080-m02"
	I0617 11:01:12.036356  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetMachineName
	I0617 11:01:12.036541  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHHostname
	I0617 11:01:12.039490  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.039870  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:01:12.039901  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.040009  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHPort
	I0617 11:01:12.040196  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:01:12.040368  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:01:12.040518  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHUsername
	I0617 11:01:12.040743  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:01:12.040962  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0617 11:01:12.040982  130544 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-064080-m02 && echo "ha-064080-m02" | sudo tee /etc/hostname
	I0617 11:01:12.153747  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-064080-m02
	
	I0617 11:01:12.153775  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHHostname
	I0617 11:01:12.156496  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.156918  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:01:12.156944  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.157239  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHPort
	I0617 11:01:12.157452  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:01:12.157642  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:01:12.157882  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHUsername
	I0617 11:01:12.158112  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:01:12.158294  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0617 11:01:12.158311  130544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-064080-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-064080-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-064080-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 11:01:12.264728  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 11:01:12.264762  130544 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 11:01:12.264793  130544 buildroot.go:174] setting up certificates
	I0617 11:01:12.264811  130544 provision.go:84] configureAuth start
	I0617 11:01:12.264833  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetMachineName
	I0617 11:01:12.265115  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetIP
	I0617 11:01:12.267534  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.267922  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:01:12.267950  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.268113  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHHostname
	I0617 11:01:12.270296  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.270630  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:01:12.270653  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.270793  130544 provision.go:143] copyHostCerts
	I0617 11:01:12.270835  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 11:01:12.270871  130544 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 11:01:12.270882  130544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 11:01:12.270959  130544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 11:01:12.271062  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 11:01:12.271095  130544 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 11:01:12.271105  130544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 11:01:12.271143  130544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 11:01:12.271198  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 11:01:12.271222  130544 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 11:01:12.271231  130544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 11:01:12.271263  130544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 11:01:12.271322  130544 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.ha-064080-m02 san=[127.0.0.1 192.168.39.104 ha-064080-m02 localhost minikube]
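The server certificate above is produced in Go by the provisioner, signed by the machine CA and carrying the SANs listed in the san=[...] field. A rough openssl equivalent of that signing step, shown only as an illustration (file names are placeholders, not minikube's):

    # illustrative only: sign a server cert with the machine CA and the same SAN list
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr -subj "/O=jenkins.ha-064080-m02"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.104,DNS:ha-064080-m02,DNS:localhost,DNS:minikube")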
	I0617 11:01:12.322631  130544 provision.go:177] copyRemoteCerts
	I0617 11:01:12.322699  130544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 11:01:12.322736  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHHostname
	I0617 11:01:12.325071  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.325369  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:01:12.325399  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.325596  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHPort
	I0617 11:01:12.325795  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:01:12.325976  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHUsername
	I0617 11:01:12.326145  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02/id_rsa Username:docker}
	I0617 11:01:12.405165  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0617 11:01:12.405239  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 11:01:12.429072  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0617 11:01:12.429134  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0617 11:01:12.451851  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0617 11:01:12.451902  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0617 11:01:12.474910  130544 provision.go:87] duration metric: took 210.080891ms to configureAuth
	I0617 11:01:12.474942  130544 buildroot.go:189] setting minikube options for container-runtime
	I0617 11:01:12.475119  130544 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:01:12.475196  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHHostname
	I0617 11:01:12.477636  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.477975  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:01:12.478001  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.478149  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHPort
	I0617 11:01:12.478369  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:01:12.478588  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:01:12.478723  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHUsername
	I0617 11:01:12.478876  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:01:12.479095  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0617 11:01:12.479110  130544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 11:01:12.742926  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
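The %!s(MISSING) tokens in the command above are Go format-verb noise in minikube's own log message, not part of what ran on the node; reconstructed from the surrounding output, the command was almost certainly:

    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio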
	I0617 11:01:12.742957  130544 main.go:141] libmachine: Checking connection to Docker...
	I0617 11:01:12.742967  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetURL
	I0617 11:01:12.744212  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Using libvirt version 6000000
	I0617 11:01:12.746412  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.746780  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:01:12.746815  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.746961  130544 main.go:141] libmachine: Docker is up and running!
	I0617 11:01:12.746979  130544 main.go:141] libmachine: Reticulating splines...
	I0617 11:01:12.746988  130544 client.go:171] duration metric: took 27.216016787s to LocalClient.Create
	I0617 11:01:12.747011  130544 start.go:167] duration metric: took 27.216080027s to libmachine.API.Create "ha-064080"
	I0617 11:01:12.747021  130544 start.go:293] postStartSetup for "ha-064080-m02" (driver="kvm2")
	I0617 11:01:12.747030  130544 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 11:01:12.747046  130544 main.go:141] libmachine: (ha-064080-m02) Calling .DriverName
	I0617 11:01:12.747315  130544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 11:01:12.747356  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHHostname
	I0617 11:01:12.749500  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.749828  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:01:12.749865  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.750010  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHPort
	I0617 11:01:12.750229  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:01:12.750391  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHUsername
	I0617 11:01:12.750540  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02/id_rsa Username:docker}
	I0617 11:01:12.829197  130544 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 11:01:12.833560  130544 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 11:01:12.833587  130544 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 11:01:12.833660  130544 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 11:01:12.833751  130544 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 11:01:12.833763  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> /etc/ssl/certs/1201742.pem
	I0617 11:01:12.833875  130544 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 11:01:12.843269  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 11:01:12.868058  130544 start.go:296] duration metric: took 121.020777ms for postStartSetup
	I0617 11:01:12.868105  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetConfigRaw
	I0617 11:01:12.868660  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetIP
	I0617 11:01:12.871026  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.871346  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:01:12.871377  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.871613  130544 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/config.json ...
	I0617 11:01:12.871834  130544 start.go:128] duration metric: took 27.358698337s to createHost
	I0617 11:01:12.871858  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHHostname
	I0617 11:01:12.874078  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.874413  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:01:12.874442  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.874559  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHPort
	I0617 11:01:12.874738  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:01:12.874886  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:01:12.875003  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHUsername
	I0617 11:01:12.875149  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:01:12.875350  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0617 11:01:12.875365  130544 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0617 11:01:12.972337  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718622072.949961819
	
	I0617 11:01:12.972369  130544 fix.go:216] guest clock: 1718622072.949961819
	I0617 11:01:12.972379  130544 fix.go:229] Guest: 2024-06-17 11:01:12.949961819 +0000 UTC Remote: 2024-06-17 11:01:12.87184639 +0000 UTC m=+80.378765384 (delta=78.115429ms)
	I0617 11:01:12.972400  130544 fix.go:200] guest clock delta is within tolerance: 78.115429ms
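The clock check above runs a single command on the guest (the mangled verbs two dozen lines up correspond to date +%s.%N, seconds and nanoseconds since the epoch) and compares it with the host clock. A minimal sketch of the same comparison, assuming GNU date and the SSH key/user shown in the sshutil lines:

    # reproduce the guest-clock check by hand (key path and user taken from the log above)
    guest=$(ssh -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02/id_rsa \
        docker@192.168.39.104 'date +%s.%N')
    host=$(date +%s.%N)
    echo "skew: $(echo "$host - $guest" | bc)s"   # this run measured ~0.078s, within tolerance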
	I0617 11:01:12.972406  130544 start.go:83] releasing machines lock for "ha-064080-m02", held for 27.459392322s
	I0617 11:01:12.972423  130544 main.go:141] libmachine: (ha-064080-m02) Calling .DriverName
	I0617 11:01:12.972680  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetIP
	I0617 11:01:12.975076  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.975450  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:01:12.975503  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.977711  130544 out.go:177] * Found network options:
	I0617 11:01:12.979057  130544 out.go:177]   - NO_PROXY=192.168.39.134
	W0617 11:01:12.980315  130544 proxy.go:119] fail to check proxy env: Error ip not in block
	I0617 11:01:12.980343  130544 main.go:141] libmachine: (ha-064080-m02) Calling .DriverName
	I0617 11:01:12.980861  130544 main.go:141] libmachine: (ha-064080-m02) Calling .DriverName
	I0617 11:01:12.981050  130544 main.go:141] libmachine: (ha-064080-m02) Calling .DriverName
	I0617 11:01:12.981146  130544 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 11:01:12.981198  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHHostname
	W0617 11:01:12.981277  130544 proxy.go:119] fail to check proxy env: Error ip not in block
	I0617 11:01:12.981383  130544 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 11:01:12.981415  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHHostname
	I0617 11:01:12.983937  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.984269  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:01:12.984294  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.984311  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.984426  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHPort
	I0617 11:01:12.984594  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:01:12.984727  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:01:12.984751  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.984761  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHUsername
	I0617 11:01:12.984883  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHPort
	I0617 11:01:12.984965  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02/id_rsa Username:docker}
	I0617 11:01:12.985043  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:01:12.985168  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHUsername
	I0617 11:01:12.985318  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02/id_rsa Username:docker}
	I0617 11:01:13.210941  130544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 11:01:13.217160  130544 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 11:01:13.217221  130544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 11:01:13.236585  130544 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 11:01:13.236606  130544 start.go:494] detecting cgroup driver to use...
	I0617 11:01:13.236663  130544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 11:01:13.255187  130544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 11:01:13.268562  130544 docker.go:217] disabling cri-docker service (if available) ...
	I0617 11:01:13.268610  130544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 11:01:13.281859  130544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 11:01:13.297344  130544 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 11:01:13.427601  130544 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 11:01:13.595295  130544 docker.go:233] disabling docker service ...
	I0617 11:01:13.595378  130544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 11:01:13.610594  130544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 11:01:13.624171  130544 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 11:01:13.751731  130544 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 11:01:13.868028  130544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 11:01:13.881912  130544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 11:01:13.900611  130544 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0617 11:01:13.900662  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:01:13.910803  130544 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 11:01:13.910876  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:01:13.921361  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:01:13.931484  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:01:13.941687  130544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 11:01:13.952131  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:01:13.962186  130544 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:01:13.978500  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:01:13.989346  130544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 11:01:13.999104  130544 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 11:01:13.999158  130544 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 11:01:14.013279  130544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 11:01:14.022992  130544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:01:14.132078  130544 ssh_runner.go:195] Run: sudo systemctl restart crio
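Taken together, the CRI-O preparation above points crictl at crio.sock via /etc/crictl.yaml, pins the pause image, switches the cgroup manager to cgroupfs, moves conmon into the pod cgroup, opens net.ipv4.ip_unprivileged_port_start=0 through default_sysctls, loads br_netfilter (the sysctl probe failed only because the module was not yet loaded) and enables IP forwarding before restarting crio. Condensed to the essential commands from the log:

    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo modprobe br_netfilter                           # makes bridge-nf-call-iptables available
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio
    sudo crictl info                                     # crictl now reads /etc/crictl.yaml and talks to crio.sock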
	I0617 11:01:14.265716  130544 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 11:01:14.265803  130544 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 11:01:14.270597  130544 start.go:562] Will wait 60s for crictl version
	I0617 11:01:14.270646  130544 ssh_runner.go:195] Run: which crictl
	I0617 11:01:14.274526  130544 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 11:01:14.313924  130544 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 11:01:14.313999  130544 ssh_runner.go:195] Run: crio --version
	I0617 11:01:14.340661  130544 ssh_runner.go:195] Run: crio --version
	I0617 11:01:14.372490  130544 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0617 11:01:14.373933  130544 out.go:177]   - env NO_PROXY=192.168.39.134
	I0617 11:01:14.375027  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetIP
	I0617 11:01:14.377530  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:14.377863  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:01:14.377888  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:14.378138  130544 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0617 11:01:14.382101  130544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 11:01:14.393964  130544 mustload.go:65] Loading cluster: ha-064080
	I0617 11:01:14.394151  130544 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:01:14.394480  130544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:01:14.394526  130544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:01:14.408962  130544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38843
	I0617 11:01:14.409353  130544 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:01:14.409807  130544 main.go:141] libmachine: Using API Version  1
	I0617 11:01:14.409829  130544 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:01:14.410147  130544 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:01:14.410332  130544 main.go:141] libmachine: (ha-064080) Calling .GetState
	I0617 11:01:14.411927  130544 host.go:66] Checking if "ha-064080" exists ...
	I0617 11:01:14.412241  130544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:01:14.412285  130544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:01:14.426252  130544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40467
	I0617 11:01:14.426591  130544 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:01:14.427054  130544 main.go:141] libmachine: Using API Version  1
	I0617 11:01:14.427078  130544 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:01:14.427356  130544 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:01:14.427573  130544 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:01:14.427718  130544 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080 for IP: 192.168.39.104
	I0617 11:01:14.427729  130544 certs.go:194] generating shared ca certs ...
	I0617 11:01:14.427741  130544 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:01:14.427901  130544 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 11:01:14.427963  130544 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 11:01:14.427977  130544 certs.go:256] generating profile certs ...
	I0617 11:01:14.428078  130544 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/client.key
	I0617 11:01:14.428104  130544 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key.18341ce7
	I0617 11:01:14.428118  130544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt.18341ce7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.134 192.168.39.104 192.168.39.254]
	I0617 11:01:14.526426  130544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt.18341ce7 ...
	I0617 11:01:14.526455  130544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt.18341ce7: {Name:mk3de114e69e7b0d34c18a1c37ebb9ee23768745 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:01:14.526638  130544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key.18341ce7 ...
	I0617 11:01:14.526655  130544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key.18341ce7: {Name:mk8ac56e4ffc8e71aee80985cf9f1ec72c32422f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:01:14.526748  130544 certs.go:381] copying /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt.18341ce7 -> /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt
	I0617 11:01:14.526913  130544 certs.go:385] copying /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key.18341ce7 -> /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key
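The apiserver certificate generated here must cover every address a client may dial, so its SAN list (the IP's field in the crypto.go line above) includes the service IP 10.96.0.1, both node IPs and the kube-vip VIP 192.168.39.254. Once copied to /var/lib/minikube/certs further down, the SAN list can be inspected on the node with standard openssl, as an illustration:

    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
      | grep -A1 'Subject Alternative Name'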
	I0617 11:01:14.527103  130544 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.key
	I0617 11:01:14.527122  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0617 11:01:14.527138  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0617 11:01:14.527163  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0617 11:01:14.527181  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0617 11:01:14.527201  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0617 11:01:14.527214  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0617 11:01:14.527233  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0617 11:01:14.527250  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0617 11:01:14.527315  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 11:01:14.527356  130544 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 11:01:14.527370  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 11:01:14.527520  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 11:01:14.527588  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 11:01:14.527624  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 11:01:14.527689  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 11:01:14.527737  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:01:14.527758  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem -> /usr/share/ca-certificates/120174.pem
	I0617 11:01:14.527774  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> /usr/share/ca-certificates/1201742.pem
	I0617 11:01:14.527815  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:01:14.530883  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:01:14.531342  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:01:14.531362  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:01:14.531548  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:01:14.531742  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:01:14.531888  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:01:14.532007  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:01:14.603746  130544 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0617 11:01:14.609215  130544 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0617 11:01:14.620177  130544 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0617 11:01:14.624790  130544 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0617 11:01:14.634384  130544 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0617 11:01:14.638567  130544 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0617 11:01:14.648091  130544 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0617 11:01:14.651955  130544 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0617 11:01:14.661478  130544 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0617 11:01:14.665404  130544 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0617 11:01:14.674676  130544 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0617 11:01:14.683789  130544 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0617 11:01:14.695560  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 11:01:14.722416  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 11:01:14.745647  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 11:01:14.771449  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 11:01:14.794474  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0617 11:01:14.817946  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0617 11:01:14.840589  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 11:01:14.863739  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0617 11:01:14.886316  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 11:01:14.910114  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 11:01:14.932744  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 11:01:14.955402  130544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0617 11:01:14.971761  130544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0617 11:01:14.989783  130544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0617 11:01:15.005936  130544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0617 11:01:15.022929  130544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0617 11:01:15.039950  130544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0617 11:01:15.058374  130544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0617 11:01:15.075468  130544 ssh_runner.go:195] Run: openssl version
	I0617 11:01:15.081047  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 11:01:15.091249  130544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:01:15.095674  130544 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:01:15.095716  130544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:01:15.101361  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 11:01:15.111367  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 11:01:15.122010  130544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 11:01:15.126600  130544 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 11:01:15.126649  130544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 11:01:15.132424  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
	I0617 11:01:15.142882  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 11:01:15.153571  130544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 11:01:15.158084  130544 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 11:01:15.158119  130544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 11:01:15.163773  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
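The symlink names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-name hashes: linking each PEM under /etc/ssl/certs/<hash>.0 lets TLS clients look a CA up by issuer. The hash can be reproduced with the same openssl call the test itself runs:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 for this CA
    ls -l /etc/ssl/certs/b5213941.0                                           # symlink back to minikubeCA.pem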
	I0617 11:01:15.174238  130544 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 11:01:15.178293  130544 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0617 11:01:15.178370  130544 kubeadm.go:928] updating node {m02 192.168.39.104 8443 v1.30.1 crio true true} ...
	I0617 11:01:15.178466  130544 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-064080-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-064080 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 11:01:15.178493  130544 kube-vip.go:115] generating kube-vip config ...
	I0617 11:01:15.178528  130544 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0617 11:01:15.193481  130544 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0617 11:01:15.193532  130544 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
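The manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml a few lines further down, so kubelet runs kube-vip as a static pod on each control-plane node; it advertises the shared VIP 192.168.39.254 over ARP on eth0, load-balances the API server on port 8443 and elects a leader through the plndr-cp-lock lease. An illustrative check from inside a node once kubelet is up:

    sudo crictl ps --name kube-vip                # the static pod's container
    ip addr show dev eth0 | grep 192.168.39.254   # the VIP is typically bound only on the current leader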
	I0617 11:01:15.193576  130544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0617 11:01:15.204861  130544 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0617 11:01:15.204904  130544 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0617 11:01:15.216091  130544 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0617 11:01:15.216108  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0617 11:01:15.216188  130544 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0617 11:01:15.216218  130544 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19084-112967/.minikube/cache/linux/amd64/v1.30.1/kubeadm
	I0617 11:01:15.216218  130544 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19084-112967/.minikube/cache/linux/amd64/v1.30.1/kubelet
	I0617 11:01:15.220541  130544 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0617 11:01:15.220564  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0617 11:01:15.757925  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0617 11:01:15.758016  130544 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0617 11:01:15.763146  130544 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0617 11:01:15.763184  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0617 11:01:21.006629  130544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:01:21.021935  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0617 11:01:21.022060  130544 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0617 11:01:21.026450  130544 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0617 11:01:21.026496  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
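kubectl, kubeadm and kubelet are fetched from the official release URLs shown above, with the published .sha256 files used as checksums (the checksum=file: suffix). The same download and verification done by hand for one of the binaries:

    curl -LO https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet
    curl -LO https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check   # prints "kubelet: OK" on a good download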
	I0617 11:01:21.430117  130544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0617 11:01:21.439567  130544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0617 11:01:21.456842  130544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 11:01:21.473424  130544 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0617 11:01:21.490334  130544 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0617 11:01:21.494244  130544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 11:01:21.506424  130544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:01:21.635119  130544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 11:01:21.651914  130544 host.go:66] Checking if "ha-064080" exists ...
	I0617 11:01:21.652339  130544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:01:21.652381  130544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:01:21.668322  130544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43219
	I0617 11:01:21.668790  130544 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:01:21.669282  130544 main.go:141] libmachine: Using API Version  1
	I0617 11:01:21.669306  130544 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:01:21.669672  130544 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:01:21.669891  130544 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:01:21.670051  130544 start.go:316] joinCluster: &{Name:ha-064080 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-064080 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:01:21.670149  130544 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0617 11:01:21.670173  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:01:21.672980  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:01:21.673341  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:01:21.673371  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:01:21.673535  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:01:21.673723  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:01:21.673910  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:01:21.674067  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:01:21.833488  130544 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 11:01:21.833555  130544 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qrrepk.1y0r7o63mtidua42 --discovery-token-ca-cert-hash sha256:a750c130b3df91ed6d57229f5a5d5a2ee0acd56a757f499599f368bc07dbf207 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-064080-m02 --control-plane --apiserver-advertise-address=192.168.39.104 --apiserver-bind-port=8443"
	I0617 11:01:44.891838  130544 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qrrepk.1y0r7o63mtidua42 --discovery-token-ca-cert-hash sha256:a750c130b3df91ed6d57229f5a5d5a2ee0acd56a757f499599f368bc07dbf207 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-064080-m02 --control-plane --apiserver-advertise-address=192.168.39.104 --apiserver-bind-port=8443": (23.058250637s)
	I0617 11:01:44.891872  130544 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0617 11:01:45.378267  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-064080-m02 minikube.k8s.io/updated_at=2024_06_17T11_01_45_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6 minikube.k8s.io/name=ha-064080 minikube.k8s.io/primary=false
	I0617 11:01:45.495107  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-064080-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0617 11:01:45.642232  130544 start.go:318] duration metric: took 23.972173706s to joinCluster
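For reference, a minimal Go sketch of the join sequence recorded above, reduced to plain commands run locally with os/exec; the real flow drives these commands over SSH via ssh_runner. The token, discovery hash and label value are placeholders or values copied from this log, not anything to reuse.

```go
// Sketch, not minikube's implementation: run the kubeadm control-plane join
// steps seen in the log. Assumes root and that kubeadm/kubectl are on PATH.
package main

import (
	"log"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
	}
	log.Printf("%s", out)
}

func main() {
	// 1. On an existing control plane: print a join command with a fresh token.
	run("kubeadm", "token", "create", "--print-join-command", "--ttl=0")

	// 2. On the new node: join as an additional control-plane member.
	run("kubeadm", "join", "control-plane.minikube.internal:8443",
		"--token", "<token>",
		"--discovery-token-ca-cert-hash", "sha256:<hash>",
		"--control-plane",
		"--apiserver-advertise-address", "192.168.39.104",
		"--apiserver-bind-port", "8443")

	// 3. Enable and start the kubelet so the node registers.
	run("systemctl", "daemon-reload")
	run("systemctl", "enable", "--now", "kubelet")

	// 4. Label the node and drop the control-plane NoSchedule taint,
	//    mirroring the kubectl calls in the log above.
	run("kubectl", "label", "--overwrite", "nodes", "ha-064080-m02",
		"minikube.k8s.io/primary=false")
	run("kubectl", "taint", "nodes", "ha-064080-m02",
		"node-role.kubernetes.io/control-plane:NoSchedule-")
}
```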
	I0617 11:01:45.642385  130544 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 11:01:45.643999  130544 out.go:177] * Verifying Kubernetes components...
	I0617 11:01:45.642586  130544 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:01:45.645299  130544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:01:45.881564  130544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 11:01:45.948904  130544 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 11:01:45.949173  130544 kapi.go:59] client config for ha-064080: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/client.crt", KeyFile:"/home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/client.key", CAFile:"/home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfaf80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0617 11:01:45.949252  130544 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.134:8443
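The override above swaps the stale HA VIP for the first control plane's direct address. A hedged client-go sketch of the same adjustment, assuming the kubeconfig path from this run and client-go on the module path:

```go
// Sketch: point a client-go rest.Config at a specific API server instead of
// the HA VIP, as the "Overriding stale ClientConfig host" line does.
package main

import (
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19084-112967/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	// The kubeconfig still points at the VIP (192.168.39.254); talk to the
	// first control-plane node directly instead.
	cfg.Host = "https://192.168.39.134:8443"

	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		log.Fatal(err)
	}
	log.Printf("client will talk to %s directly", cfg.Host)
}
```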
	I0617 11:01:45.949520  130544 node_ready.go:35] waiting up to 6m0s for node "ha-064080-m02" to be "Ready" ...
	I0617 11:01:45.949628  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:45.949640  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:45.949652  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:45.949660  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:45.970876  130544 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0617 11:01:46.449797  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:46.449828  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:46.449840  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:46.449848  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:46.453997  130544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0617 11:01:46.950198  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:46.950219  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:46.950227  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:46.950231  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:46.953940  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:47.449888  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:47.449917  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:47.449929  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:47.449935  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:47.465655  130544 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0617 11:01:47.949969  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:47.949988  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:47.949996  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:47.950000  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:47.953535  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:47.954377  130544 node_ready.go:53] node "ha-064080-m02" has status "Ready":"False"
	I0617 11:01:48.450435  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:48.450458  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:48.450466  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:48.450470  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:48.454062  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:48.949824  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:48.949850  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:48.949860  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:48.949865  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:48.953121  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:49.450048  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:49.450072  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:49.450080  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:49.450085  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:49.453220  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:49.453993  130544 node_ready.go:49] node "ha-064080-m02" has status "Ready":"True"
	I0617 11:01:49.454018  130544 node_ready.go:38] duration metric: took 3.504478677s for node "ha-064080-m02" to be "Ready" ...
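The GET loop above is a readiness poll against the Node object. A rough client-go equivalent, assuming the kubeconfig path from this run (the log itself uses raw round-trippers rather than this helper):

```go
// Sketch: poll a node until its NodeReady condition is True, roughly every
// 500ms like the GETs in the log above.
package main

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19084-112967/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-064080-m02", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				log.Println("node is Ready")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```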
	I0617 11:01:49.454028  130544 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 11:01:49.454094  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods
	I0617 11:01:49.454106  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:49.454113  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:49.454116  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:49.458466  130544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0617 11:01:49.465298  130544 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xbhnm" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:49.465386  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xbhnm
	I0617 11:01:49.465391  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:49.465399  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:49.465408  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:49.468371  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:49.469183  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:01:49.469199  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:49.469206  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:49.469210  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:49.471620  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:49.472293  130544 pod_ready.go:92] pod "coredns-7db6d8ff4d-xbhnm" in "kube-system" namespace has status "Ready":"True"
	I0617 11:01:49.472319  130544 pod_ready.go:81] duration metric: took 6.995145ms for pod "coredns-7db6d8ff4d-xbhnm" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:49.472332  130544 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zv99k" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:49.472402  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zv99k
	I0617 11:01:49.472414  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:49.472423  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:49.472429  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:49.474934  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:49.475790  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:01:49.475816  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:49.475823  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:49.475827  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:49.478667  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:49.479182  130544 pod_ready.go:92] pod "coredns-7db6d8ff4d-zv99k" in "kube-system" namespace has status "Ready":"True"
	I0617 11:01:49.479195  130544 pod_ready.go:81] duration metric: took 6.852553ms for pod "coredns-7db6d8ff4d-zv99k" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:49.479203  130544 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-064080" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:49.479273  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080
	I0617 11:01:49.479278  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:49.479289  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:49.479294  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:49.482883  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:49.483489  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:01:49.483507  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:49.483518  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:49.483525  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:49.486394  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:49.487253  130544 pod_ready.go:92] pod "etcd-ha-064080" in "kube-system" namespace has status "Ready":"True"
	I0617 11:01:49.487268  130544 pod_ready.go:81] duration metric: took 8.0594ms for pod "etcd-ha-064080" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:49.487276  130544 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-064080-m02" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:49.487321  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m02
	I0617 11:01:49.487328  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:49.487335  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:49.487344  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:49.489651  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:49.490135  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:49.490148  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:49.490155  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:49.490160  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:49.492800  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:49.987630  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m02
	I0617 11:01:49.987655  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:49.987663  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:49.987668  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:49.991042  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:49.991658  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:49.991674  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:49.991721  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:49.991730  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:49.994341  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:50.487857  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m02
	I0617 11:01:50.487885  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:50.487897  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:50.487901  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:50.491419  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:50.492197  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:50.492216  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:50.492224  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:50.492230  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:50.494940  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:50.988424  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m02
	I0617 11:01:50.988448  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:50.988456  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:50.988461  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:50.992061  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:50.992789  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:50.992806  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:50.992814  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:50.992821  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:50.995445  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:51.488462  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m02
	I0617 11:01:51.488486  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:51.488494  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:51.488498  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:51.492089  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:51.492686  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:51.492704  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:51.492715  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:51.492720  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:51.495342  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:51.495918  130544 pod_ready.go:102] pod "etcd-ha-064080-m02" in "kube-system" namespace has status "Ready":"False"
	I0617 11:01:51.987882  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m02
	I0617 11:01:51.987907  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:51.987916  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:51.987920  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:51.991820  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:51.992597  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:51.992617  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:51.992628  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:51.992635  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:51.995407  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:52.487478  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m02
	I0617 11:01:52.487502  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:52.487510  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:52.487515  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:52.491082  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:52.491647  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:52.491664  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:52.491674  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:52.491681  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:52.494596  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:52.988485  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m02
	I0617 11:01:52.988509  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:52.988516  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:52.988519  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:52.991880  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:52.992521  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:52.992539  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:52.992547  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:52.992551  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:52.995561  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:53.487783  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m02
	I0617 11:01:53.487932  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:53.487961  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:53.487974  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:53.495150  130544 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0617 11:01:53.495895  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:53.495915  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:53.495926  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:53.495931  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:53.498410  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:53.499001  130544 pod_ready.go:102] pod "etcd-ha-064080-m02" in "kube-system" namespace has status "Ready":"False"
	I0617 11:01:53.988489  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m02
	I0617 11:01:53.988514  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:53.988522  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:53.988525  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:53.991851  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:53.992680  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:53.992695  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:53.992702  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:53.992705  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:53.995550  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:54.487559  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m02
	I0617 11:01:54.487588  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:54.487600  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:54.487606  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:54.491555  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:54.492307  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:54.492326  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:54.492338  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:54.492342  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:54.495247  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:54.988333  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m02
	I0617 11:01:54.988356  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:54.988364  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:54.988368  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:54.992300  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:54.992997  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:54.993014  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:54.993023  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:54.993028  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:54.996050  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:55.488092  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m02
	I0617 11:01:55.488123  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:55.488134  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:55.488141  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:55.492093  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:55.492821  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:55.492836  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:55.492842  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:55.492846  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:55.495405  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:55.988039  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m02
	I0617 11:01:55.988063  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:55.988071  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:55.988079  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:55.991389  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:55.992039  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:55.992067  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:55.992074  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:55.992078  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:55.994885  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:55.995526  130544 pod_ready.go:102] pod "etcd-ha-064080-m02" in "kube-system" namespace has status "Ready":"False"
	I0617 11:01:56.487639  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m02
	I0617 11:01:56.487665  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:56.487673  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:56.487677  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:56.491096  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:56.491968  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:56.491983  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:56.491990  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:56.491995  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:56.494815  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:56.988277  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m02
	I0617 11:01:56.988303  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:56.988314  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:56.988319  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:56.991887  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:56.992476  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:56.992497  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:56.992505  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:56.992509  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:56.995294  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:56.995933  130544 pod_ready.go:92] pod "etcd-ha-064080-m02" in "kube-system" namespace has status "Ready":"True"
	I0617 11:01:56.995955  130544 pod_ready.go:81] duration metric: took 7.508673118s for pod "etcd-ha-064080-m02" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:56.995969  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-064080" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:56.996021  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-064080
	I0617 11:01:56.996029  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:56.996036  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:56.996039  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:56.998638  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:56.999493  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:01:56.999511  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:56.999522  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:56.999528  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:57.002100  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:57.002672  130544 pod_ready.go:92] pod "kube-apiserver-ha-064080" in "kube-system" namespace has status "Ready":"True"
	I0617 11:01:57.002693  130544 pod_ready.go:81] duration metric: took 6.717759ms for pod "kube-apiserver-ha-064080" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:57.002702  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-064080-m02" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:57.002758  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-064080-m02
	I0617 11:01:57.002765  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:57.002774  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:57.002778  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:57.005183  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:57.005952  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:57.005968  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:57.005977  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:57.005982  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:57.008293  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:57.503432  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-064080-m02
	I0617 11:01:57.503467  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:57.503479  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:57.503488  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:57.506159  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:57.506738  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:57.506751  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:57.506758  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:57.506761  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:57.509582  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:57.510181  130544 pod_ready.go:92] pod "kube-apiserver-ha-064080-m02" in "kube-system" namespace has status "Ready":"True"
	I0617 11:01:57.510198  130544 pod_ready.go:81] duration metric: took 507.48767ms for pod "kube-apiserver-ha-064080-m02" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:57.510207  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-064080" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:57.510270  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-064080
	I0617 11:01:57.510278  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:57.510285  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:57.510289  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:57.512876  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:57.513789  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:01:57.513803  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:57.513811  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:57.513816  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:57.516015  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:57.516510  130544 pod_ready.go:92] pod "kube-controller-manager-ha-064080" in "kube-system" namespace has status "Ready":"True"
	I0617 11:01:57.516523  130544 pod_ready.go:81] duration metric: took 6.310329ms for pod "kube-controller-manager-ha-064080" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:57.516531  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-064080-m02" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:57.516588  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-064080-m02
	I0617 11:01:57.516596  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:57.516603  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:57.516607  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:57.519029  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:57.519866  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:57.519882  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:57.519888  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:57.519893  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:57.522012  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:57.522722  130544 pod_ready.go:92] pod "kube-controller-manager-ha-064080-m02" in "kube-system" namespace has status "Ready":"True"
	I0617 11:01:57.522737  130544 pod_ready.go:81] duration metric: took 6.199889ms for pod "kube-controller-manager-ha-064080-m02" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:57.522745  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dd48x" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:57.650378  130544 request.go:629] Waited for 127.57795ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dd48x
	I0617 11:01:57.650468  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dd48x
	I0617 11:01:57.650481  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:57.650492  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:57.650501  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:57.653664  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:57.850755  130544 request.go:629] Waited for 196.379696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:01:57.850850  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:01:57.850858  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:57.850868  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:57.850876  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:57.854153  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:57.854713  130544 pod_ready.go:92] pod "kube-proxy-dd48x" in "kube-system" namespace has status "Ready":"True"
	I0617 11:01:57.854737  130544 pod_ready.go:81] duration metric: took 331.985119ms for pod "kube-proxy-dd48x" in "kube-system" namespace to be "Ready" ...
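The "client-side throttling" waits above come from client-go's built-in rate limiter; the rest.Config dump earlier shows QPS:0 Burst:0, i.e. library defaults. A hedged sketch of raising those limits on a rest.Config, if the bursts of sequential GETs were ever a concern (values are illustrative, not what the test uses):

```go
// Sketch: raise client-go's client-side rate limits before building a clientset.
package main

import (
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19084-112967/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cfg.QPS = 50    // allowed steady-state requests per second
	cfg.Burst = 100 // short bursts above QPS before throttling kicks in

	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		log.Fatal(err)
	}
	log.Printf("client configured with QPS=%v Burst=%v", cfg.QPS, cfg.Burst)
}
```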
	I0617 11:01:57.854751  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l55dg" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:58.050889  130544 request.go:629] Waited for 196.050442ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l55dg
	I0617 11:01:58.050991  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l55dg
	I0617 11:01:58.051002  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:58.051009  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:58.051016  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:58.054425  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:58.250603  130544 request.go:629] Waited for 195.380006ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:58.250663  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:58.250668  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:58.250675  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:58.250679  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:58.254094  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:58.256187  130544 pod_ready.go:92] pod "kube-proxy-l55dg" in "kube-system" namespace has status "Ready":"True"
	I0617 11:01:58.256218  130544 pod_ready.go:81] duration metric: took 401.459211ms for pod "kube-proxy-l55dg" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:58.256233  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-064080" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:58.450176  130544 request.go:629] Waited for 193.855201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-064080
	I0617 11:01:58.450258  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-064080
	I0617 11:01:58.450263  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:58.450271  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:58.450278  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:58.453333  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:58.650293  130544 request.go:629] Waited for 196.2939ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:01:58.650354  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:01:58.650359  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:58.650368  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:58.650374  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:58.653268  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:58.653653  130544 pod_ready.go:92] pod "kube-scheduler-ha-064080" in "kube-system" namespace has status "Ready":"True"
	I0617 11:01:58.653671  130544 pod_ready.go:81] duration metric: took 397.430801ms for pod "kube-scheduler-ha-064080" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:58.653683  130544 pod_ready.go:38] duration metric: took 9.199642443s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 11:01:58.653706  130544 api_server.go:52] waiting for apiserver process to appear ...
	I0617 11:01:58.653760  130544 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 11:01:58.670148  130544 api_server.go:72] duration metric: took 13.027718259s to wait for apiserver process to appear ...
	I0617 11:01:58.670177  130544 api_server.go:88] waiting for apiserver healthz status ...
	I0617 11:01:58.670201  130544 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0617 11:01:58.674870  130544 api_server.go:279] https://192.168.39.134:8443/healthz returned 200:
	ok
	I0617 11:01:58.674927  130544 round_trippers.go:463] GET https://192.168.39.134:8443/version
	I0617 11:01:58.674934  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:58.674941  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:58.674947  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:58.676006  130544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0617 11:01:58.676183  130544 api_server.go:141] control plane version: v1.30.1
	I0617 11:01:58.676202  130544 api_server.go:131] duration metric: took 6.019302ms to wait for apiserver health ...
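The healthz and version probes can also be issued through client-go rather than a raw round-tripper; a minimal sketch under the same kubeconfig assumption:

```go
// Sketch: check /healthz (expected body "ok") and read the control-plane
// version (v1.30.1 in this run) via client-go's discovery client.
package main

import (
	"context"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19084-112967/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("healthz: %s", body)

	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("control plane version: %s", v.GitVersion)
}
```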
	I0617 11:01:58.676209  130544 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 11:01:58.850677  130544 request.go:629] Waited for 174.389993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods
	I0617 11:01:58.850742  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods
	I0617 11:01:58.850748  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:58.850755  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:58.850759  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:58.859894  130544 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0617 11:01:58.865322  130544 system_pods.go:59] 17 kube-system pods found
	I0617 11:01:58.865352  130544 system_pods.go:61] "coredns-7db6d8ff4d-xbhnm" [be37a6ec-2a49-4a56-b8a3-0da865edb05d] Running
	I0617 11:01:58.865357  130544 system_pods.go:61] "coredns-7db6d8ff4d-zv99k" [c2453fd4-894d-4212-bc48-1803e28ddba8] Running
	I0617 11:01:58.865361  130544 system_pods.go:61] "etcd-ha-064080" [f7a1e80e-8ebc-496b-8919-ebf99a8dd4b4] Running
	I0617 11:01:58.865364  130544 system_pods.go:61] "etcd-ha-064080-m02" [7de6c88f-a0b9-4fa3-b4aa-e964191aa4e5] Running
	I0617 11:01:58.865369  130544 system_pods.go:61] "kindnet-48mb7" [67422049-6637-4ca3-8bd1-2b47a265829d] Running
	I0617 11:01:58.865372  130544 system_pods.go:61] "kindnet-7cqp4" [f4671f39-ca07-4520-bc35-dce8e53318de] Running
	I0617 11:01:58.865375  130544 system_pods.go:61] "kube-apiserver-ha-064080" [fd326be1-2b78-41e8-9b57-138ffdadac71] Running
	I0617 11:01:58.865380  130544 system_pods.go:61] "kube-apiserver-ha-064080-m02" [74164e88-591d-490e-b4f9-1d8ea635cd2d] Running
	I0617 11:01:58.865383  130544 system_pods.go:61] "kube-controller-manager-ha-064080" [142a6154-fcbf-4d5d-a222-21d1b46720cb] Running
	I0617 11:01:58.865386  130544 system_pods.go:61] "kube-controller-manager-ha-064080-m02" [f096dd77-2f79-479e-bd06-b02c942200c6] Running
	I0617 11:01:58.865389  130544 system_pods.go:61] "kube-proxy-dd48x" [e1bd1d47-a8a5-47a5-820c-dd86f7ea7765] Running
	I0617 11:01:58.865392  130544 system_pods.go:61] "kube-proxy-l55dg" [1d827d6c-0432-4162-924c-d43b66b08c26] Running
	I0617 11:01:58.865395  130544 system_pods.go:61] "kube-scheduler-ha-064080" [f9e62714-7ec7-47a9-ab16-6afada18c6d8] Running
	I0617 11:01:58.865401  130544 system_pods.go:61] "kube-scheduler-ha-064080-m02" [ec804903-8a64-4a3d-8843-9d2ec21d7158] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0617 11:01:58.865407  130544 system_pods.go:61] "kube-vip-ha-064080" [6b9259b1-ee46-4493-ba10-dcb32da03f57] Running
	I0617 11:01:58.865412  130544 system_pods.go:61] "kube-vip-ha-064080-m02" [8a4ad095-97bf-4a1f-8579-9e6a564f24ed] Running
	I0617 11:01:58.865415  130544 system_pods.go:61] "storage-provisioner" [5646fca8-9ebc-47c1-b5ff-c87b0ed800d8] Running
	I0617 11:01:58.865421  130544 system_pods.go:74] duration metric: took 189.206494ms to wait for pod list to return data ...
	I0617 11:01:58.865430  130544 default_sa.go:34] waiting for default service account to be created ...
	I0617 11:01:59.050832  130544 request.go:629] Waited for 185.308848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/default/serviceaccounts
	I0617 11:01:59.050891  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/default/serviceaccounts
	I0617 11:01:59.050896  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:59.050904  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:59.050908  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:59.053737  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:59.053973  130544 default_sa.go:45] found service account: "default"
	I0617 11:01:59.053992  130544 default_sa.go:55] duration metric: took 188.556002ms for default service account to be created ...
	I0617 11:01:59.054000  130544 system_pods.go:116] waiting for k8s-apps to be running ...
	I0617 11:01:59.250510  130544 request.go:629] Waited for 196.416211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods
	I0617 11:01:59.250592  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods
	I0617 11:01:59.250601  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:59.250611  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:59.250617  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:59.255603  130544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0617 11:01:59.259996  130544 system_pods.go:86] 17 kube-system pods found
	I0617 11:01:59.260019  130544 system_pods.go:89] "coredns-7db6d8ff4d-xbhnm" [be37a6ec-2a49-4a56-b8a3-0da865edb05d] Running
	I0617 11:01:59.260025  130544 system_pods.go:89] "coredns-7db6d8ff4d-zv99k" [c2453fd4-894d-4212-bc48-1803e28ddba8] Running
	I0617 11:01:59.260029  130544 system_pods.go:89] "etcd-ha-064080" [f7a1e80e-8ebc-496b-8919-ebf99a8dd4b4] Running
	I0617 11:01:59.260033  130544 system_pods.go:89] "etcd-ha-064080-m02" [7de6c88f-a0b9-4fa3-b4aa-e964191aa4e5] Running
	I0617 11:01:59.260037  130544 system_pods.go:89] "kindnet-48mb7" [67422049-6637-4ca3-8bd1-2b47a265829d] Running
	I0617 11:01:59.260041  130544 system_pods.go:89] "kindnet-7cqp4" [f4671f39-ca07-4520-bc35-dce8e53318de] Running
	I0617 11:01:59.260045  130544 system_pods.go:89] "kube-apiserver-ha-064080" [fd326be1-2b78-41e8-9b57-138ffdadac71] Running
	I0617 11:01:59.260049  130544 system_pods.go:89] "kube-apiserver-ha-064080-m02" [74164e88-591d-490e-b4f9-1d8ea635cd2d] Running
	I0617 11:01:59.260053  130544 system_pods.go:89] "kube-controller-manager-ha-064080" [142a6154-fcbf-4d5d-a222-21d1b46720cb] Running
	I0617 11:01:59.260058  130544 system_pods.go:89] "kube-controller-manager-ha-064080-m02" [f096dd77-2f79-479e-bd06-b02c942200c6] Running
	I0617 11:01:59.260062  130544 system_pods.go:89] "kube-proxy-dd48x" [e1bd1d47-a8a5-47a5-820c-dd86f7ea7765] Running
	I0617 11:01:59.260067  130544 system_pods.go:89] "kube-proxy-l55dg" [1d827d6c-0432-4162-924c-d43b66b08c26] Running
	I0617 11:01:59.260074  130544 system_pods.go:89] "kube-scheduler-ha-064080" [f9e62714-7ec7-47a9-ab16-6afada18c6d8] Running
	I0617 11:01:59.260085  130544 system_pods.go:89] "kube-scheduler-ha-064080-m02" [ec804903-8a64-4a3d-8843-9d2ec21d7158] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0617 11:01:59.260092  130544 system_pods.go:89] "kube-vip-ha-064080" [6b9259b1-ee46-4493-ba10-dcb32da03f57] Running
	I0617 11:01:59.260098  130544 system_pods.go:89] "kube-vip-ha-064080-m02" [8a4ad095-97bf-4a1f-8579-9e6a564f24ed] Running
	I0617 11:01:59.260102  130544 system_pods.go:89] "storage-provisioner" [5646fca8-9ebc-47c1-b5ff-c87b0ed800d8] Running
	I0617 11:01:59.260109  130544 system_pods.go:126] duration metric: took 206.102612ms to wait for k8s-apps to be running ...
	I0617 11:01:59.260118  130544 system_svc.go:44] waiting for kubelet service to be running ....
	I0617 11:01:59.260160  130544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:01:59.276507  130544 system_svc.go:56] duration metric: took 16.376864ms WaitForService to wait for kubelet
	I0617 11:01:59.276538  130544 kubeadm.go:576] duration metric: took 13.634112303s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 11:01:59.276563  130544 node_conditions.go:102] verifying NodePressure condition ...
	I0617 11:01:59.450482  130544 request.go:629] Waited for 173.83515ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/nodes
	I0617 11:01:59.450553  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes
	I0617 11:01:59.450560  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:59.450567  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:59.450577  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:59.454233  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:59.454924  130544 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 11:01:59.454961  130544 node_conditions.go:123] node cpu capacity is 2
	I0617 11:01:59.454978  130544 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 11:01:59.454983  130544 node_conditions.go:123] node cpu capacity is 2
	I0617 11:01:59.454989  130544 node_conditions.go:105] duration metric: took 178.4202ms to run NodePressure ...
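The NodePressure step reads each node's capacity (ephemeral storage and CPU above). A small client-go sketch of the same lookup, again assuming the kubeconfig from this run:

```go
// Sketch: list nodes and print the capacity fields the log reports
// (ephemeral-storage and cpu for each of the two nodes).
package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19084-112967/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		log.Printf("%s: ephemeral-storage=%s cpu=%s", n.Name, storage.String(), cpu.String())
	}
}
```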
	I0617 11:01:59.455005  130544 start.go:240] waiting for startup goroutines ...
	I0617 11:01:59.455037  130544 start.go:254] writing updated cluster config ...
	I0617 11:01:59.457035  130544 out.go:177] 
	I0617 11:01:59.458351  130544 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:01:59.458437  130544 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/config.json ...
	I0617 11:01:59.459860  130544 out.go:177] * Starting "ha-064080-m03" control-plane node in "ha-064080" cluster
	I0617 11:01:59.460990  130544 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 11:01:59.461013  130544 cache.go:56] Caching tarball of preloaded images
	I0617 11:01:59.461124  130544 preload.go:173] Found /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0617 11:01:59.461137  130544 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0617 11:01:59.461218  130544 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/config.json ...
	I0617 11:01:59.461372  130544 start.go:360] acquireMachinesLock for ha-064080-m03: {Name:mk519b8956d160a9d2b042f25b899a5ee0efa72e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 11:01:59.461415  130544 start.go:364] duration metric: took 23.722µs to acquireMachinesLock for "ha-064080-m03"
	I0617 11:01:59.461432  130544 start.go:93] Provisioning new machine with config: &{Name:ha-064080 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-064080 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 11:01:59.461526  130544 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0617 11:01:59.462923  130544 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0617 11:01:59.463000  130544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:01:59.463046  130544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:01:59.478511  130544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35945
	I0617 11:01:59.478946  130544 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:01:59.479469  130544 main.go:141] libmachine: Using API Version  1
	I0617 11:01:59.479491  130544 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:01:59.479876  130544 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:01:59.480067  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetMachineName
	I0617 11:01:59.480259  130544 main.go:141] libmachine: (ha-064080-m03) Calling .DriverName
	I0617 11:01:59.480435  130544 start.go:159] libmachine.API.Create for "ha-064080" (driver="kvm2")
	I0617 11:01:59.480463  130544 client.go:168] LocalClient.Create starting
	I0617 11:01:59.480498  130544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem
	I0617 11:01:59.480535  130544 main.go:141] libmachine: Decoding PEM data...
	I0617 11:01:59.480556  130544 main.go:141] libmachine: Parsing certificate...
	I0617 11:01:59.480634  130544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem
	I0617 11:01:59.480660  130544 main.go:141] libmachine: Decoding PEM data...
	I0617 11:01:59.480677  130544 main.go:141] libmachine: Parsing certificate...
	I0617 11:01:59.480702  130544 main.go:141] libmachine: Running pre-create checks...
	I0617 11:01:59.480713  130544 main.go:141] libmachine: (ha-064080-m03) Calling .PreCreateCheck
	I0617 11:01:59.480887  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetConfigRaw
	I0617 11:01:59.481280  130544 main.go:141] libmachine: Creating machine...
	I0617 11:01:59.481293  130544 main.go:141] libmachine: (ha-064080-m03) Calling .Create
	I0617 11:01:59.481419  130544 main.go:141] libmachine: (ha-064080-m03) Creating KVM machine...
	I0617 11:01:59.482671  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found existing default KVM network
	I0617 11:01:59.482871  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found existing private KVM network mk-ha-064080
	I0617 11:01:59.482981  130544 main.go:141] libmachine: (ha-064080-m03) Setting up store path in /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03 ...
	I0617 11:01:59.483003  130544 main.go:141] libmachine: (ha-064080-m03) Building disk image from file:///home/jenkins/minikube-integration/19084-112967/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso
	I0617 11:01:59.483062  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:01:59.482961  131318 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 11:01:59.483160  130544 main.go:141] libmachine: (ha-064080-m03) Downloading /home/jenkins/minikube-integration/19084-112967/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19084-112967/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso...
	I0617 11:01:59.715675  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:01:59.715521  131318 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03/id_rsa...
	I0617 11:01:59.785679  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:01:59.785539  131318 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03/ha-064080-m03.rawdisk...
	I0617 11:01:59.785721  130544 main.go:141] libmachine: (ha-064080-m03) DBG | Writing magic tar header
	I0617 11:01:59.785769  130544 main.go:141] libmachine: (ha-064080-m03) DBG | Writing SSH key tar header
	I0617 11:01:59.785805  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:01:59.785696  131318 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03 ...
	I0617 11:01:59.785828  130544 main.go:141] libmachine: (ha-064080-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03
	I0617 11:01:59.785843  130544 main.go:141] libmachine: (ha-064080-m03) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03 (perms=drwx------)
	I0617 11:01:59.785851  130544 main.go:141] libmachine: (ha-064080-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube/machines
	I0617 11:01:59.785869  130544 main.go:141] libmachine: (ha-064080-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 11:01:59.785877  130544 main.go:141] libmachine: (ha-064080-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967
	I0617 11:01:59.785892  130544 main.go:141] libmachine: (ha-064080-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0617 11:01:59.785904  130544 main.go:141] libmachine: (ha-064080-m03) DBG | Checking permissions on dir: /home/jenkins
	I0617 11:01:59.785919  130544 main.go:141] libmachine: (ha-064080-m03) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube/machines (perms=drwxr-xr-x)
	I0617 11:01:59.785931  130544 main.go:141] libmachine: (ha-064080-m03) DBG | Checking permissions on dir: /home
	I0617 11:01:59.785946  130544 main.go:141] libmachine: (ha-064080-m03) DBG | Skipping /home - not owner
	I0617 11:01:59.785963  130544 main.go:141] libmachine: (ha-064080-m03) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube (perms=drwxr-xr-x)
	I0617 11:01:59.785975  130544 main.go:141] libmachine: (ha-064080-m03) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967 (perms=drwxrwxr-x)
	I0617 11:01:59.785991  130544 main.go:141] libmachine: (ha-064080-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0617 11:01:59.786003  130544 main.go:141] libmachine: (ha-064080-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0617 11:01:59.786019  130544 main.go:141] libmachine: (ha-064080-m03) Creating domain...
	I0617 11:01:59.786903  130544 main.go:141] libmachine: (ha-064080-m03) define libvirt domain using xml: 
	I0617 11:01:59.786925  130544 main.go:141] libmachine: (ha-064080-m03) <domain type='kvm'>
	I0617 11:01:59.786935  130544 main.go:141] libmachine: (ha-064080-m03)   <name>ha-064080-m03</name>
	I0617 11:01:59.786948  130544 main.go:141] libmachine: (ha-064080-m03)   <memory unit='MiB'>2200</memory>
	I0617 11:01:59.786956  130544 main.go:141] libmachine: (ha-064080-m03)   <vcpu>2</vcpu>
	I0617 11:01:59.786962  130544 main.go:141] libmachine: (ha-064080-m03)   <features>
	I0617 11:01:59.786971  130544 main.go:141] libmachine: (ha-064080-m03)     <acpi/>
	I0617 11:01:59.786976  130544 main.go:141] libmachine: (ha-064080-m03)     <apic/>
	I0617 11:01:59.786986  130544 main.go:141] libmachine: (ha-064080-m03)     <pae/>
	I0617 11:01:59.786992  130544 main.go:141] libmachine: (ha-064080-m03)     
	I0617 11:01:59.787003  130544 main.go:141] libmachine: (ha-064080-m03)   </features>
	I0617 11:01:59.787009  130544 main.go:141] libmachine: (ha-064080-m03)   <cpu mode='host-passthrough'>
	I0617 11:01:59.787016  130544 main.go:141] libmachine: (ha-064080-m03)   
	I0617 11:01:59.787023  130544 main.go:141] libmachine: (ha-064080-m03)   </cpu>
	I0617 11:01:59.787053  130544 main.go:141] libmachine: (ha-064080-m03)   <os>
	I0617 11:01:59.787083  130544 main.go:141] libmachine: (ha-064080-m03)     <type>hvm</type>
	I0617 11:01:59.787093  130544 main.go:141] libmachine: (ha-064080-m03)     <boot dev='cdrom'/>
	I0617 11:01:59.787107  130544 main.go:141] libmachine: (ha-064080-m03)     <boot dev='hd'/>
	I0617 11:01:59.787118  130544 main.go:141] libmachine: (ha-064080-m03)     <bootmenu enable='no'/>
	I0617 11:01:59.787125  130544 main.go:141] libmachine: (ha-064080-m03)   </os>
	I0617 11:01:59.787132  130544 main.go:141] libmachine: (ha-064080-m03)   <devices>
	I0617 11:01:59.787138  130544 main.go:141] libmachine: (ha-064080-m03)     <disk type='file' device='cdrom'>
	I0617 11:01:59.787147  130544 main.go:141] libmachine: (ha-064080-m03)       <source file='/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03/boot2docker.iso'/>
	I0617 11:01:59.787152  130544 main.go:141] libmachine: (ha-064080-m03)       <target dev='hdc' bus='scsi'/>
	I0617 11:01:59.787161  130544 main.go:141] libmachine: (ha-064080-m03)       <readonly/>
	I0617 11:01:59.787165  130544 main.go:141] libmachine: (ha-064080-m03)     </disk>
	I0617 11:01:59.787176  130544 main.go:141] libmachine: (ha-064080-m03)     <disk type='file' device='disk'>
	I0617 11:01:59.787185  130544 main.go:141] libmachine: (ha-064080-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0617 11:01:59.787200  130544 main.go:141] libmachine: (ha-064080-m03)       <source file='/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03/ha-064080-m03.rawdisk'/>
	I0617 11:01:59.787212  130544 main.go:141] libmachine: (ha-064080-m03)       <target dev='hda' bus='virtio'/>
	I0617 11:01:59.787222  130544 main.go:141] libmachine: (ha-064080-m03)     </disk>
	I0617 11:01:59.787231  130544 main.go:141] libmachine: (ha-064080-m03)     <interface type='network'>
	I0617 11:01:59.787240  130544 main.go:141] libmachine: (ha-064080-m03)       <source network='mk-ha-064080'/>
	I0617 11:01:59.787254  130544 main.go:141] libmachine: (ha-064080-m03)       <model type='virtio'/>
	I0617 11:01:59.787273  130544 main.go:141] libmachine: (ha-064080-m03)     </interface>
	I0617 11:01:59.787286  130544 main.go:141] libmachine: (ha-064080-m03)     <interface type='network'>
	I0617 11:01:59.787297  130544 main.go:141] libmachine: (ha-064080-m03)       <source network='default'/>
	I0617 11:01:59.787306  130544 main.go:141] libmachine: (ha-064080-m03)       <model type='virtio'/>
	I0617 11:01:59.787316  130544 main.go:141] libmachine: (ha-064080-m03)     </interface>
	I0617 11:01:59.787336  130544 main.go:141] libmachine: (ha-064080-m03)     <serial type='pty'>
	I0617 11:01:59.787355  130544 main.go:141] libmachine: (ha-064080-m03)       <target port='0'/>
	I0617 11:01:59.787365  130544 main.go:141] libmachine: (ha-064080-m03)     </serial>
	I0617 11:01:59.787386  130544 main.go:141] libmachine: (ha-064080-m03)     <console type='pty'>
	I0617 11:01:59.787400  130544 main.go:141] libmachine: (ha-064080-m03)       <target type='serial' port='0'/>
	I0617 11:01:59.787410  130544 main.go:141] libmachine: (ha-064080-m03)     </console>
	I0617 11:01:59.787418  130544 main.go:141] libmachine: (ha-064080-m03)     <rng model='virtio'>
	I0617 11:01:59.787430  130544 main.go:141] libmachine: (ha-064080-m03)       <backend model='random'>/dev/random</backend>
	I0617 11:01:59.787477  130544 main.go:141] libmachine: (ha-064080-m03)     </rng>
	I0617 11:01:59.787501  130544 main.go:141] libmachine: (ha-064080-m03)     
	I0617 11:01:59.787515  130544 main.go:141] libmachine: (ha-064080-m03)     
	I0617 11:01:59.787526  130544 main.go:141] libmachine: (ha-064080-m03)   </devices>
	I0617 11:01:59.787539  130544 main.go:141] libmachine: (ha-064080-m03) </domain>
	I0617 11:01:59.787550  130544 main.go:141] libmachine: (ha-064080-m03) 
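Note: the <domain> definition logged above is generated by the kvm2 driver and handed to libvirt to define the VM. As a rough, hypothetical sketch (not the driver's actual code), such a definition can be rendered in Go with text/template; the struct fields and file paths below are illustrative placeholders.

package main

import (
	"os"
	"text/template"
)

// domainTmpl is a trimmed-down stand-in for the XML seen in the log above.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'><source file='{{.ISOPath}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
    <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.DiskPath}}'/><target dev='hda' bus='virtio'/></disk>
    <interface type='network'><source network='{{.PrivateNet}}'/><model type='virtio'/></interface>
    <interface type='network'><source network='default'/><model type='virtio'/></interface>
  </devices>
</domain>
`

type domainConfig struct {
	Name, ISOPath, DiskPath, PrivateNet string
	MemoryMiB, CPUs                     int
}

func main() {
	cfg := domainConfig{
		Name:       "ha-064080-m03",
		MemoryMiB:  2200,
		CPUs:       2,
		ISOPath:    "/path/to/boot2docker.iso",       // placeholder
		DiskPath:   "/path/to/ha-064080-m03.rawdisk", // placeholder
		PrivateNet: "mk-ha-064080",
	}
	// Render the definition; the driver would pass the resulting XML to libvirt.
	tmpl := template.Must(template.New("domain").Parse(domainTmpl))
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}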
	I0617 11:01:59.793962  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:9d:11:91 in network default
	I0617 11:01:59.794641  130544 main.go:141] libmachine: (ha-064080-m03) Ensuring networks are active...
	I0617 11:01:59.794665  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:01:59.795430  130544 main.go:141] libmachine: (ha-064080-m03) Ensuring network default is active
	I0617 11:01:59.795789  130544 main.go:141] libmachine: (ha-064080-m03) Ensuring network mk-ha-064080 is active
	I0617 11:01:59.796164  130544 main.go:141] libmachine: (ha-064080-m03) Getting domain xml...
	I0617 11:01:59.796910  130544 main.go:141] libmachine: (ha-064080-m03) Creating domain...
	I0617 11:02:01.039485  130544 main.go:141] libmachine: (ha-064080-m03) Waiting to get IP...
	I0617 11:02:01.040173  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:01.040567  130544 main.go:141] libmachine: (ha-064080-m03) DBG | unable to find current IP address of domain ha-064080-m03 in network mk-ha-064080
	I0617 11:02:01.040617  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:02:01.040560  131318 retry.go:31] will retry after 256.954057ms: waiting for machine to come up
	I0617 11:02:01.299313  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:01.299735  130544 main.go:141] libmachine: (ha-064080-m03) DBG | unable to find current IP address of domain ha-064080-m03 in network mk-ha-064080
	I0617 11:02:01.299760  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:02:01.299698  131318 retry.go:31] will retry after 349.087473ms: waiting for machine to come up
	I0617 11:02:01.650272  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:01.650691  130544 main.go:141] libmachine: (ha-064080-m03) DBG | unable to find current IP address of domain ha-064080-m03 in network mk-ha-064080
	I0617 11:02:01.650718  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:02:01.650648  131318 retry.go:31] will retry after 430.560067ms: waiting for machine to come up
	I0617 11:02:02.083211  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:02.083690  130544 main.go:141] libmachine: (ha-064080-m03) DBG | unable to find current IP address of domain ha-064080-m03 in network mk-ha-064080
	I0617 11:02:02.083728  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:02:02.083658  131318 retry.go:31] will retry after 607.889522ms: waiting for machine to come up
	I0617 11:02:02.693338  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:02.693773  130544 main.go:141] libmachine: (ha-064080-m03) DBG | unable to find current IP address of domain ha-064080-m03 in network mk-ha-064080
	I0617 11:02:02.693807  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:02:02.693723  131318 retry.go:31] will retry after 468.818335ms: waiting for machine to come up
	I0617 11:02:03.164451  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:03.164847  130544 main.go:141] libmachine: (ha-064080-m03) DBG | unable to find current IP address of domain ha-064080-m03 in network mk-ha-064080
	I0617 11:02:03.164876  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:02:03.164787  131318 retry.go:31] will retry after 935.496879ms: waiting for machine to come up
	I0617 11:02:04.101800  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:04.102171  130544 main.go:141] libmachine: (ha-064080-m03) DBG | unable to find current IP address of domain ha-064080-m03 in network mk-ha-064080
	I0617 11:02:04.102201  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:02:04.102117  131318 retry.go:31] will retry after 1.166024389s: waiting for machine to come up
	I0617 11:02:05.269896  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:05.270443  130544 main.go:141] libmachine: (ha-064080-m03) DBG | unable to find current IP address of domain ha-064080-m03 in network mk-ha-064080
	I0617 11:02:05.270472  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:02:05.270400  131318 retry.go:31] will retry after 1.125834158s: waiting for machine to come up
	I0617 11:02:06.397857  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:06.398432  130544 main.go:141] libmachine: (ha-064080-m03) DBG | unable to find current IP address of domain ha-064080-m03 in network mk-ha-064080
	I0617 11:02:06.398461  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:02:06.398384  131318 retry.go:31] will retry after 1.40014932s: waiting for machine to come up
	I0617 11:02:07.800662  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:07.801238  130544 main.go:141] libmachine: (ha-064080-m03) DBG | unable to find current IP address of domain ha-064080-m03 in network mk-ha-064080
	I0617 11:02:07.801265  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:02:07.801142  131318 retry.go:31] will retry after 2.098669841s: waiting for machine to come up
	I0617 11:02:09.901171  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:09.901676  130544 main.go:141] libmachine: (ha-064080-m03) DBG | unable to find current IP address of domain ha-064080-m03 in network mk-ha-064080
	I0617 11:02:09.901708  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:02:09.901627  131318 retry.go:31] will retry after 2.799457249s: waiting for machine to come up
	I0617 11:02:12.704433  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:12.704852  130544 main.go:141] libmachine: (ha-064080-m03) DBG | unable to find current IP address of domain ha-064080-m03 in network mk-ha-064080
	I0617 11:02:12.704873  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:02:12.704820  131318 retry.go:31] will retry after 2.829077131s: waiting for machine to come up
	I0617 11:02:15.535995  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:15.536390  130544 main.go:141] libmachine: (ha-064080-m03) DBG | unable to find current IP address of domain ha-064080-m03 in network mk-ha-064080
	I0617 11:02:15.536412  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:02:15.536359  131318 retry.go:31] will retry after 2.775553712s: waiting for machine to come up
	I0617 11:02:18.314893  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:18.315231  130544 main.go:141] libmachine: (ha-064080-m03) DBG | unable to find current IP address of domain ha-064080-m03 in network mk-ha-064080
	I0617 11:02:18.315260  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:02:18.315207  131318 retry.go:31] will retry after 5.321724574s: waiting for machine to come up
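Note: the repeated "will retry after ...: waiting for machine to come up" lines come from minikube's retry helper while it polls DHCP for the new VM's address. A simplified, hypothetical sketch of that pattern (a caller-supplied lookupIP function is assumed):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookupIP with a growing, jittered delay, mirroring the
// "will retry after ..." behaviour in the log. lookupIP is an assumed helper
// that returns "" until the DHCP lease appears.
func waitForIP(lookupIP func() string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip := lookupIP(); ip != "" {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	// Fake lookup that "finds" an IP after a few attempts, for demonstration only.
	attempts := 0
	ip, err := waitForIP(func() string {
		attempts++
		if attempts < 4 {
			return ""
		}
		return "192.168.39.168"
	}, 2*time.Minute)
	fmt.Println(ip, err)
}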
	I0617 11:02:23.641110  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:23.641531  130544 main.go:141] libmachine: (ha-064080-m03) Found IP for machine: 192.168.39.168
	I0617 11:02:23.641560  130544 main.go:141] libmachine: (ha-064080-m03) Reserving static IP address...
	I0617 11:02:23.641577  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has current primary IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:23.642007  130544 main.go:141] libmachine: (ha-064080-m03) DBG | unable to find host DHCP lease matching {name: "ha-064080-m03", mac: "52:54:00:97:31:82", ip: "192.168.39.168"} in network mk-ha-064080
	I0617 11:02:23.718332  130544 main.go:141] libmachine: (ha-064080-m03) DBG | Getting to WaitForSSH function...
	I0617 11:02:23.718366  130544 main.go:141] libmachine: (ha-064080-m03) Reserved static IP address: 192.168.39.168
	I0617 11:02:23.718417  130544 main.go:141] libmachine: (ha-064080-m03) Waiting for SSH to be available...
	I0617 11:02:23.720882  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:23.721268  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:minikube Clientid:01:52:54:00:97:31:82}
	I0617 11:02:23.721302  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:23.721524  130544 main.go:141] libmachine: (ha-064080-m03) DBG | Using SSH client type: external
	I0617 11:02:23.721555  130544 main.go:141] libmachine: (ha-064080-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03/id_rsa (-rw-------)
	I0617 11:02:23.721585  130544 main.go:141] libmachine: (ha-064080-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.168 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 11:02:23.721600  130544 main.go:141] libmachine: (ha-064080-m03) DBG | About to run SSH command:
	I0617 11:02:23.721618  130544 main.go:141] libmachine: (ha-064080-m03) DBG | exit 0
	I0617 11:02:23.843671  130544 main.go:141] libmachine: (ha-064080-m03) DBG | SSH cmd err, output: <nil>: 
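Note: the "Using SSH client type: external" block shows the flags used to probe the new VM with a remote "exit 0" until sshd answers. A hypothetical Go equivalent of that probe via os/exec, using a subset of the hardening flags seen in the log (key path below is a placeholder):

package main

import (
	"fmt"
	"os/exec"
)

// probeSSH runs `ssh ... docker@<addr> exit 0` with flags similar to the ones
// in the log, returning nil once the guest's sshd accepts the connection.
func probeSSH(addr, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + addr,
		"exit", "0",
	}
	return exec.Command("ssh", args...).Run()
}

func main() {
	// Address mirrors the log; the key path is a placeholder for illustration.
	err := probeSSH("192.168.39.168", "/path/to/machines/ha-064080-m03/id_rsa")
	fmt.Println("ssh probe:", err)
}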
	I0617 11:02:23.843974  130544 main.go:141] libmachine: (ha-064080-m03) KVM machine creation complete!
	I0617 11:02:23.844253  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetConfigRaw
	I0617 11:02:23.844765  130544 main.go:141] libmachine: (ha-064080-m03) Calling .DriverName
	I0617 11:02:23.844966  130544 main.go:141] libmachine: (ha-064080-m03) Calling .DriverName
	I0617 11:02:23.845164  130544 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0617 11:02:23.845179  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetState
	I0617 11:02:23.846418  130544 main.go:141] libmachine: Detecting operating system of created instance...
	I0617 11:02:23.846434  130544 main.go:141] libmachine: Waiting for SSH to be available...
	I0617 11:02:23.846442  130544 main.go:141] libmachine: Getting to WaitForSSH function...
	I0617 11:02:23.846451  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHHostname
	I0617 11:02:23.848936  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:23.849347  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:02:23.849373  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:23.849587  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHPort
	I0617 11:02:23.849800  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:02:23.849973  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:02:23.850131  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHUsername
	I0617 11:02:23.850290  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:02:23.850597  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0617 11:02:23.850616  130544 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0617 11:02:23.950947  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 11:02:23.950975  130544 main.go:141] libmachine: Detecting the provisioner...
	I0617 11:02:23.950983  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHHostname
	I0617 11:02:23.954086  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:23.954502  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:02:23.954532  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:23.954701  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHPort
	I0617 11:02:23.954917  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:02:23.955121  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:02:23.955279  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHUsername
	I0617 11:02:23.955439  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:02:23.955640  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0617 11:02:23.955653  130544 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0617 11:02:24.060089  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0617 11:02:24.060155  130544 main.go:141] libmachine: found compatible host: buildroot
	I0617 11:02:24.060169  130544 main.go:141] libmachine: Provisioning with buildroot...
	I0617 11:02:24.060183  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetMachineName
	I0617 11:02:24.060445  130544 buildroot.go:166] provisioning hostname "ha-064080-m03"
	I0617 11:02:24.060477  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetMachineName
	I0617 11:02:24.060699  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHHostname
	I0617 11:02:24.063129  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:24.063498  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:02:24.063519  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:24.063664  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHPort
	I0617 11:02:24.063868  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:02:24.064049  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:02:24.064234  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHUsername
	I0617 11:02:24.064423  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:02:24.064624  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0617 11:02:24.064637  130544 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-064080-m03 && echo "ha-064080-m03" | sudo tee /etc/hostname
	I0617 11:02:24.187321  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-064080-m03
	
	I0617 11:02:24.187346  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHHostname
	I0617 11:02:24.190117  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:24.190508  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:02:24.190530  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:24.190733  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHPort
	I0617 11:02:24.190979  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:02:24.191207  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:02:24.191385  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHUsername
	I0617 11:02:24.191589  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:02:24.191816  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0617 11:02:24.191849  130544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-064080-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-064080-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-064080-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 11:02:24.306947  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
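Note: the provisioner sets the hostname and patches /etc/hosts with the shell snippet shown above. A hypothetical helper that renders that snippet for a given hostname (string only; the real code then runs it over SSH):

package main

import "fmt"

// hostnameCmd returns the shell snippet seen in the log: set the hostname,
// then make sure a 127.0.1.1 entry for it exists in /etc/hosts.
func hostnameCmd(hostname string) string {
	return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() {
	fmt.Println(hostnameCmd("ha-064080-m03"))
}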
	I0617 11:02:24.306985  130544 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 11:02:24.307006  130544 buildroot.go:174] setting up certificates
	I0617 11:02:24.307024  130544 provision.go:84] configureAuth start
	I0617 11:02:24.307035  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetMachineName
	I0617 11:02:24.307388  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetIP
	I0617 11:02:24.310096  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:24.310550  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:02:24.310599  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:24.310881  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHHostname
	I0617 11:02:24.312970  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:24.313309  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:02:24.313334  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:24.313496  130544 provision.go:143] copyHostCerts
	I0617 11:02:24.313535  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 11:02:24.313575  130544 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 11:02:24.313587  130544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 11:02:24.313661  130544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 11:02:24.313757  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 11:02:24.313800  130544 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 11:02:24.313810  130544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 11:02:24.313852  130544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 11:02:24.313916  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 11:02:24.313942  130544 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 11:02:24.313951  130544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 11:02:24.313985  130544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 11:02:24.314053  130544 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.ha-064080-m03 san=[127.0.0.1 192.168.39.168 ha-064080-m03 localhost minikube]
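Note: "generating server cert ... san=[...]" above covers the per-node server certificate. A compact, hypothetical Go sketch that issues a certificate with the same SANs; it is self-signed here for brevity, whereas minikube signs the real one with the cluster CA key.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a key and a certificate carrying the SANs listed in the log.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-064080-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-064080-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.168")},
	}
	// Self-signed for this sketch: template and parent are the same certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}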
	I0617 11:02:24.765321  130544 provision.go:177] copyRemoteCerts
	I0617 11:02:24.765392  130544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 11:02:24.765426  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHHostname
	I0617 11:02:24.768433  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:24.768875  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:02:24.768901  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:24.769113  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHPort
	I0617 11:02:24.769297  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:02:24.769463  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHUsername
	I0617 11:02:24.769577  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03/id_rsa Username:docker}
	I0617 11:02:24.849664  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0617 11:02:24.849742  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 11:02:24.874547  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0617 11:02:24.874638  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0617 11:02:24.899270  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0617 11:02:24.899357  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0617 11:02:24.924418  130544 provision.go:87] duration metric: took 617.379218ms to configureAuth
	I0617 11:02:24.924452  130544 buildroot.go:189] setting minikube options for container-runtime
	I0617 11:02:24.924770  130544 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:02:24.924879  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHHostname
	I0617 11:02:24.927703  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:24.928104  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:02:24.928137  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:24.928224  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHPort
	I0617 11:02:24.928474  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:02:24.928634  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:02:24.928833  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHUsername
	I0617 11:02:24.929030  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:02:24.929224  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0617 11:02:24.929245  130544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 11:02:25.200352  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 11:02:25.200386  130544 main.go:141] libmachine: Checking connection to Docker...
	I0617 11:02:25.200395  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetURL
	I0617 11:02:25.201530  130544 main.go:141] libmachine: (ha-064080-m03) DBG | Using libvirt version 6000000
	I0617 11:02:25.203830  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:25.204218  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:02:25.204249  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:25.204438  130544 main.go:141] libmachine: Docker is up and running!
	I0617 11:02:25.204458  130544 main.go:141] libmachine: Reticulating splines...
	I0617 11:02:25.204467  130544 client.go:171] duration metric: took 25.723991787s to LocalClient.Create
	I0617 11:02:25.204499  130544 start.go:167] duration metric: took 25.724065148s to libmachine.API.Create "ha-064080"
	I0617 11:02:25.204513  130544 start.go:293] postStartSetup for "ha-064080-m03" (driver="kvm2")
	I0617 11:02:25.204544  130544 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 11:02:25.204569  130544 main.go:141] libmachine: (ha-064080-m03) Calling .DriverName
	I0617 11:02:25.204850  130544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 11:02:25.204877  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHHostname
	I0617 11:02:25.207140  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:25.207501  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:02:25.207528  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:25.207670  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHPort
	I0617 11:02:25.207859  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:02:25.208006  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHUsername
	I0617 11:02:25.208126  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03/id_rsa Username:docker}
	I0617 11:02:25.289996  130544 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 11:02:25.294386  130544 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 11:02:25.294413  130544 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 11:02:25.294476  130544 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 11:02:25.294542  130544 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 11:02:25.294552  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> /etc/ssl/certs/1201742.pem
	I0617 11:02:25.294632  130544 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 11:02:25.303876  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 11:02:25.328687  130544 start.go:296] duration metric: took 124.142586ms for postStartSetup
	I0617 11:02:25.328741  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetConfigRaw
	I0617 11:02:25.329349  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetIP
	I0617 11:02:25.333130  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:25.333562  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:02:25.333593  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:25.333849  130544 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/config.json ...
	I0617 11:02:25.334050  130544 start.go:128] duration metric: took 25.872513014s to createHost
	I0617 11:02:25.334087  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHHostname
	I0617 11:02:25.336268  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:25.336692  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:02:25.336720  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:25.336884  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHPort
	I0617 11:02:25.337070  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:02:25.337236  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:02:25.337374  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHUsername
	I0617 11:02:25.337535  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:02:25.337715  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0617 11:02:25.337726  130544 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0617 11:02:25.440100  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718622145.416396329
	
	I0617 11:02:25.440123  130544 fix.go:216] guest clock: 1718622145.416396329
	I0617 11:02:25.440130  130544 fix.go:229] Guest: 2024-06-17 11:02:25.416396329 +0000 UTC Remote: 2024-06-17 11:02:25.334063285 +0000 UTC m=+152.840982290 (delta=82.333044ms)
	I0617 11:02:25.440149  130544 fix.go:200] guest clock delta is within tolerance: 82.333044ms
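Note: the fix.go lines above compare the guest clock with the host clock and accept the roughly 82ms drift because it is inside tolerance. A trivial sketch of that check; the 2-second tolerance below is an assumption for illustration, not minikube's actual constant.

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock is close enough to the
// host clock, returning the absolute delta alongside the verdict.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(82 * time.Millisecond) // roughly the delta seen in the log
	delta, ok := withinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
}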
	I0617 11:02:25.440157  130544 start.go:83] releasing machines lock for "ha-064080-m03", held for 25.978732098s
	I0617 11:02:25.440178  130544 main.go:141] libmachine: (ha-064080-m03) Calling .DriverName
	I0617 11:02:25.440409  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetIP
	I0617 11:02:25.442842  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:25.443279  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:02:25.443309  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:25.445756  130544 out.go:177] * Found network options:
	I0617 11:02:25.447161  130544 out.go:177]   - NO_PROXY=192.168.39.134,192.168.39.104
	W0617 11:02:25.448497  130544 proxy.go:119] fail to check proxy env: Error ip not in block
	W0617 11:02:25.448529  130544 proxy.go:119] fail to check proxy env: Error ip not in block
	I0617 11:02:25.448549  130544 main.go:141] libmachine: (ha-064080-m03) Calling .DriverName
	I0617 11:02:25.449107  130544 main.go:141] libmachine: (ha-064080-m03) Calling .DriverName
	I0617 11:02:25.449284  130544 main.go:141] libmachine: (ha-064080-m03) Calling .DriverName
	I0617 11:02:25.449371  130544 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 11:02:25.449399  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHHostname
	W0617 11:02:25.449510  130544 proxy.go:119] fail to check proxy env: Error ip not in block
	W0617 11:02:25.449537  130544 proxy.go:119] fail to check proxy env: Error ip not in block
	I0617 11:02:25.449593  130544 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 11:02:25.449615  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHHostname
	I0617 11:02:25.452286  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:25.452380  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:25.452664  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:02:25.452717  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:25.452748  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:02:25.452764  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:25.452802  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHPort
	I0617 11:02:25.453034  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:02:25.453058  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHPort
	I0617 11:02:25.453238  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHUsername
	I0617 11:02:25.453318  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:02:25.453411  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03/id_rsa Username:docker}
	I0617 11:02:25.453494  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHUsername
	I0617 11:02:25.453676  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03/id_rsa Username:docker}
	I0617 11:02:25.689292  130544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 11:02:25.695882  130544 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 11:02:25.695961  130544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 11:02:25.712650  130544 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 11:02:25.712675  130544 start.go:494] detecting cgroup driver to use...
	I0617 11:02:25.712739  130544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 11:02:25.730961  130544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 11:02:25.746533  130544 docker.go:217] disabling cri-docker service (if available) ...
	I0617 11:02:25.746583  130544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 11:02:25.760480  130544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 11:02:25.774935  130544 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 11:02:25.906162  130544 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 11:02:26.058886  130544 docker.go:233] disabling docker service ...
	I0617 11:02:26.058962  130544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 11:02:26.073999  130544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 11:02:26.086932  130544 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 11:02:26.228781  130544 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 11:02:26.348538  130544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 11:02:26.364179  130544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 11:02:26.383382  130544 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0617 11:02:26.383443  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:02:26.394142  130544 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 11:02:26.394197  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:02:26.405621  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:02:26.416482  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:02:26.427107  130544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 11:02:26.437920  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:02:26.448561  130544 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:02:26.466142  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:02:26.476726  130544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 11:02:26.486167  130544 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 11:02:26.486215  130544 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 11:02:26.500347  130544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 11:02:26.510948  130544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:02:26.633016  130544 ssh_runner.go:195] Run: sudo systemctl restart crio
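
	The block above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon_cgroup, the unprivileged-port sysctl) and then restarts CRI-O. A minimal local Go sketch of the same command sequence, with the commands copied from the log; minikube actually runs them on the node over SSH via ssh_runner, which is omitted here:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same edits the log shows being applied to 02-crio.conf, then a restart.
		cmds := []string{
			`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo systemctl daemon-reload`,
			`sudo systemctl restart crio`,
		}
		for _, c := range cmds {
			// sh -c keeps the quoting identical to what the log records.
			if out, err := exec.Command("sh", "-c", c).CombinedOutput(); err != nil {
				fmt.Printf("%s failed: %v\n%s", c, err, out)
				return
			}
		}
	}
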
	I0617 11:02:26.786888  130544 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 11:02:26.786968  130544 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 11:02:26.791686  130544 start.go:562] Will wait 60s for crictl version
	I0617 11:02:26.791748  130544 ssh_runner.go:195] Run: which crictl
	I0617 11:02:26.795634  130544 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 11:02:26.837840  130544 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 11:02:26.837922  130544 ssh_runner.go:195] Run: crio --version
	I0617 11:02:26.869330  130544 ssh_runner.go:195] Run: crio --version
	I0617 11:02:26.902388  130544 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0617 11:02:26.903806  130544 out.go:177]   - env NO_PROXY=192.168.39.134
	I0617 11:02:26.905120  130544 out.go:177]   - env NO_PROXY=192.168.39.134,192.168.39.104
	I0617 11:02:26.906328  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetIP
	I0617 11:02:26.908830  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:26.909161  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:02:26.909192  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:26.909393  130544 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0617 11:02:26.913602  130544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
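
	The one-liner above swaps the host.minikube.internal entry in /etc/hosts. A rough Go equivalent of that grep/echo/cp pipeline, for illustration only; the 192.168.39.1 address is the one from this run, and writing /etc/hosts itself still needs root (minikube does it with sudo over SSH):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.39.1\thost.minikube.internal" // gateway IP from this run's log
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any stale host.minikube.internal mapping, keep everything else.
			if strings.HasSuffix(line, "\thost.minikube.internal") {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
		fmt.Println("wrote /tmp/hosts.new; copy it over /etc/hosts with sudo")
	}
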
	I0617 11:02:26.928465  130544 mustload.go:65] Loading cluster: ha-064080
	I0617 11:02:26.928699  130544 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:02:26.929046  130544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:02:26.929094  130544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:02:26.944875  130544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41773
	I0617 11:02:26.945277  130544 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:02:26.945774  130544 main.go:141] libmachine: Using API Version  1
	I0617 11:02:26.945806  130544 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:02:26.946180  130544 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:02:26.946406  130544 main.go:141] libmachine: (ha-064080) Calling .GetState
	I0617 11:02:26.947952  130544 host.go:66] Checking if "ha-064080" exists ...
	I0617 11:02:26.948308  130544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:02:26.948355  130544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:02:26.963836  130544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32913
	I0617 11:02:26.964205  130544 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:02:26.964637  130544 main.go:141] libmachine: Using API Version  1
	I0617 11:02:26.964655  130544 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:02:26.964999  130544 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:02:26.965184  130544 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:02:26.965336  130544 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080 for IP: 192.168.39.168
	I0617 11:02:26.965349  130544 certs.go:194] generating shared ca certs ...
	I0617 11:02:26.965367  130544 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:02:26.965509  130544 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 11:02:26.965569  130544 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 11:02:26.965583  130544 certs.go:256] generating profile certs ...
	I0617 11:02:26.965682  130544 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/client.key
	I0617 11:02:26.965713  130544 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key.5a42fcf3
	I0617 11:02:26.965734  130544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt.5a42fcf3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.134 192.168.39.104 192.168.39.168 192.168.39.254]
	I0617 11:02:27.346654  130544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt.5a42fcf3 ...
	I0617 11:02:27.346687  130544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt.5a42fcf3: {Name:mkd4c6893142164db1329d97d9dea3d2cfee3f2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:02:27.346863  130544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key.5a42fcf3 ...
	I0617 11:02:27.346877  130544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key.5a42fcf3: {Name:mk595a3aab8d45ce8720d08cb91288e4dc42db0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:02:27.346949  130544 certs.go:381] copying /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt.5a42fcf3 -> /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt
	I0617 11:02:27.347091  130544 certs.go:385] copying /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key.5a42fcf3 -> /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key
	I0617 11:02:27.347224  130544 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.key
	I0617 11:02:27.347242  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0617 11:02:27.347255  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0617 11:02:27.347268  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0617 11:02:27.347280  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0617 11:02:27.347291  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0617 11:02:27.347303  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0617 11:02:27.347315  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0617 11:02:27.347327  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0617 11:02:27.347371  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 11:02:27.347397  130544 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 11:02:27.347406  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 11:02:27.347427  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 11:02:27.347448  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 11:02:27.347486  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 11:02:27.347523  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 11:02:27.347547  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem -> /usr/share/ca-certificates/120174.pem
	I0617 11:02:27.347561  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> /usr/share/ca-certificates/1201742.pem
	I0617 11:02:27.347574  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:02:27.347605  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:02:27.350599  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:02:27.351006  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:02:27.351029  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:02:27.351232  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:02:27.351485  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:02:27.351658  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:02:27.351837  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:02:27.423711  130544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0617 11:02:27.429061  130544 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0617 11:02:27.441056  130544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0617 11:02:27.445587  130544 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0617 11:02:27.457976  130544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0617 11:02:27.462152  130544 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0617 11:02:27.473381  130544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0617 11:02:27.477655  130544 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0617 11:02:27.488205  130544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0617 11:02:27.492291  130544 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0617 11:02:27.503178  130544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0617 11:02:27.507954  130544 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0617 11:02:27.519116  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 11:02:27.545769  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 11:02:27.570587  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 11:02:27.593992  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 11:02:27.620181  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0617 11:02:27.644500  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0617 11:02:27.670181  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 11:02:27.693656  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0617 11:02:27.718743  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 11:02:27.743939  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 11:02:27.769241  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 11:02:27.793600  130544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0617 11:02:27.809999  130544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0617 11:02:27.826764  130544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0617 11:02:27.843367  130544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0617 11:02:27.861074  130544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0617 11:02:27.877824  130544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0617 11:02:27.894136  130544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0617 11:02:27.910223  130544 ssh_runner.go:195] Run: openssl version
	I0617 11:02:27.916197  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 11:02:27.926817  130544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:02:27.931271  130544 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:02:27.931334  130544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:02:27.937173  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 11:02:27.948023  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 11:02:27.958752  130544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 11:02:27.963195  130544 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 11:02:27.963240  130544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 11:02:27.969255  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
	I0617 11:02:27.981676  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 11:02:27.993230  130544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 11:02:27.998102  130544 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 11:02:27.998141  130544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 11:02:28.004192  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
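
	The openssl/ln pairs above maintain OpenSSL's hash-named symlinks (e.g. b5213941.0) under /etc/ssl/certs so the copied CA certificates are trusted by the node. A short Go sketch of that convention; the helper name hashLink and the direct symlink target are illustrative, not minikube code:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// OpenSSL looks up CAs under /etc/ssl/certs/<subject-hash>.0, so each installed
	// PEM gets a symlink named after the hash reported by `openssl x509 -hash -noout`.
	func hashLink(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		_ = os.Remove(link) // mimic ln -fs: replace an existing link
		return os.Symlink(pem, link)
	}

	func main() {
		if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println("hash link failed (needs root and the cert in place):", err)
		}
	}
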
	I0617 11:02:28.015790  130544 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 11:02:28.020007  130544 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0617 11:02:28.020072  130544 kubeadm.go:928] updating node {m03 192.168.39.168 8443 v1.30.1 crio true true} ...
	I0617 11:02:28.020165  130544 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-064080-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.168
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-064080 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 11:02:28.020193  130544 kube-vip.go:115] generating kube-vip config ...
	I0617 11:02:28.020225  130544 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0617 11:02:28.036731  130544 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0617 11:02:28.036788  130544 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
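
	The manifest above is the kube-vip static pod minikube drops into /etc/kubernetes/manifests so the 192.168.39.254 control-plane VIP on port 8443 is held and load-balanced across the control-plane nodes. A reduced Go text/template sketch of rendering such a manifest from a few parameters; the template here is trimmed for brevity and is not minikube's actual template:

	package main

	import (
		"os"
		"text/template"
	)

	const manifest = `apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - name: kube-vip
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    args: ["manager"]
	    env:
	    - name: address
	      value: "{{ .VIP }}"
	    - name: port
	      value: "{{ .Port }}"
	    - name: vip_interface
	      value: {{ .Interface }}
	    - name: cp_enable
	      value: "true"
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: /etc/kubernetes/admin.conf
	    name: kubeconfig
	`

	func main() {
		t := template.Must(template.New("kube-vip").Parse(manifest))
		// Values matching this run: VIP 192.168.39.254 on eth0, API port 8443.
		_ = t.Execute(os.Stdout, struct {
			VIP, Port, Interface string
		}{"192.168.39.254", "8443", "eth0"})
	}
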
	I0617 11:02:28.036854  130544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0617 11:02:28.046754  130544 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0617 11:02:28.046811  130544 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0617 11:02:28.056894  130544 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0617 11:02:28.056915  130544 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0617 11:02:28.056924  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0617 11:02:28.056927  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0617 11:02:28.056938  130544 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0617 11:02:28.056993  130544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:02:28.057015  130544 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0617 11:02:28.057015  130544 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0617 11:02:28.074561  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0617 11:02:28.074594  130544 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0617 11:02:28.074617  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0617 11:02:28.074643  130544 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0617 11:02:28.074678  130544 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0617 11:02:28.074675  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0617 11:02:28.097581  130544 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0617 11:02:28.097617  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
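
	The stat/scp exchanges above are an existence check per binary followed by a transfer only when the check fails. A simplified Go sketch of that pattern, substituting a direct download from dl.k8s.io for minikube's scp-from-local-cache path; the destination directory and version are the ones from this run:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os"
	)

	// Copy a kubeadm/kubelet/kubectl binary to dest only when it is not already there.
	func ensureBinary(name, dest string) error {
		if _, err := os.Stat(dest); err == nil {
			return nil // already present, nothing to do
		}
		url := "https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/" + name
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("GET %s: %s", url, resp.Status)
		}
		f, err := os.OpenFile(dest, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0755)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(f, resp.Body)
		return err
	}

	func main() {
		for _, b := range []string{"kubeadm", "kubelet", "kubectl"} {
			if err := ensureBinary(b, "/var/lib/minikube/binaries/v1.30.1/"+b); err != nil {
				fmt.Println(b, "failed:", err)
			}
		}
	}
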
	I0617 11:02:28.975329  130544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0617 11:02:28.984902  130544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0617 11:02:29.002064  130544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 11:02:29.020433  130544 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0617 11:02:29.038500  130544 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0617 11:02:29.042765  130544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 11:02:29.056272  130544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:02:29.170338  130544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 11:02:29.187243  130544 host.go:66] Checking if "ha-064080" exists ...
	I0617 11:02:29.187679  130544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:02:29.187726  130544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:02:29.203199  130544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39109
	I0617 11:02:29.203699  130544 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:02:29.204218  130544 main.go:141] libmachine: Using API Version  1
	I0617 11:02:29.204240  130544 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:02:29.204546  130544 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:02:29.204729  130544 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:02:29.204905  130544 start.go:316] joinCluster: &{Name:ha-064080 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-064080 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:02:29.205076  130544 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0617 11:02:29.205101  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:02:29.208123  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:02:29.208613  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:02:29.208647  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:02:29.208827  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:02:29.209010  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:02:29.209216  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:02:29.209368  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:02:29.376289  130544 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 11:02:29.376346  130544 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vqckf0.7wgygn8yyryvkydn --discovery-token-ca-cert-hash sha256:a750c130b3df91ed6d57229f5a5d5a2ee0acd56a757f499599f368bc07dbf207 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-064080-m03 --control-plane --apiserver-advertise-address=192.168.39.168 --apiserver-bind-port=8443"
	I0617 11:02:53.758250  130544 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vqckf0.7wgygn8yyryvkydn --discovery-token-ca-cert-hash sha256:a750c130b3df91ed6d57229f5a5d5a2ee0acd56a757f499599f368bc07dbf207 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-064080-m03 --control-plane --apiserver-advertise-address=192.168.39.168 --apiserver-bind-port=8443": (24.381868631s)
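
	For reference, the control-plane join invocation above is assembled from a bootstrap token, the discovery CA-cert hash, the node name and the advertise address. A small Go sketch of that composition, using the values kubeadm printed for this run; this is illustrative, not minikube's command builder:

	package main

	import "fmt"

	func joinCmd(token, caHash, nodeName, advertiseIP string) string {
		return fmt.Sprintf(
			"kubeadm join control-plane.minikube.internal:8443 --token %s "+
				"--discovery-token-ca-cert-hash %s --ignore-preflight-errors=all "+
				"--cri-socket unix:///var/run/crio/crio.sock --node-name=%s "+
				"--control-plane --apiserver-advertise-address=%s --apiserver-bind-port=8443",
			token, caHash, nodeName, advertiseIP)
	}

	func main() {
		fmt.Println(joinCmd(
			"vqckf0.7wgygn8yyryvkydn",
			"sha256:a750c130b3df91ed6d57229f5a5d5a2ee0acd56a757f499599f368bc07dbf207",
			"ha-064080-m03",
			"192.168.39.168"))
	}
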
	I0617 11:02:53.758292  130544 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0617 11:02:54.363546  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-064080-m03 minikube.k8s.io/updated_at=2024_06_17T11_02_54_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6 minikube.k8s.io/name=ha-064080 minikube.k8s.io/primary=false
	I0617 11:02:54.502092  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-064080-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0617 11:02:54.621243  130544 start.go:318] duration metric: took 25.416333651s to joinCluster
	I0617 11:02:54.621344  130544 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 11:02:54.623072  130544 out.go:177] * Verifying Kubernetes components...
	I0617 11:02:54.621808  130544 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:02:54.624356  130544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:02:54.928732  130544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 11:02:54.976589  130544 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 11:02:54.976821  130544 kapi.go:59] client config for ha-064080: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/client.crt", KeyFile:"/home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/client.key", CAFile:"/home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfaf80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0617 11:02:54.976882  130544 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.134:8443
	I0617 11:02:54.977098  130544 node_ready.go:35] waiting up to 6m0s for node "ha-064080-m03" to be "Ready" ...
	I0617 11:02:54.977171  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:02:54.977177  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:54.977184  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:54.977190  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:54.980888  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:02:55.477835  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:02:55.477866  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:55.477878  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:55.477883  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:55.481461  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:02:55.977724  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:02:55.977748  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:55.977760  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:55.977764  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:55.983343  130544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0617 11:02:56.477632  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:02:56.477658  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:56.477668  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:56.477671  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:56.483165  130544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0617 11:02:56.977402  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:02:56.977423  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:56.977435  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:56.977439  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:56.981146  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:02:56.981717  130544 node_ready.go:53] node "ha-064080-m03" has status "Ready":"False"
	I0617 11:02:57.478133  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:02:57.478160  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:57.478169  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:57.478174  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:57.481394  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:02:57.977332  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:02:57.977357  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:57.977368  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:57.977373  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:57.980538  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:02:57.981163  130544 node_ready.go:49] node "ha-064080-m03" has status "Ready":"True"
	I0617 11:02:57.981181  130544 node_ready.go:38] duration metric: took 3.004068832s for node "ha-064080-m03" to be "Ready" ...
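
	The GET /api/v1/nodes/ha-064080-m03 round trips above (and the pod checks that follow) are a poll loop: fetch the object every few hundred milliseconds until its Ready condition reports True, up to the 6m0s budget. A hedged client-go sketch of the node half of that loop; the kubeconfig path and node name are the ones from this run and would differ elsewhere:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19084-112967/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-064080-m03", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					// Ready=True is the condition the node_ready.go messages report.
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for node Ready")
	}
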
	I0617 11:02:57.981189  130544 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 11:02:57.981251  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods
	I0617 11:02:57.981260  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:57.981268  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:57.981273  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:57.988008  130544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0617 11:02:57.994247  130544 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xbhnm" in "kube-system" namespace to be "Ready" ...
	I0617 11:02:57.994341  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xbhnm
	I0617 11:02:57.994349  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:57.994357  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:57.994361  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:57.997345  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:02:57.997924  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:02:57.997939  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:57.997946  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:57.997950  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:58.000731  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:02:58.001299  130544 pod_ready.go:92] pod "coredns-7db6d8ff4d-xbhnm" in "kube-system" namespace has status "Ready":"True"
	I0617 11:02:58.001317  130544 pod_ready.go:81] duration metric: took 7.043245ms for pod "coredns-7db6d8ff4d-xbhnm" in "kube-system" namespace to be "Ready" ...
	I0617 11:02:58.001326  130544 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zv99k" in "kube-system" namespace to be "Ready" ...
	I0617 11:02:58.001380  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zv99k
	I0617 11:02:58.001387  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:58.001394  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:58.001399  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:58.004801  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:02:58.005785  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:02:58.005803  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:58.005810  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:58.005815  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:58.008950  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:02:58.009623  130544 pod_ready.go:92] pod "coredns-7db6d8ff4d-zv99k" in "kube-system" namespace has status "Ready":"True"
	I0617 11:02:58.009639  130544 pod_ready.go:81] duration metric: took 8.306009ms for pod "coredns-7db6d8ff4d-zv99k" in "kube-system" namespace to be "Ready" ...
	I0617 11:02:58.009648  130544 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-064080" in "kube-system" namespace to be "Ready" ...
	I0617 11:02:58.009709  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080
	I0617 11:02:58.009716  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:58.009722  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:58.009738  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:58.018113  130544 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0617 11:02:58.018873  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:02:58.018891  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:58.018899  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:58.018906  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:58.021503  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:02:58.022150  130544 pod_ready.go:92] pod "etcd-ha-064080" in "kube-system" namespace has status "Ready":"True"
	I0617 11:02:58.022172  130544 pod_ready.go:81] duration metric: took 12.51598ms for pod "etcd-ha-064080" in "kube-system" namespace to be "Ready" ...
	I0617 11:02:58.022181  130544 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-064080-m02" in "kube-system" namespace to be "Ready" ...
	I0617 11:02:58.022250  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m02
	I0617 11:02:58.022259  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:58.022265  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:58.022270  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:58.025096  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:02:58.025830  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:02:58.025844  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:58.025851  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:58.025855  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:58.028549  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:02:58.029269  130544 pod_ready.go:92] pod "etcd-ha-064080-m02" in "kube-system" namespace has status "Ready":"True"
	I0617 11:02:58.029286  130544 pod_ready.go:81] duration metric: took 7.099151ms for pod "etcd-ha-064080-m02" in "kube-system" namespace to be "Ready" ...
	I0617 11:02:58.029295  130544 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-064080-m03" in "kube-system" namespace to be "Ready" ...
	I0617 11:02:58.177735  130544 request.go:629] Waited for 148.339851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:02:58.177823  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:02:58.177845  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:58.177856  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:58.177862  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:58.181053  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:02:58.378135  130544 request.go:629] Waited for 196.2227ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:02:58.378216  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:02:58.378230  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:58.378243  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:58.378253  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:58.381451  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:02:58.577579  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:02:58.577605  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:58.577615  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:58.577618  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:58.581575  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:02:58.777922  130544 request.go:629] Waited for 195.390769ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:02:58.778019  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:02:58.778034  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:58.778046  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:58.778052  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:58.781491  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:02:59.030332  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:02:59.030362  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:59.030370  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:59.030376  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:59.034505  130544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0617 11:02:59.178008  130544 request.go:629] Waited for 142.332037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:02:59.178089  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:02:59.178094  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:59.178104  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:59.178110  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:59.181625  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:02:59.530426  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:02:59.530449  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:59.530457  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:59.530462  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:59.534300  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:02:59.578306  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:02:59.578330  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:59.578339  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:59.578343  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:59.581973  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:00.029789  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:00.029813  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:00.029822  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:00.029830  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:00.034036  130544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0617 11:03:00.034811  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:00.034829  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:00.034839  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:00.034843  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:00.038006  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:00.038714  130544 pod_ready.go:102] pod "etcd-ha-064080-m03" in "kube-system" namespace has status "Ready":"False"
	I0617 11:03:00.529993  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:00.530018  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:00.530026  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:00.530031  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:00.533373  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:00.534207  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:00.534223  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:00.534230  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:00.534233  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:00.537139  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:01.030160  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:01.030191  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:01.030202  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:01.030207  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:01.033797  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:01.034766  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:01.034783  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:01.034790  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:01.034793  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:01.037769  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:01.529757  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:01.529783  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:01.529794  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:01.529800  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:01.533251  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:01.533916  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:01.533936  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:01.533946  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:01.533951  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:01.536991  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:02.030427  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:02.030452  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:02.030460  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:02.030464  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:02.034591  130544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0617 11:03:02.035319  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:02.035333  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:02.035340  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:02.035345  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:02.038729  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:02.039365  130544 pod_ready.go:102] pod "etcd-ha-064080-m03" in "kube-system" namespace has status "Ready":"False"
	I0617 11:03:02.529915  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:02.531841  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:02.531860  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:02.531868  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:02.535304  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:02.536246  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:02.536262  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:02.536269  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:02.536273  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:02.539028  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:03.030120  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:03.030142  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:03.030153  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:03.030160  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:03.033605  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:03.034300  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:03.034317  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:03.034324  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:03.034328  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:03.036991  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:03.529561  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:03.529583  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:03.529592  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:03.529597  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:03.532466  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:03.533366  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:03.533379  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:03.533385  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:03.533388  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:03.536103  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:04.030080  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:04.030103  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:04.030111  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:04.030115  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:04.033835  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:04.034519  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:04.034537  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:04.034544  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:04.034549  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:04.037538  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:04.530324  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:04.530350  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:04.530361  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:04.530367  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:04.534457  130544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0617 11:03:04.535199  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:04.535215  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:04.535223  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:04.535228  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:04.538147  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:04.538657  130544 pod_ready.go:102] pod "etcd-ha-064080-m03" in "kube-system" namespace has status "Ready":"False"
	I0617 11:03:05.030260  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:05.030286  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:05.030296  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:05.030300  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:05.036759  130544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0617 11:03:05.037325  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:05.037339  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:05.037347  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:05.037353  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:05.040015  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:05.529797  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:05.529820  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:05.529828  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:05.529832  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:05.533167  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:05.533782  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:05.533801  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:05.533811  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:05.533816  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:05.536491  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:06.029485  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:06.029511  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:06.029519  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:06.029524  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:06.032746  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:06.033527  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:06.033543  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:06.033550  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:06.033553  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:06.036533  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:06.530408  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:06.530433  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:06.530443  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:06.530450  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:06.534403  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:06.535100  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:06.535117  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:06.535125  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:06.535128  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:06.538264  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:06.538801  130544 pod_ready.go:102] pod "etcd-ha-064080-m03" in "kube-system" namespace has status "Ready":"False"
	I0617 11:03:07.029776  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:07.029806  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:07.029815  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:07.029819  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:07.033711  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:07.034504  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:07.034522  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:07.034529  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:07.034534  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:07.037447  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:07.530264  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:07.530979  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:07.530994  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:07.531001  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:07.534515  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:07.535314  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:07.535331  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:07.535341  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:07.535347  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:07.538490  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:08.029495  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:08.029519  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:08.029527  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:08.029532  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:08.032595  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:08.033648  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:08.033663  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:08.033670  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:08.033674  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:08.036222  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:08.530270  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:08.530300  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:08.530308  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:08.530312  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:08.533945  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:08.534577  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:08.534595  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:08.534602  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:08.534607  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:08.537359  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:09.030234  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:09.030261  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:09.030272  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:09.030278  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:09.033609  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:09.034281  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:09.034295  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:09.034302  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:09.034306  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:09.038850  130544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0617 11:03:09.039449  130544 pod_ready.go:102] pod "etcd-ha-064080-m03" in "kube-system" namespace has status "Ready":"False"
	I0617 11:03:09.529661  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:09.529685  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:09.529696  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:09.529702  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:09.533224  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:09.534098  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:09.534118  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:09.534130  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:09.534138  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:09.536844  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:10.029849  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:10.029874  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:10.029882  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:10.029885  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:10.034785  130544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0617 11:03:10.035846  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:10.035866  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:10.035877  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:10.035884  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:10.041125  130544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0617 11:03:10.529908  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:10.529934  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:10.529942  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:10.529948  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:10.533432  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:10.534022  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:10.534038  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:10.534045  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:10.534049  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:10.537102  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:11.029476  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:11.029499  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:11.029508  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:11.029511  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:11.032960  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:11.033850  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:11.033866  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:11.033873  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:11.033878  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:11.037003  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:11.037525  130544 pod_ready.go:92] pod "etcd-ha-064080-m03" in "kube-system" namespace has status "Ready":"True"
	I0617 11:03:11.037543  130544 pod_ready.go:81] duration metric: took 13.008242382s for pod "etcd-ha-064080-m03" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:11.037560  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-064080" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:11.037610  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-064080
	I0617 11:03:11.037618  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:11.037625  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:11.037630  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:11.040168  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:11.040649  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:03:11.040664  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:11.040670  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:11.040674  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:11.042899  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:11.043471  130544 pod_ready.go:92] pod "kube-apiserver-ha-064080" in "kube-system" namespace has status "Ready":"True"
	I0617 11:03:11.043493  130544 pod_ready.go:81] duration metric: took 5.925806ms for pod "kube-apiserver-ha-064080" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:11.043509  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-064080-m02" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:11.043582  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-064080-m02
	I0617 11:03:11.043598  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:11.043605  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:11.043609  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:11.046252  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:11.046790  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:03:11.046810  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:11.046820  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:11.046825  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:11.049907  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:11.050450  130544 pod_ready.go:92] pod "kube-apiserver-ha-064080-m02" in "kube-system" namespace has status "Ready":"True"
	I0617 11:03:11.050469  130544 pod_ready.go:81] duration metric: took 6.946564ms for pod "kube-apiserver-ha-064080-m02" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:11.050481  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-064080-m03" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:11.050550  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-064080-m03
	I0617 11:03:11.050561  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:11.050570  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:11.050587  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:11.053362  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:11.053882  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:11.053896  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:11.053903  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:11.053906  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:11.055951  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:11.056469  130544 pod_ready.go:92] pod "kube-apiserver-ha-064080-m03" in "kube-system" namespace has status "Ready":"True"
	I0617 11:03:11.056488  130544 pod_ready.go:81] duration metric: took 5.999556ms for pod "kube-apiserver-ha-064080-m03" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:11.056499  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-064080" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:11.056560  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-064080
	I0617 11:03:11.056570  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:11.056576  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:11.056579  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:11.058807  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:11.059285  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:03:11.059300  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:11.059310  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:11.059317  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:11.062249  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:11.062691  130544 pod_ready.go:92] pod "kube-controller-manager-ha-064080" in "kube-system" namespace has status "Ready":"True"
	I0617 11:03:11.062708  130544 pod_ready.go:81] duration metric: took 6.198978ms for pod "kube-controller-manager-ha-064080" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:11.062716  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-064080-m02" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:11.230137  130544 request.go:629] Waited for 167.33334ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-064080-m02
	I0617 11:03:11.230243  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-064080-m02
	I0617 11:03:11.230252  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:11.230259  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:11.230264  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:11.233702  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:11.429819  130544 request.go:629] Waited for 195.374298ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:03:11.429900  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:03:11.429909  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:11.429922  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:11.429932  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:11.433247  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:11.433994  130544 pod_ready.go:92] pod "kube-controller-manager-ha-064080-m02" in "kube-system" namespace has status "Ready":"True"
	I0617 11:03:11.434012  130544 pod_ready.go:81] duration metric: took 371.280201ms for pod "kube-controller-manager-ha-064080-m02" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:11.434027  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-064080-m03" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:11.630085  130544 request.go:629] Waited for 195.990584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-064080-m03
	I0617 11:03:11.630165  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-064080-m03
	I0617 11:03:11.630177  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:11.630188  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:11.630192  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:11.633910  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:11.830078  130544 request.go:629] Waited for 195.336696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:11.830245  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:11.830265  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:11.830274  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:11.830280  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:11.833253  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:11.833727  130544 pod_ready.go:92] pod "kube-controller-manager-ha-064080-m03" in "kube-system" namespace has status "Ready":"True"
	I0617 11:03:11.833745  130544 pod_ready.go:81] duration metric: took 399.711192ms for pod "kube-controller-manager-ha-064080-m03" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:11.833760  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dd48x" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:12.029735  130544 request.go:629] Waited for 195.885682ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dd48x
	I0617 11:03:12.029820  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dd48x
	I0617 11:03:12.029826  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:12.029833  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:12.029838  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:12.033462  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:12.229471  130544 request.go:629] Waited for 195.320421ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:03:12.229592  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:03:12.229607  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:12.229622  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:12.229627  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:12.233005  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:12.233612  130544 pod_ready.go:92] pod "kube-proxy-dd48x" in "kube-system" namespace has status "Ready":"True"
	I0617 11:03:12.233633  130544 pod_ready.go:81] duration metric: took 399.866858ms for pod "kube-proxy-dd48x" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:12.233642  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gsph4" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:12.429606  130544 request.go:629] Waited for 195.875153ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gsph4
	I0617 11:03:12.429698  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gsph4
	I0617 11:03:12.429720  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:12.429732  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:12.429744  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:12.433258  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:12.630263  130544 request.go:629] Waited for 196.294759ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:12.630379  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:12.630392  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:12.630402  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:12.630411  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:12.633843  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:12.634564  130544 pod_ready.go:92] pod "kube-proxy-gsph4" in "kube-system" namespace has status "Ready":"True"
	I0617 11:03:12.634584  130544 pod_ready.go:81] duration metric: took 400.935712ms for pod "kube-proxy-gsph4" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:12.634594  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l55dg" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:12.829973  130544 request.go:629] Waited for 195.299876ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l55dg
	I0617 11:03:12.830058  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l55dg
	I0617 11:03:12.830069  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:12.830079  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:12.830086  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:12.835375  130544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0617 11:03:13.030096  130544 request.go:629] Waited for 193.378159ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:03:13.030154  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:03:13.030159  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:13.030172  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:13.030180  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:13.033911  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:13.034559  130544 pod_ready.go:92] pod "kube-proxy-l55dg" in "kube-system" namespace has status "Ready":"True"
	I0617 11:03:13.034580  130544 pod_ready.go:81] duration metric: took 399.971993ms for pod "kube-proxy-l55dg" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:13.034594  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-064080" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:13.229748  130544 request.go:629] Waited for 195.082264ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-064080
	I0617 11:03:13.229832  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-064080
	I0617 11:03:13.229841  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:13.229848  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:13.229856  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:13.233062  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:13.430214  130544 request.go:629] Waited for 196.300524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:03:13.430308  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:03:13.430320  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:13.430332  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:13.430342  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:13.434438  130544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0617 11:03:13.435749  130544 pod_ready.go:92] pod "kube-scheduler-ha-064080" in "kube-system" namespace has status "Ready":"True"
	I0617 11:03:13.435780  130544 pod_ready.go:81] duration metric: took 401.178173ms for pod "kube-scheduler-ha-064080" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:13.435792  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-064080-m02" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:13.629874  130544 request.go:629] Waited for 193.97052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-064080-m02
	I0617 11:03:13.629941  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-064080-m02
	I0617 11:03:13.629946  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:13.629954  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:13.629959  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:13.633875  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:13.830050  130544 request.go:629] Waited for 195.38029ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:03:13.830130  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:03:13.830136  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:13.830143  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:13.830149  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:13.833452  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:13.834027  130544 pod_ready.go:92] pod "kube-scheduler-ha-064080-m02" in "kube-system" namespace has status "Ready":"True"
	I0617 11:03:13.834046  130544 pod_ready.go:81] duration metric: took 398.247321ms for pod "kube-scheduler-ha-064080-m02" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:13.834055  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-064080-m03" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:14.030151  130544 request.go:629] Waited for 196.001537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-064080-m03
	I0617 11:03:14.030214  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-064080-m03
	I0617 11:03:14.030220  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:14.030227  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:14.030231  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:14.033564  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:14.229710  130544 request.go:629] Waited for 195.337834ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:14.229776  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:14.229783  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:14.229792  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:14.229799  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:14.232943  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:14.233953  130544 pod_ready.go:92] pod "kube-scheduler-ha-064080-m03" in "kube-system" namespace has status "Ready":"True"
	I0617 11:03:14.233977  130544 pod_ready.go:81] duration metric: took 399.914748ms for pod "kube-scheduler-ha-064080-m03" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:14.233992  130544 pod_ready.go:38] duration metric: took 16.252791367s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 11:03:14.234013  130544 api_server.go:52] waiting for apiserver process to appear ...
	I0617 11:03:14.234081  130544 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 11:03:14.249706  130544 api_server.go:72] duration metric: took 19.628325256s to wait for apiserver process to appear ...
	I0617 11:03:14.249730  130544 api_server.go:88] waiting for apiserver healthz status ...
	I0617 11:03:14.249748  130544 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0617 11:03:14.254222  130544 api_server.go:279] https://192.168.39.134:8443/healthz returned 200:
	ok
	I0617 11:03:14.254277  130544 round_trippers.go:463] GET https://192.168.39.134:8443/version
	I0617 11:03:14.254285  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:14.254292  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:14.254295  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:14.255440  130544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0617 11:03:14.255530  130544 api_server.go:141] control plane version: v1.30.1
	I0617 11:03:14.255547  130544 api_server.go:131] duration metric: took 5.810118ms to wait for apiserver health ...
	I0617 11:03:14.255553  130544 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 11:03:14.429974  130544 request.go:629] Waited for 174.330557ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods
	I0617 11:03:14.430051  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods
	I0617 11:03:14.430058  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:14.430070  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:14.430076  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:14.438031  130544 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0617 11:03:14.448492  130544 system_pods.go:59] 24 kube-system pods found
	I0617 11:03:14.448519  130544 system_pods.go:61] "coredns-7db6d8ff4d-xbhnm" [be37a6ec-2a49-4a56-b8a3-0da865edb05d] Running
	I0617 11:03:14.448524  130544 system_pods.go:61] "coredns-7db6d8ff4d-zv99k" [c2453fd4-894d-4212-bc48-1803e28ddba8] Running
	I0617 11:03:14.448528  130544 system_pods.go:61] "etcd-ha-064080" [f7a1e80e-8ebc-496b-8919-ebf99a8dd4b4] Running
	I0617 11:03:14.448531  130544 system_pods.go:61] "etcd-ha-064080-m02" [7de6c88f-a0b9-4fa3-b4aa-e964191aa4e5] Running
	I0617 11:03:14.448535  130544 system_pods.go:61] "etcd-ha-064080-m03" [228b9fe2-a269-42b7-8c5e-09fdd0ff9b3a] Running
	I0617 11:03:14.448539  130544 system_pods.go:61] "kindnet-48mb7" [67422049-6637-4ca3-8bd1-2b47a265829d] Running
	I0617 11:03:14.448542  130544 system_pods.go:61] "kindnet-5mg7w" [0d4c6fae-77e8-4e1a-b96f-166696984275] Running
	I0617 11:03:14.448545  130544 system_pods.go:61] "kindnet-7cqp4" [f4671f39-ca07-4520-bc35-dce8e53318de] Running
	I0617 11:03:14.448548  130544 system_pods.go:61] "kube-apiserver-ha-064080" [fd326be1-2b78-41e8-9b57-138ffdadac71] Running
	I0617 11:03:14.448552  130544 system_pods.go:61] "kube-apiserver-ha-064080-m02" [74164e88-591d-490e-b4f9-1d8ea635cd2d] Running
	I0617 11:03:14.448555  130544 system_pods.go:61] "kube-apiserver-ha-064080-m03" [8d441ecd-ed28-42b3-a5fc-38b9f8acd9fe] Running
	I0617 11:03:14.448558  130544 system_pods.go:61] "kube-controller-manager-ha-064080" [142a6154-fcbf-4d5d-a222-21d1b46720cb] Running
	I0617 11:03:14.448561  130544 system_pods.go:61] "kube-controller-manager-ha-064080-m02" [f096dd77-2f79-479e-bd06-b02c942200c6] Running
	I0617 11:03:14.448564  130544 system_pods.go:61] "kube-controller-manager-ha-064080-m03" [e3289fce-4b45-4c3d-b826-628d6951e78c] Running
	I0617 11:03:14.448567  130544 system_pods.go:61] "kube-proxy-dd48x" [e1bd1d47-a8a5-47a5-820c-dd86f7ea7765] Running
	I0617 11:03:14.448570  130544 system_pods.go:61] "kube-proxy-gsph4" [541b12cf-3e15-45e1-8c97-0c28e8b17e2a] Running
	I0617 11:03:14.448573  130544 system_pods.go:61] "kube-proxy-l55dg" [1d827d6c-0432-4162-924c-d43b66b08c26] Running
	I0617 11:03:14.448576  130544 system_pods.go:61] "kube-scheduler-ha-064080" [f9e62714-7ec7-47a9-ab16-6afada18c6d8] Running
	I0617 11:03:14.448580  130544 system_pods.go:61] "kube-scheduler-ha-064080-m02" [ec804903-8a64-4a3d-8843-9d2ec21d7158] Running
	I0617 11:03:14.448583  130544 system_pods.go:61] "kube-scheduler-ha-064080-m03" [e33dbdc2-c3b4-489d-8fe0-e458da065d42] Running
	I0617 11:03:14.448586  130544 system_pods.go:61] "kube-vip-ha-064080" [6b9259b1-ee46-4493-ba10-dcb32da03f57] Running
	I0617 11:03:14.448589  130544 system_pods.go:61] "kube-vip-ha-064080-m02" [8a4ad095-97bf-4a1f-8579-9e6a564f24ed] Running
	I0617 11:03:14.448592  130544 system_pods.go:61] "kube-vip-ha-064080-m03" [a6754167-2759-44c2-bdb6-2fe9d8b601fd] Running
	I0617 11:03:14.448595  130544 system_pods.go:61] "storage-provisioner" [5646fca8-9ebc-47c1-b5ff-c87b0ed800d8] Running
	I0617 11:03:14.448601  130544 system_pods.go:74] duration metric: took 193.042133ms to wait for pod list to return data ...
	I0617 11:03:14.448610  130544 default_sa.go:34] waiting for default service account to be created ...
	I0617 11:03:14.629501  130544 request.go:629] Waited for 180.813341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/default/serviceaccounts
	I0617 11:03:14.629566  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/default/serviceaccounts
	I0617 11:03:14.629571  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:14.629578  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:14.629583  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:14.632904  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:14.633034  130544 default_sa.go:45] found service account: "default"
	I0617 11:03:14.633051  130544 default_sa.go:55] duration metric: took 184.434282ms for default service account to be created ...
	I0617 11:03:14.633062  130544 system_pods.go:116] waiting for k8s-apps to be running ...
	I0617 11:03:14.830413  130544 request.go:629] Waited for 197.271917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods
	I0617 11:03:14.830474  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods
	I0617 11:03:14.830480  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:14.830488  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:14.830492  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:14.837111  130544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0617 11:03:14.844000  130544 system_pods.go:86] 24 kube-system pods found
	I0617 11:03:14.844025  130544 system_pods.go:89] "coredns-7db6d8ff4d-xbhnm" [be37a6ec-2a49-4a56-b8a3-0da865edb05d] Running
	I0617 11:03:14.844030  130544 system_pods.go:89] "coredns-7db6d8ff4d-zv99k" [c2453fd4-894d-4212-bc48-1803e28ddba8] Running
	I0617 11:03:14.844034  130544 system_pods.go:89] "etcd-ha-064080" [f7a1e80e-8ebc-496b-8919-ebf99a8dd4b4] Running
	I0617 11:03:14.844038  130544 system_pods.go:89] "etcd-ha-064080-m02" [7de6c88f-a0b9-4fa3-b4aa-e964191aa4e5] Running
	I0617 11:03:14.844042  130544 system_pods.go:89] "etcd-ha-064080-m03" [228b9fe2-a269-42b7-8c5e-09fdd0ff9b3a] Running
	I0617 11:03:14.844047  130544 system_pods.go:89] "kindnet-48mb7" [67422049-6637-4ca3-8bd1-2b47a265829d] Running
	I0617 11:03:14.844051  130544 system_pods.go:89] "kindnet-5mg7w" [0d4c6fae-77e8-4e1a-b96f-166696984275] Running
	I0617 11:03:14.844055  130544 system_pods.go:89] "kindnet-7cqp4" [f4671f39-ca07-4520-bc35-dce8e53318de] Running
	I0617 11:03:14.844059  130544 system_pods.go:89] "kube-apiserver-ha-064080" [fd326be1-2b78-41e8-9b57-138ffdadac71] Running
	I0617 11:03:14.844063  130544 system_pods.go:89] "kube-apiserver-ha-064080-m02" [74164e88-591d-490e-b4f9-1d8ea635cd2d] Running
	I0617 11:03:14.844067  130544 system_pods.go:89] "kube-apiserver-ha-064080-m03" [8d441ecd-ed28-42b3-a5fc-38b9f8acd9fe] Running
	I0617 11:03:14.844073  130544 system_pods.go:89] "kube-controller-manager-ha-064080" [142a6154-fcbf-4d5d-a222-21d1b46720cb] Running
	I0617 11:03:14.844081  130544 system_pods.go:89] "kube-controller-manager-ha-064080-m02" [f096dd77-2f79-479e-bd06-b02c942200c6] Running
	I0617 11:03:14.844086  130544 system_pods.go:89] "kube-controller-manager-ha-064080-m03" [e3289fce-4b45-4c3d-b826-628d6951e78c] Running
	I0617 11:03:14.844090  130544 system_pods.go:89] "kube-proxy-dd48x" [e1bd1d47-a8a5-47a5-820c-dd86f7ea7765] Running
	I0617 11:03:14.844094  130544 system_pods.go:89] "kube-proxy-gsph4" [541b12cf-3e15-45e1-8c97-0c28e8b17e2a] Running
	I0617 11:03:14.844102  130544 system_pods.go:89] "kube-proxy-l55dg" [1d827d6c-0432-4162-924c-d43b66b08c26] Running
	I0617 11:03:14.844106  130544 system_pods.go:89] "kube-scheduler-ha-064080" [f9e62714-7ec7-47a9-ab16-6afada18c6d8] Running
	I0617 11:03:14.844112  130544 system_pods.go:89] "kube-scheduler-ha-064080-m02" [ec804903-8a64-4a3d-8843-9d2ec21d7158] Running
	I0617 11:03:14.844116  130544 system_pods.go:89] "kube-scheduler-ha-064080-m03" [e33dbdc2-c3b4-489d-8fe0-e458da065d42] Running
	I0617 11:03:14.844122  130544 system_pods.go:89] "kube-vip-ha-064080" [6b9259b1-ee46-4493-ba10-dcb32da03f57] Running
	I0617 11:03:14.844125  130544 system_pods.go:89] "kube-vip-ha-064080-m02" [8a4ad095-97bf-4a1f-8579-9e6a564f24ed] Running
	I0617 11:03:14.844130  130544 system_pods.go:89] "kube-vip-ha-064080-m03" [a6754167-2759-44c2-bdb6-2fe9d8b601fd] Running
	I0617 11:03:14.844134  130544 system_pods.go:89] "storage-provisioner" [5646fca8-9ebc-47c1-b5ff-c87b0ed800d8] Running
	I0617 11:03:14.844143  130544 system_pods.go:126] duration metric: took 211.071081ms to wait for k8s-apps to be running ...
	I0617 11:03:14.844150  130544 system_svc.go:44] waiting for kubelet service to be running ....
	I0617 11:03:14.844195  130544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:03:14.860938  130544 system_svc.go:56] duration metric: took 16.775634ms WaitForService to wait for kubelet
	I0617 11:03:14.860973  130544 kubeadm.go:576] duration metric: took 20.239595677s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 11:03:14.860999  130544 node_conditions.go:102] verifying NodePressure condition ...
	I0617 11:03:15.030462  130544 request.go:629] Waited for 169.336616ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/nodes
	I0617 11:03:15.030529  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes
	I0617 11:03:15.030541  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:15.030552  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:15.030563  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:15.033962  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:15.035161  130544 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 11:03:15.035183  130544 node_conditions.go:123] node cpu capacity is 2
	I0617 11:03:15.035200  130544 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 11:03:15.035206  130544 node_conditions.go:123] node cpu capacity is 2
	I0617 11:03:15.035212  130544 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 11:03:15.035221  130544 node_conditions.go:123] node cpu capacity is 2
	I0617 11:03:15.035227  130544 node_conditions.go:105] duration metric: took 174.222144ms to run NodePressure ...
	I0617 11:03:15.035245  130544 start.go:240] waiting for startup goroutines ...
	I0617 11:03:15.035270  130544 start.go:254] writing updated cluster config ...
	I0617 11:03:15.035660  130544 ssh_runner.go:195] Run: rm -f paused
	I0617 11:03:15.086530  130544 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0617 11:03:15.088850  130544 out.go:177] * Done! kubectl is now configured to use "ha-064080" cluster and "default" namespace by default
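	[Editor's note] The run log above shows minikube's pod_ready helper repeatedly issuing paired GET requests (pod, then node) roughly every 500ms until each kube-system pod reports Ready=True, followed by the apiserver /healthz probe and the kube-system pod inventory. The sketch below illustrates only that polling pattern with client-go; it is not minikube's actual helper — the function name waitForPodReady, the 500ms interval, and the use of clientcmd.RecommendedHomeFile for kubeconfig discovery are assumptions made for the example.

	// readiness-poll sketch (assumed example, not minikube source)
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodReady polls the API server until the named pod reports a
	// Ready condition of True, or the context deadline expires.
	func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
		ticker := time.NewTicker(500 * time.Millisecond) // assumed interval, mirrors the ~500ms cadence in the log
		defer ticker.Stop()
		for {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			// Wait for the next tick or give up when the overall timeout is hit.
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-ticker.C:
			}
		}
	}

	func main() {
		// Load kubeconfig from the default location (~/.kube/config); assumption for this sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// 6m matches the per-pod timeout mentioned in the log ("waiting up to 6m0s").
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitForPodReady(ctx, cs, "kube-system", "etcd-ha-064080-m03"); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}

	The "Waited for ...ms due to client-side throttling" lines in the log come from client-go's default client-side rate limiter, not from API Priority and Fairness on the server.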
	
	
	==> CRI-O <==
	Jun 17 11:06:39 ha-064080 crio[680]: time="2024-06-17 11:06:39.118763041Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718622399118742215,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f2d3dc17-fc4e-4a73-9e5c-9fee44ac003f name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:06:39 ha-064080 crio[680]: time="2024-06-17 11:06:39.119435785Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=72298bcb-620a-40fa-99c8-f8708c228fde name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:06:39 ha-064080 crio[680]: time="2024-06-17 11:06:39.119502523Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=72298bcb-620a-40fa-99c8-f8708c228fde name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:06:39 ha-064080 crio[680]: time="2024-06-17 11:06:39.119724179Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1a562b9195d78591133b90abc121faa5dbf34feac5066f4f821669a5b8c27e85,PodSandboxId:32924073f320b5367b28757d06fe232b7af64ccf6539c044b32541c03c8b9cc7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718622197449697447,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-89r9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1a8712a-2ef7-4400-98c9-5cee97c0d721,},Annotations:map[string]string{io.kubernetes.container.hash: 85c5faa6,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3628888540ea5d9ce507b92a3b2e929cf72c29f17271ad882b6d18ce4cf6328,PodSandboxId:20be829b9ffef66a57eb936abd30f0a0daa6277806fc399919edde5c9193aa94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718622049377736889,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xbhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be37a6ec-2a49-4a56-b8a3-0da865edb05d,},Annotations:map[string]string{io.kubernetes.container.hash: caa2bf79,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10061c1b3dd4f2865f83bf729b221fef3435324d6cef9ceb1a6631e0ccefa31c,PodSandboxId:54a9c95a1ef70b178265a9c78e9dbcddfb9f8cb7ddc312e0e324a4f449b6ebc9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718622049372976570,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zv99k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
c2453fd4-894d-4212-bc48-1803e28ddba8,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9e113a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb9fa67df5a3f15517f0cc5493139c9ec692bbadbef748f1315698a8ae05601f,PodSandboxId:f9df57723b165a731e239a6ef5aa2bc8caad54a36061dfb7afcd1021c1962f8b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1718622049320124723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5646fca8-9ebc-47c1-b5ff-c87b0ed800d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75be2958,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be33376c9348ffc6f1e2f31be21508d4aa16ebb1729b2780dabed95ba3ec9bbc,PodSandboxId:f67453c7d28830b38751fef3fd549d9fc1c2196b59ab402fdb76c2baae9174af,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1718622047566527858,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-48mb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67422049-6637-4ca3-8bd1-2b47a265829d,},Annotations:map[string]string{io.kubernetes.container.hash: 6d02cd67,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8852bc2fd7b618e61e270006b27e8557aaf8230a9278a60245e25a23732a83eb,PodSandboxId:78661140f722ccccbbef01859ed0a403a118690cd55dd92f4d2cf08d1c03af3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171862204
5688267141,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dd48x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1bd1d47-a8a5-47a5-820c-dd86f7ea7765,},Annotations:map[string]string{io.kubernetes.container.hash: 8b6be506,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24495c319c5c94afe6d0b59a3e9bc367b4539472e5846002db4fc1b802fac288,PodSandboxId:502e1e8fec2b89c90310f59069521c2fdde5e165e725bec6e1cbab4ef89951dd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17186220280
13380615,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ffa31c75020c2c61ed38418bc6b9660,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddf5516bbfc1d7ca0c4a0ebc2026888f4c7754891f8a6cfa30b49ea80c4c6a1b,PodSandboxId:5f8d58d694025bb9c7d62e4497344e57a4f85fbaaacc72882f259fd69bf8b688,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718622025755316625,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a91621493b7895ffb468d74d39c887,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be01152b9ab18f70b88322e4262f33d332dd8aa951d6262c8ac130261de6479d,PodSandboxId:4b79ce1b27f110ccadaad87cef79c43a9db99fbaa28089b3617bf2d74bb5b811,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718622025707947555,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21807c08d0f93f57866ad62dca0e176d,},Annotations:map[string]string{io.kubernetes.container.hash: 8e9320c4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecbb08a618aa76655e33c89e573535ed17f386cc522fcc35722eeb4ad859a1ad,PodSandboxId:7293d250b3e0dd840434d7afd153d17ac7842ec4f356edd9bac3f40f6603de1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718622025699829962,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ca5c8841cd25b2122df7e1cad8d883e,},Annotations:map[string]string{io.kubernetes.container.hash: a022c9c1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60cc5a9cf66217b34591b28809211824808cb7da50dd0c7971be5bd514e3b328,PodSandboxId:cb4974ce47c357bdbcfd6dd322289bd64cf2cbb3c4a7ad3e2ee523444ebfc04e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718622025592353826,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99603afdeee0e2b8645e4cb7c5a1ed41,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=72298bcb-620a-40fa-99c8-f8708c228fde name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:06:39 ha-064080 crio[680]: time="2024-06-17 11:06:39.162268968Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d1b8d0e3-be31-4d78-a557-6fedf5d30b04 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:06:39 ha-064080 crio[680]: time="2024-06-17 11:06:39.162337934Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d1b8d0e3-be31-4d78-a557-6fedf5d30b04 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:06:39 ha-064080 crio[680]: time="2024-06-17 11:06:39.163613150Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=679a83bb-50a0-4449-a110-1689f6b45beb name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:06:39 ha-064080 crio[680]: time="2024-06-17 11:06:39.164152344Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718622399164128660,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=679a83bb-50a0-4449-a110-1689f6b45beb name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:06:39 ha-064080 crio[680]: time="2024-06-17 11:06:39.164715655Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=94654bad-e800-4384-bd6c-8a9d7a0b2244 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:06:39 ha-064080 crio[680]: time="2024-06-17 11:06:39.164770433Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=94654bad-e800-4384-bd6c-8a9d7a0b2244 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:06:39 ha-064080 crio[680]: time="2024-06-17 11:06:39.165097545Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1a562b9195d78591133b90abc121faa5dbf34feac5066f4f821669a5b8c27e85,PodSandboxId:32924073f320b5367b28757d06fe232b7af64ccf6539c044b32541c03c8b9cc7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718622197449697447,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-89r9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1a8712a-2ef7-4400-98c9-5cee97c0d721,},Annotations:map[string]string{io.kubernetes.container.hash: 85c5faa6,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3628888540ea5d9ce507b92a3b2e929cf72c29f17271ad882b6d18ce4cf6328,PodSandboxId:20be829b9ffef66a57eb936abd30f0a0daa6277806fc399919edde5c9193aa94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718622049377736889,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xbhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be37a6ec-2a49-4a56-b8a3-0da865edb05d,},Annotations:map[string]string{io.kubernetes.container.hash: caa2bf79,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10061c1b3dd4f2865f83bf729b221fef3435324d6cef9ceb1a6631e0ccefa31c,PodSandboxId:54a9c95a1ef70b178265a9c78e9dbcddfb9f8cb7ddc312e0e324a4f449b6ebc9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718622049372976570,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zv99k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
c2453fd4-894d-4212-bc48-1803e28ddba8,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9e113a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb9fa67df5a3f15517f0cc5493139c9ec692bbadbef748f1315698a8ae05601f,PodSandboxId:f9df57723b165a731e239a6ef5aa2bc8caad54a36061dfb7afcd1021c1962f8b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1718622049320124723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5646fca8-9ebc-47c1-b5ff-c87b0ed800d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75be2958,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be33376c9348ffc6f1e2f31be21508d4aa16ebb1729b2780dabed95ba3ec9bbc,PodSandboxId:f67453c7d28830b38751fef3fd549d9fc1c2196b59ab402fdb76c2baae9174af,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1718622047566527858,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-48mb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67422049-6637-4ca3-8bd1-2b47a265829d,},Annotations:map[string]string{io.kubernetes.container.hash: 6d02cd67,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8852bc2fd7b618e61e270006b27e8557aaf8230a9278a60245e25a23732a83eb,PodSandboxId:78661140f722ccccbbef01859ed0a403a118690cd55dd92f4d2cf08d1c03af3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171862204
5688267141,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dd48x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1bd1d47-a8a5-47a5-820c-dd86f7ea7765,},Annotations:map[string]string{io.kubernetes.container.hash: 8b6be506,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24495c319c5c94afe6d0b59a3e9bc367b4539472e5846002db4fc1b802fac288,PodSandboxId:502e1e8fec2b89c90310f59069521c2fdde5e165e725bec6e1cbab4ef89951dd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17186220280
13380615,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ffa31c75020c2c61ed38418bc6b9660,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddf5516bbfc1d7ca0c4a0ebc2026888f4c7754891f8a6cfa30b49ea80c4c6a1b,PodSandboxId:5f8d58d694025bb9c7d62e4497344e57a4f85fbaaacc72882f259fd69bf8b688,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718622025755316625,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a91621493b7895ffb468d74d39c887,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be01152b9ab18f70b88322e4262f33d332dd8aa951d6262c8ac130261de6479d,PodSandboxId:4b79ce1b27f110ccadaad87cef79c43a9db99fbaa28089b3617bf2d74bb5b811,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718622025707947555,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21807c08d0f93f57866ad62dca0e176d,},Annotations:map[string]string{io.kubernetes.container.hash: 8e9320c4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecbb08a618aa76655e33c89e573535ed17f386cc522fcc35722eeb4ad859a1ad,PodSandboxId:7293d250b3e0dd840434d7afd153d17ac7842ec4f356edd9bac3f40f6603de1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718622025699829962,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ca5c8841cd25b2122df7e1cad8d883e,},Annotations:map[string]string{io.kubernetes.container.hash: a022c9c1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60cc5a9cf66217b34591b28809211824808cb7da50dd0c7971be5bd514e3b328,PodSandboxId:cb4974ce47c357bdbcfd6dd322289bd64cf2cbb3c4a7ad3e2ee523444ebfc04e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718622025592353826,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99603afdeee0e2b8645e4cb7c5a1ed41,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=94654bad-e800-4384-bd6c-8a9d7a0b2244 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:06:39 ha-064080 crio[680]: time="2024-06-17 11:06:39.202998143Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ec2c4d95-ec94-4e80-b96f-e3b1a7b1aed6 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:06:39 ha-064080 crio[680]: time="2024-06-17 11:06:39.203081766Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ec2c4d95-ec94-4e80-b96f-e3b1a7b1aed6 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:06:39 ha-064080 crio[680]: time="2024-06-17 11:06:39.204174066Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0e80e6e9-495d-4bf8-81f5-a4ca83661fd4 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:06:39 ha-064080 crio[680]: time="2024-06-17 11:06:39.204651474Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718622399204630494,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0e80e6e9-495d-4bf8-81f5-a4ca83661fd4 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:06:39 ha-064080 crio[680]: time="2024-06-17 11:06:39.205327607Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6a29f689-1b27-44e3-bc93-751a8b248234 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:06:39 ha-064080 crio[680]: time="2024-06-17 11:06:39.205381175Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6a29f689-1b27-44e3-bc93-751a8b248234 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:06:39 ha-064080 crio[680]: time="2024-06-17 11:06:39.205609087Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1a562b9195d78591133b90abc121faa5dbf34feac5066f4f821669a5b8c27e85,PodSandboxId:32924073f320b5367b28757d06fe232b7af64ccf6539c044b32541c03c8b9cc7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718622197449697447,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-89r9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1a8712a-2ef7-4400-98c9-5cee97c0d721,},Annotations:map[string]string{io.kubernetes.container.hash: 85c5faa6,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3628888540ea5d9ce507b92a3b2e929cf72c29f17271ad882b6d18ce4cf6328,PodSandboxId:20be829b9ffef66a57eb936abd30f0a0daa6277806fc399919edde5c9193aa94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718622049377736889,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xbhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be37a6ec-2a49-4a56-b8a3-0da865edb05d,},Annotations:map[string]string{io.kubernetes.container.hash: caa2bf79,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10061c1b3dd4f2865f83bf729b221fef3435324d6cef9ceb1a6631e0ccefa31c,PodSandboxId:54a9c95a1ef70b178265a9c78e9dbcddfb9f8cb7ddc312e0e324a4f449b6ebc9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718622049372976570,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zv99k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
c2453fd4-894d-4212-bc48-1803e28ddba8,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9e113a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb9fa67df5a3f15517f0cc5493139c9ec692bbadbef748f1315698a8ae05601f,PodSandboxId:f9df57723b165a731e239a6ef5aa2bc8caad54a36061dfb7afcd1021c1962f8b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1718622049320124723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5646fca8-9ebc-47c1-b5ff-c87b0ed800d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75be2958,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be33376c9348ffc6f1e2f31be21508d4aa16ebb1729b2780dabed95ba3ec9bbc,PodSandboxId:f67453c7d28830b38751fef3fd549d9fc1c2196b59ab402fdb76c2baae9174af,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1718622047566527858,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-48mb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67422049-6637-4ca3-8bd1-2b47a265829d,},Annotations:map[string]string{io.kubernetes.container.hash: 6d02cd67,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8852bc2fd7b618e61e270006b27e8557aaf8230a9278a60245e25a23732a83eb,PodSandboxId:78661140f722ccccbbef01859ed0a403a118690cd55dd92f4d2cf08d1c03af3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171862204
5688267141,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dd48x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1bd1d47-a8a5-47a5-820c-dd86f7ea7765,},Annotations:map[string]string{io.kubernetes.container.hash: 8b6be506,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24495c319c5c94afe6d0b59a3e9bc367b4539472e5846002db4fc1b802fac288,PodSandboxId:502e1e8fec2b89c90310f59069521c2fdde5e165e725bec6e1cbab4ef89951dd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17186220280
13380615,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ffa31c75020c2c61ed38418bc6b9660,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddf5516bbfc1d7ca0c4a0ebc2026888f4c7754891f8a6cfa30b49ea80c4c6a1b,PodSandboxId:5f8d58d694025bb9c7d62e4497344e57a4f85fbaaacc72882f259fd69bf8b688,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718622025755316625,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a91621493b7895ffb468d74d39c887,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be01152b9ab18f70b88322e4262f33d332dd8aa951d6262c8ac130261de6479d,PodSandboxId:4b79ce1b27f110ccadaad87cef79c43a9db99fbaa28089b3617bf2d74bb5b811,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718622025707947555,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21807c08d0f93f57866ad62dca0e176d,},Annotations:map[string]string{io.kubernetes.container.hash: 8e9320c4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecbb08a618aa76655e33c89e573535ed17f386cc522fcc35722eeb4ad859a1ad,PodSandboxId:7293d250b3e0dd840434d7afd153d17ac7842ec4f356edd9bac3f40f6603de1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718622025699829962,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ca5c8841cd25b2122df7e1cad8d883e,},Annotations:map[string]string{io.kubernetes.container.hash: a022c9c1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60cc5a9cf66217b34591b28809211824808cb7da50dd0c7971be5bd514e3b328,PodSandboxId:cb4974ce47c357bdbcfd6dd322289bd64cf2cbb3c4a7ad3e2ee523444ebfc04e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718622025592353826,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99603afdeee0e2b8645e4cb7c5a1ed41,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6a29f689-1b27-44e3-bc93-751a8b248234 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:06:39 ha-064080 crio[680]: time="2024-06-17 11:06:39.245183303Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2ccd804f-f717-42d2-82e9-d800c1e4d589 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:06:39 ha-064080 crio[680]: time="2024-06-17 11:06:39.245252057Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2ccd804f-f717-42d2-82e9-d800c1e4d589 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:06:39 ha-064080 crio[680]: time="2024-06-17 11:06:39.246394838Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=400e52eb-8d5c-459b-b28c-12a8eda3753c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:06:39 ha-064080 crio[680]: time="2024-06-17 11:06:39.246932783Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718622399246908144,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=400e52eb-8d5c-459b-b28c-12a8eda3753c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:06:39 ha-064080 crio[680]: time="2024-06-17 11:06:39.247488833Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eb75eb5a-d401-47e5-8ded-3ce803534672 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:06:39 ha-064080 crio[680]: time="2024-06-17 11:06:39.247557975Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eb75eb5a-d401-47e5-8ded-3ce803534672 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:06:39 ha-064080 crio[680]: time="2024-06-17 11:06:39.247809559Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1a562b9195d78591133b90abc121faa5dbf34feac5066f4f821669a5b8c27e85,PodSandboxId:32924073f320b5367b28757d06fe232b7af64ccf6539c044b32541c03c8b9cc7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718622197449697447,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-89r9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1a8712a-2ef7-4400-98c9-5cee97c0d721,},Annotations:map[string]string{io.kubernetes.container.hash: 85c5faa6,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3628888540ea5d9ce507b92a3b2e929cf72c29f17271ad882b6d18ce4cf6328,PodSandboxId:20be829b9ffef66a57eb936abd30f0a0daa6277806fc399919edde5c9193aa94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718622049377736889,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xbhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be37a6ec-2a49-4a56-b8a3-0da865edb05d,},Annotations:map[string]string{io.kubernetes.container.hash: caa2bf79,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10061c1b3dd4f2865f83bf729b221fef3435324d6cef9ceb1a6631e0ccefa31c,PodSandboxId:54a9c95a1ef70b178265a9c78e9dbcddfb9f8cb7ddc312e0e324a4f449b6ebc9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718622049372976570,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zv99k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
c2453fd4-894d-4212-bc48-1803e28ddba8,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9e113a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb9fa67df5a3f15517f0cc5493139c9ec692bbadbef748f1315698a8ae05601f,PodSandboxId:f9df57723b165a731e239a6ef5aa2bc8caad54a36061dfb7afcd1021c1962f8b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1718622049320124723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5646fca8-9ebc-47c1-b5ff-c87b0ed800d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75be2958,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be33376c9348ffc6f1e2f31be21508d4aa16ebb1729b2780dabed95ba3ec9bbc,PodSandboxId:f67453c7d28830b38751fef3fd549d9fc1c2196b59ab402fdb76c2baae9174af,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1718622047566527858,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-48mb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67422049-6637-4ca3-8bd1-2b47a265829d,},Annotations:map[string]string{io.kubernetes.container.hash: 6d02cd67,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8852bc2fd7b618e61e270006b27e8557aaf8230a9278a60245e25a23732a83eb,PodSandboxId:78661140f722ccccbbef01859ed0a403a118690cd55dd92f4d2cf08d1c03af3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171862204
5688267141,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dd48x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1bd1d47-a8a5-47a5-820c-dd86f7ea7765,},Annotations:map[string]string{io.kubernetes.container.hash: 8b6be506,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24495c319c5c94afe6d0b59a3e9bc367b4539472e5846002db4fc1b802fac288,PodSandboxId:502e1e8fec2b89c90310f59069521c2fdde5e165e725bec6e1cbab4ef89951dd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17186220280
13380615,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ffa31c75020c2c61ed38418bc6b9660,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddf5516bbfc1d7ca0c4a0ebc2026888f4c7754891f8a6cfa30b49ea80c4c6a1b,PodSandboxId:5f8d58d694025bb9c7d62e4497344e57a4f85fbaaacc72882f259fd69bf8b688,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718622025755316625,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a91621493b7895ffb468d74d39c887,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be01152b9ab18f70b88322e4262f33d332dd8aa951d6262c8ac130261de6479d,PodSandboxId:4b79ce1b27f110ccadaad87cef79c43a9db99fbaa28089b3617bf2d74bb5b811,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718622025707947555,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21807c08d0f93f57866ad62dca0e176d,},Annotations:map[string]string{io.kubernetes.container.hash: 8e9320c4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecbb08a618aa76655e33c89e573535ed17f386cc522fcc35722eeb4ad859a1ad,PodSandboxId:7293d250b3e0dd840434d7afd153d17ac7842ec4f356edd9bac3f40f6603de1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718622025699829962,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ca5c8841cd25b2122df7e1cad8d883e,},Annotations:map[string]string{io.kubernetes.container.hash: a022c9c1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60cc5a9cf66217b34591b28809211824808cb7da50dd0c7971be5bd514e3b328,PodSandboxId:cb4974ce47c357bdbcfd6dd322289bd64cf2cbb3c4a7ad3e2ee523444ebfc04e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718622025592353826,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99603afdeee0e2b8645e4cb7c5a1ed41,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eb75eb5a-d401-47e5-8ded-3ce803534672 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1a562b9195d78       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   32924073f320b       busybox-fc5497c4f-89r9v
	c3628888540ea       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   20be829b9ffef       coredns-7db6d8ff4d-xbhnm
	10061c1b3dd4f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   54a9c95a1ef70       coredns-7db6d8ff4d-zv99k
	bb9fa67df5a3f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   f9df57723b165       storage-provisioner
	be33376c9348f       docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266    5 minutes ago       Running             kindnet-cni               0                   f67453c7d2883       kindnet-48mb7
	8852bc2fd7b61       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      5 minutes ago       Running             kube-proxy                0                   78661140f722c       kube-proxy-dd48x
	24495c319c5c9       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   502e1e8fec2b8       kube-vip-ha-064080
	ddf5516bbfc1d       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      6 minutes ago       Running             kube-controller-manager   0                   5f8d58d694025       kube-controller-manager-ha-064080
	be01152b9ab18       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      6 minutes ago       Running             kube-apiserver            0                   4b79ce1b27f11       kube-apiserver-ha-064080
	ecbb08a618aa7       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   7293d250b3e0d       etcd-ha-064080
	60cc5a9cf6621       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      6 minutes ago       Running             kube-scheduler            0                   cb4974ce47c35       kube-scheduler-ha-064080
	
	
	==> coredns [10061c1b3dd4f2865f83bf729b221fef3435324d6cef9ceb1a6631e0ccefa31c] <==
	[INFO] 10.244.1.2:54092 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000245278s
	[INFO] 10.244.1.2:44037 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000591531s
	[INFO] 10.244.1.2:60098 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001588047s
	[INFO] 10.244.1.2:43747 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000095343s
	[INFO] 10.244.1.2:43363 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000301914s
	[INFO] 10.244.1.2:47475 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117378s
	[INFO] 10.244.2.2:50417 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002227444s
	[INFO] 10.244.2.2:60625 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001284466s
	[INFO] 10.244.2.2:49631 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000063512s
	[INFO] 10.244.2.2:60462 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075059s
	[INFO] 10.244.2.2:55188 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061001s
	[INFO] 10.244.0.4:44285 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114934s
	[INFO] 10.244.0.4:41654 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082437s
	[INFO] 10.244.1.2:41564 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167707s
	[INFO] 10.244.1.2:48527 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000199996s
	[INFO] 10.244.1.2:54645 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000101253s
	[INFO] 10.244.1.2:46137 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000161774s
	[INFO] 10.244.2.2:47749 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123256s
	[INFO] 10.244.2.2:44797 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000155611s
	[INFO] 10.244.0.4:57514 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00013406s
	[INFO] 10.244.1.2:57226 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001349s
	[INFO] 10.244.1.2:38456 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000150623s
	[INFO] 10.244.1.2:34565 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000206574s
	[INFO] 10.244.2.2:55350 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000181312s
	[INFO] 10.244.2.2:54665 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000284418s
	
	
	==> coredns [c3628888540ea5d9ce507b92a3b2e929cf72c29f17271ad882b6d18ce4cf6328] <==
	[INFO] 10.244.1.2:57521 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000455867s
	[INFO] 10.244.1.2:34642 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001672898s
	[INFO] 10.244.1.2:55414 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000439082s
	[INFO] 10.244.1.2:35407 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001712046s
	[INFO] 10.244.2.2:35032 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000113662s
	[INFO] 10.244.2.2:41388 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000113624s
	[INFO] 10.244.0.4:54403 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009057413s
	[INFO] 10.244.0.4:55736 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00029139s
	[INFO] 10.244.0.4:56993 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000168668s
	[INFO] 10.244.0.4:54854 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168204s
	[INFO] 10.244.1.2:39920 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000461115s
	[INFO] 10.244.1.2:59121 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005103552s
	[INFO] 10.244.2.2:33690 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000260726s
	[INFO] 10.244.2.2:40819 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103621s
	[INFO] 10.244.2.2:47624 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000173244s
	[INFO] 10.244.0.4:45570 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101008s
	[INFO] 10.244.0.4:38238 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096216s
	[INFO] 10.244.2.2:47491 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144426s
	[INFO] 10.244.2.2:57595 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010924s
	[INFO] 10.244.0.4:37645 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011472s
	[INFO] 10.244.0.4:40937 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000173334s
	[INFO] 10.244.0.4:38240 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00010406s
	[INFO] 10.244.1.2:51662 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000104731s
	[INFO] 10.244.2.2:33365 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000139748s
	[INFO] 10.244.2.2:44022 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000178435s
	
	
	==> describe nodes <==
	Name:               ha-064080
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-064080
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6
	                    minikube.k8s.io/name=ha-064080
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_17T11_00_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jun 2024 11:00:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-064080
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jun 2024 11:06:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jun 2024 11:03:35 +0000   Mon, 17 Jun 2024 11:00:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jun 2024 11:03:35 +0000   Mon, 17 Jun 2024 11:00:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jun 2024 11:03:35 +0000   Mon, 17 Jun 2024 11:00:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jun 2024 11:03:35 +0000   Mon, 17 Jun 2024 11:00:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.134
	  Hostname:    ha-064080
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f526834e1094a1798c2f7e5de014d6a
	  System UUID:                6f526834-e109-4a17-98c2-f7e5de014d6a
	  Boot ID:                    7c18f343-1055-464d-948c-cec47020ebb1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-89r9v              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m24s
	  kube-system                 coredns-7db6d8ff4d-xbhnm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m54s
	  kube-system                 coredns-7db6d8ff4d-zv99k             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m54s
	  kube-system                 etcd-ha-064080                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m7s
	  kube-system                 kindnet-48mb7                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m54s
	  kube-system                 kube-apiserver-ha-064080             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-controller-manager-ha-064080    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-proxy-dd48x                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 kube-scheduler-ha-064080             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-vip-ha-064080                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m53s  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m15s  kubelet          Node ha-064080 status is now: NodeHasSufficientMemory
	  Normal  Starting                 6m7s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m7s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m7s   kubelet          Node ha-064080 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m7s   kubelet          Node ha-064080 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m7s   kubelet          Node ha-064080 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m55s  node-controller  Node ha-064080 event: Registered Node ha-064080 in Controller
	  Normal  NodeReady                5m51s  kubelet          Node ha-064080 status is now: NodeReady
	  Normal  RegisteredNode           4m39s  node-controller  Node ha-064080 event: Registered Node ha-064080 in Controller
	  Normal  RegisteredNode           3m30s  node-controller  Node ha-064080 event: Registered Node ha-064080 in Controller
	
	
	Name:               ha-064080-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-064080-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6
	                    minikube.k8s.io/name=ha-064080
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_17T11_01_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jun 2024 11:01:41 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-064080-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jun 2024 11:04:14 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 17 Jun 2024 11:03:43 +0000   Mon, 17 Jun 2024 11:04:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 17 Jun 2024 11:03:43 +0000   Mon, 17 Jun 2024 11:04:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 17 Jun 2024 11:03:43 +0000   Mon, 17 Jun 2024 11:04:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 17 Jun 2024 11:03:43 +0000   Mon, 17 Jun 2024 11:04:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.104
	  Hostname:    ha-064080-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d22246006bf04dab820bccd210120c30
	  System UUID:                d2224600-6bf0-4dab-820b-ccd210120c30
	  Boot ID:                    096ef5df-247b-409d-8b96-8b6e8fade952
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-gf9j7                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m24s
	  kube-system                 etcd-ha-064080-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m55s
	  kube-system                 kindnet-7cqp4                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m58s
	  kube-system                 kube-apiserver-ha-064080-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-controller-manager-ha-064080-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-proxy-l55dg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-scheduler-ha-064080-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-vip-ha-064080-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m53s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m58s (x8 over 4m58s)  kubelet          Node ha-064080-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m58s (x8 over 4m58s)  kubelet          Node ha-064080-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m58s (x7 over 4m58s)  kubelet          Node ha-064080-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m55s                  node-controller  Node ha-064080-m02 event: Registered Node ha-064080-m02 in Controller
	  Normal  RegisteredNode           4m39s                  node-controller  Node ha-064080-m02 event: Registered Node ha-064080-m02 in Controller
	  Normal  RegisteredNode           3m30s                  node-controller  Node ha-064080-m02 event: Registered Node ha-064080-m02 in Controller
	  Normal  NodeNotReady             100s                   node-controller  Node ha-064080-m02 status is now: NodeNotReady
	
	
	Name:               ha-064080-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-064080-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6
	                    minikube.k8s.io/name=ha-064080
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_17T11_02_54_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jun 2024 11:02:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-064080-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jun 2024 11:06:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jun 2024 11:03:20 +0000   Mon, 17 Jun 2024 11:02:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jun 2024 11:03:20 +0000   Mon, 17 Jun 2024 11:02:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jun 2024 11:03:20 +0000   Mon, 17 Jun 2024 11:02:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jun 2024 11:03:20 +0000   Mon, 17 Jun 2024 11:02:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.168
	  Hostname:    ha-064080-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 28a9e43ded0d41f5b6e29c37565b7ecd
	  System UUID:                28a9e43d-ed0d-41f5-b6e2-9c37565b7ecd
	  Boot ID:                    5bc25cc5-bd20-436d-b597-815c4183fd44
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wbcxx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m24s
	  kube-system                 etcd-ha-064080-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m47s
	  kube-system                 kindnet-5mg7w                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m49s
	  kube-system                 kube-apiserver-ha-064080-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 kube-controller-manager-ha-064080-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 kube-proxy-gsph4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-scheduler-ha-064080-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 kube-vip-ha-064080-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m43s                  kube-proxy       
	  Normal  RegisteredNode           3m49s                  node-controller  Node ha-064080-m03 event: Registered Node ha-064080-m03 in Controller
	  Normal  Starting                 3m49s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m49s (x8 over 3m49s)  kubelet          Node ha-064080-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m49s (x8 over 3m49s)  kubelet          Node ha-064080-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m49s (x7 over 3m49s)  kubelet          Node ha-064080-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m45s                  node-controller  Node ha-064080-m03 event: Registered Node ha-064080-m03 in Controller
	  Normal  RegisteredNode           3m30s                  node-controller  Node ha-064080-m03 event: Registered Node ha-064080-m03 in Controller
	
	
	Name:               ha-064080-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-064080-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6
	                    minikube.k8s.io/name=ha-064080
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_17T11_03_52_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jun 2024 11:03:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-064080-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jun 2024 11:06:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jun 2024 11:04:22 +0000   Mon, 17 Jun 2024 11:03:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jun 2024 11:04:22 +0000   Mon, 17 Jun 2024 11:03:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jun 2024 11:04:22 +0000   Mon, 17 Jun 2024 11:03:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jun 2024 11:04:22 +0000   Mon, 17 Jun 2024 11:03:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.167
	  Hostname:    ha-064080-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 33fd5c3b11ee44e78fa203be011bc171
	  System UUID:                33fd5c3b-11ee-44e7-8fa2-03be011bc171
	  Boot ID:                    2f4f3a16-ace8-4d6c-84fb-de9f87bd3bc9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-pn664       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m48s
	  kube-system                 kube-proxy-7t8b9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m43s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m48s (x2 over 2m48s)  kubelet          Node ha-064080-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m48s (x2 over 2m48s)  kubelet          Node ha-064080-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m48s (x2 over 2m48s)  kubelet          Node ha-064080-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m45s                  node-controller  Node ha-064080-m04 event: Registered Node ha-064080-m04 in Controller
	  Normal  RegisteredNode           2m45s                  node-controller  Node ha-064080-m04 event: Registered Node ha-064080-m04 in Controller
	  Normal  RegisteredNode           2m44s                  node-controller  Node ha-064080-m04 event: Registered Node ha-064080-m04 in Controller
	  Normal  NodeReady                2m41s                  kubelet          Node ha-064080-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jun17 10:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050897] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040396] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jun17 11:00] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.382779] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.620187] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.883696] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.059639] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052376] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.200032] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.124990] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.278932] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.114561] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +4.787636] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.060887] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.333258] systemd-fstab-generator[1364]: Ignoring "noauto" option for root device
	[  +0.080001] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.043226] kauditd_printk_skb: 18 callbacks suppressed
	[ +14.410422] kauditd_printk_skb: 72 callbacks suppressed
	
	
	==> etcd [ecbb08a618aa76655e33c89e573535ed17f386cc522fcc35722eeb4ad859a1ad] <==
	{"level":"warn","ts":"2024-06-17T11:06:39.506996Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:06:39.514692Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:06:39.518587Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:06:39.535707Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:06:39.538186Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:06:39.544451Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:06:39.551549Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:06:39.556527Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:06:39.560734Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:06:39.568253Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:06:39.574206Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:06:39.580111Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:06:39.58405Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:06:39.58768Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:06:39.598971Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:06:39.606123Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:06:39.612797Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:06:39.616995Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:06:39.620024Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:06:39.637944Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:06:39.655009Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:06:39.656625Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:06:39.676095Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:06:39.692282Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:06:39.737612Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 11:06:39 up 6 min,  0 users,  load average: 0.22, 0.30, 0.16
	Linux ha-064080 5.10.207 #1 SMP Tue Jun 11 00:16:05 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [be33376c9348ffc6f1e2f31be21508d4aa16ebb1729b2780dabed95ba3ec9bbc] <==
	I0617 11:06:08.809240       1 main.go:250] Node ha-064080-m04 has CIDR [10.244.3.0/24] 
	I0617 11:06:18.815703       1 main.go:223] Handling node with IPs: map[192.168.39.134:{}]
	I0617 11:06:18.815744       1 main.go:227] handling current node
	I0617 11:06:18.815755       1 main.go:223] Handling node with IPs: map[192.168.39.104:{}]
	I0617 11:06:18.815760       1 main.go:250] Node ha-064080-m02 has CIDR [10.244.1.0/24] 
	I0617 11:06:18.815930       1 main.go:223] Handling node with IPs: map[192.168.39.168:{}]
	I0617 11:06:18.815956       1 main.go:250] Node ha-064080-m03 has CIDR [10.244.2.0/24] 
	I0617 11:06:18.816066       1 main.go:223] Handling node with IPs: map[192.168.39.167:{}]
	I0617 11:06:18.816096       1 main.go:250] Node ha-064080-m04 has CIDR [10.244.3.0/24] 
	I0617 11:06:28.829034       1 main.go:223] Handling node with IPs: map[192.168.39.134:{}]
	I0617 11:06:28.829163       1 main.go:227] handling current node
	I0617 11:06:28.829198       1 main.go:223] Handling node with IPs: map[192.168.39.104:{}]
	I0617 11:06:28.829223       1 main.go:250] Node ha-064080-m02 has CIDR [10.244.1.0/24] 
	I0617 11:06:28.834374       1 main.go:223] Handling node with IPs: map[192.168.39.168:{}]
	I0617 11:06:28.836031       1 main.go:250] Node ha-064080-m03 has CIDR [10.244.2.0/24] 
	I0617 11:06:28.836500       1 main.go:223] Handling node with IPs: map[192.168.39.167:{}]
	I0617 11:06:28.836602       1 main.go:250] Node ha-064080-m04 has CIDR [10.244.3.0/24] 
	I0617 11:06:38.849826       1 main.go:223] Handling node with IPs: map[192.168.39.134:{}]
	I0617 11:06:38.849932       1 main.go:227] handling current node
	I0617 11:06:38.849947       1 main.go:223] Handling node with IPs: map[192.168.39.104:{}]
	I0617 11:06:38.849955       1 main.go:250] Node ha-064080-m02 has CIDR [10.244.1.0/24] 
	I0617 11:06:38.850081       1 main.go:223] Handling node with IPs: map[192.168.39.168:{}]
	I0617 11:06:38.850114       1 main.go:250] Node ha-064080-m03 has CIDR [10.244.2.0/24] 
	I0617 11:06:38.850197       1 main.go:223] Handling node with IPs: map[192.168.39.167:{}]
	I0617 11:06:38.850205       1 main.go:250] Node ha-064080-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [be01152b9ab18f70b88322e4262f33d332dd8aa951d6262c8ac130261de6479d] <==
	I0617 11:00:30.680027       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0617 11:00:30.814793       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0617 11:00:30.826487       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.134]
	I0617 11:00:30.828692       1 controller.go:615] quota admission added evaluator for: endpoints
	I0617 11:00:30.835116       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0617 11:00:31.034420       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0617 11:00:31.973596       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0617 11:00:31.991917       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0617 11:00:32.152369       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0617 11:00:45.085731       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0617 11:00:45.139164       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0617 11:03:18.972773       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36002: use of closed network connection
	E0617 11:03:19.169073       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36020: use of closed network connection
	E0617 11:03:19.362615       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36040: use of closed network connection
	E0617 11:03:19.563329       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36048: use of closed network connection
	E0617 11:03:19.757426       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36064: use of closed network connection
	E0617 11:03:19.945092       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36080: use of closed network connection
	E0617 11:03:20.145130       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36096: use of closed network connection
	E0617 11:03:20.337642       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36120: use of closed network connection
	E0617 11:03:20.533153       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36130: use of closed network connection
	E0617 11:03:21.051574       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36176: use of closed network connection
	E0617 11:03:21.224412       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36196: use of closed network connection
	E0617 11:03:21.398178       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36202: use of closed network connection
	E0617 11:03:21.582316       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36230: use of closed network connection
	E0617 11:03:21.761691       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36250: use of closed network connection
	
	
	==> kube-controller-manager [ddf5516bbfc1d7ca0c4a0ebc2026888f4c7754891f8a6cfa30b49ea80c4c6a1b] <==
	I0617 11:01:41.607775       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-064080-m02\" does not exist"
	I0617 11:01:41.618447       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-064080-m02" podCIDRs=["10.244.1.0/24"]
	I0617 11:01:44.524682       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-064080-m02"
	I0617 11:02:50.140273       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-064080-m03\" does not exist"
	I0617 11:02:50.167343       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-064080-m03" podCIDRs=["10.244.2.0/24"]
	I0617 11:02:54.904612       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-064080-m03"
	I0617 11:03:15.989614       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="110.828346ms"
	I0617 11:03:16.080792       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.016669ms"
	I0617 11:03:16.370445       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="289.352101ms"
	E0617 11:03:16.370493       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0617 11:03:16.455715       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.172164ms"
	I0617 11:03:16.455828       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.806µs"
	I0617 11:03:17.906035       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.602032ms"
	I0617 11:03:17.906330       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.937µs"
	I0617 11:03:18.386282       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.050248ms"
	I0617 11:03:18.386405       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.51µs"
	I0617 11:03:18.522300       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.759806ms"
	I0617 11:03:18.522420       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.514µs"
	I0617 11:03:51.707736       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-064080-m04\" does not exist"
	I0617 11:03:51.750452       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-064080-m04" podCIDRs=["10.244.3.0/24"]
	I0617 11:03:54.930421       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-064080-m04"
	I0617 11:03:58.795377       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-064080-m04"
	I0617 11:04:59.855604       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-064080-m04"
	I0617 11:04:59.909559       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.648362ms"
	I0617 11:04:59.909950       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="240.965µs"
	
	
	==> kube-proxy [8852bc2fd7b618e61e270006b27e8557aaf8230a9278a60245e25a23732a83eb] <==
	I0617 11:00:45.839974       1 server_linux.go:69] "Using iptables proxy"
	I0617 11:00:45.852134       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.134"]
	I0617 11:00:45.900351       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0617 11:00:45.900415       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0617 11:00:45.900431       1 server_linux.go:165] "Using iptables Proxier"
	I0617 11:00:45.903094       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0617 11:00:45.903378       1 server.go:872] "Version info" version="v1.30.1"
	I0617 11:00:45.903428       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0617 11:00:45.904665       1 config.go:192] "Starting service config controller"
	I0617 11:00:45.904719       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0617 11:00:45.904750       1 config.go:101] "Starting endpoint slice config controller"
	I0617 11:00:45.904754       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0617 11:00:45.905445       1 config.go:319] "Starting node config controller"
	I0617 11:00:45.905486       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0617 11:00:46.004818       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0617 11:00:46.004906       1 shared_informer.go:320] Caches are synced for service config
	I0617 11:00:46.006354       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [60cc5a9cf66217b34591b28809211824808cb7da50dd0c7971be5bd514e3b328] <==
	I0617 11:03:15.919486       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="9d6036a9-d1e4-4f26-b6e9-e2c4fcaedace" pod="default/busybox-fc5497c4f-gf9j7" assumedNode="ha-064080-m02" currentNode="ha-064080-m03"
	E0617 11:03:15.930425       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-gf9j7\": pod busybox-fc5497c4f-gf9j7 is already assigned to node \"ha-064080-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-gf9j7" node="ha-064080-m03"
	E0617 11:03:15.930608       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 9d6036a9-d1e4-4f26-b6e9-e2c4fcaedace(default/busybox-fc5497c4f-gf9j7) was assumed on ha-064080-m03 but assigned to ha-064080-m02" pod="default/busybox-fc5497c4f-gf9j7"
	E0617 11:03:15.930681       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-gf9j7\": pod busybox-fc5497c4f-gf9j7 is already assigned to node \"ha-064080-m02\"" pod="default/busybox-fc5497c4f-gf9j7"
	I0617 11:03:15.930741       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-gf9j7" node="ha-064080-m02"
	E0617 11:03:15.991764       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-wbcxx\": pod busybox-fc5497c4f-wbcxx is already assigned to node \"ha-064080-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-wbcxx" node="ha-064080-m03"
	E0617 11:03:15.991917       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod edfb4a4d-9e05-4cbe-b0d9-f7a8c675ebff(default/busybox-fc5497c4f-wbcxx) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-wbcxx"
	E0617 11:03:15.991940       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-wbcxx\": pod busybox-fc5497c4f-wbcxx is already assigned to node \"ha-064080-m03\"" pod="default/busybox-fc5497c4f-wbcxx"
	I0617 11:03:15.991961       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-wbcxx" node="ha-064080-m03"
	E0617 11:03:15.999490       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-89r9v\": pod busybox-fc5497c4f-89r9v is already assigned to node \"ha-064080\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-89r9v" node="ha-064080"
	E0617 11:03:15.999654       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod f1a8712a-2ef7-4400-98c9-5cee97c0d721(default/busybox-fc5497c4f-89r9v) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-89r9v"
	E0617 11:03:16.001941       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-89r9v\": pod busybox-fc5497c4f-89r9v is already assigned to node \"ha-064080\"" pod="default/busybox-fc5497c4f-89r9v"
	I0617 11:03:16.002324       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-89r9v" node="ha-064080"
	E0617 11:03:16.265788       1 schedule_one.go:1072] "Error occurred" err="Pod default/busybox-fc5497c4f-4trmp is already present in the active queue" pod="default/busybox-fc5497c4f-4trmp"
	E0617 11:03:51.820684       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-bsscf\": pod kube-proxy-bsscf is already assigned to node \"ha-064080-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-bsscf" node="ha-064080-m04"
	E0617 11:03:51.820962       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 75b1d3a6-9828-4735-960f-8a8a2be059fb(kube-system/kube-proxy-bsscf) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-bsscf"
	E0617 11:03:51.821011       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-bsscf\": pod kube-proxy-bsscf is already assigned to node \"ha-064080-m04\"" pod="kube-system/kube-proxy-bsscf"
	I0617 11:03:51.821096       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-bsscf" node="ha-064080-m04"
	E0617 11:03:51.826037       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-pn664\": pod kindnet-pn664 is already assigned to node \"ha-064080-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-pn664" node="ha-064080-m04"
	E0617 11:03:51.826114       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 10fd4a11-f59e-4bed-b0aa-3b7989ff4517(kube-system/kindnet-pn664) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-pn664"
	E0617 11:03:51.826132       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-pn664\": pod kindnet-pn664 is already assigned to node \"ha-064080-m04\"" pod="kube-system/kindnet-pn664"
	I0617 11:03:51.826161       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-pn664" node="ha-064080-m04"
	E0617 11:03:51.875594       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-5vzgd\": pod kindnet-5vzgd is already assigned to node \"ha-064080-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-5vzgd" node="ha-064080-m04"
	E0617 11:03:51.875808       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-5vzgd\": pod kindnet-5vzgd is already assigned to node \"ha-064080-m04\"" pod="kube-system/kindnet-5vzgd"
	I0617 11:03:51.876183       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-5vzgd" node="ha-064080-m04"
	
	
	==> kubelet <==
	Jun 17 11:02:32 ha-064080 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 11:02:32 ha-064080 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 17 11:03:15 ha-064080 kubelet[1371]: I0617 11:03:15.972036    1371 topology_manager.go:215] "Topology Admit Handler" podUID="f1a8712a-2ef7-4400-98c9-5cee97c0d721" podNamespace="default" podName="busybox-fc5497c4f-89r9v"
	Jun 17 11:03:16 ha-064080 kubelet[1371]: I0617 11:03:16.051031    1371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhxxm\" (UniqueName: \"kubernetes.io/projected/f1a8712a-2ef7-4400-98c9-5cee97c0d721-kube-api-access-qhxxm\") pod \"busybox-fc5497c4f-89r9v\" (UID: \"f1a8712a-2ef7-4400-98c9-5cee97c0d721\") " pod="default/busybox-fc5497c4f-89r9v"
	Jun 17 11:03:17 ha-064080 kubelet[1371]: I0617 11:03:17.866824    1371 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-89r9v" podStartSLOduration=1.973490486 podStartE2EDuration="2.866772972s" podCreationTimestamp="2024-06-17 11:03:15 +0000 UTC" firstStartedPulling="2024-06-17 11:03:16.541191008 +0000 UTC m=+164.605251363" lastFinishedPulling="2024-06-17 11:03:17.434473493 +0000 UTC m=+165.498533849" observedRunningTime="2024-06-17 11:03:17.865962484 +0000 UTC m=+165.930022862" watchObservedRunningTime="2024-06-17 11:03:17.866772972 +0000 UTC m=+165.930833345"
	Jun 17 11:03:32 ha-064080 kubelet[1371]: E0617 11:03:32.162673    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 17 11:03:32 ha-064080 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 17 11:03:32 ha-064080 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 17 11:03:32 ha-064080 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 11:03:32 ha-064080 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 17 11:04:32 ha-064080 kubelet[1371]: E0617 11:04:32.163177    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 17 11:04:32 ha-064080 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 17 11:04:32 ha-064080 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 17 11:04:32 ha-064080 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 11:04:32 ha-064080 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 17 11:05:32 ha-064080 kubelet[1371]: E0617 11:05:32.161780    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 17 11:05:32 ha-064080 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 17 11:05:32 ha-064080 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 17 11:05:32 ha-064080 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 11:05:32 ha-064080 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 17 11:06:32 ha-064080 kubelet[1371]: E0617 11:06:32.161185    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 17 11:06:32 ha-064080 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 17 11:06:32 ha-064080 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 17 11:06:32 ha-064080 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 11:06:32 ha-064080 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-064080 -n ha-064080
helpers_test.go:261: (dbg) Run:  kubectl --context ha-064080 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.93s)
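Note on the post-mortem above: the kube-scheduler excerpt only shows bind races ("pod ... is already assigned to node ...", expected when more than one control-plane scheduler instance races for the same pod in this HA profile), and the kubelet excerpt shows the recurring ip6tables KUBE-KUBELET-CANARY failure on ha-064080; neither by itself identifies the assertion that failed. To repeat the harness's own post-mortem checks by hand, the commands are exactly the ones logged by helpers_test.go above (sketch, same profile name and binary path as this run):

    out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-064080 -n ha-064080
    kubectl --context ha-064080 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running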

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (61.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-064080 status -v=7 --alsologtostderr: exit status 3 (3.204817175s)

                                                
                                                
-- stdout --
	ha-064080
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-064080-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-064080-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-064080-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 11:06:44.269716  135301 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:06:44.269957  135301 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:06:44.269966  135301 out.go:304] Setting ErrFile to fd 2...
	I0617 11:06:44.269969  135301 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:06:44.270165  135301 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:06:44.270324  135301 out.go:298] Setting JSON to false
	I0617 11:06:44.270347  135301 mustload.go:65] Loading cluster: ha-064080
	I0617 11:06:44.270402  135301 notify.go:220] Checking for updates...
	I0617 11:06:44.270739  135301 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:06:44.270760  135301 status.go:255] checking status of ha-064080 ...
	I0617 11:06:44.271231  135301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:44.271295  135301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:44.287152  135301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42795
	I0617 11:06:44.287608  135301 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:44.288267  135301 main.go:141] libmachine: Using API Version  1
	I0617 11:06:44.288304  135301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:44.288648  135301 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:44.288835  135301 main.go:141] libmachine: (ha-064080) Calling .GetState
	I0617 11:06:44.290514  135301 status.go:330] ha-064080 host status = "Running" (err=<nil>)
	I0617 11:06:44.290534  135301 host.go:66] Checking if "ha-064080" exists ...
	I0617 11:06:44.290843  135301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:44.290877  135301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:44.305450  135301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45705
	I0617 11:06:44.305869  135301 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:44.306310  135301 main.go:141] libmachine: Using API Version  1
	I0617 11:06:44.306332  135301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:44.306689  135301 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:44.306898  135301 main.go:141] libmachine: (ha-064080) Calling .GetIP
	I0617 11:06:44.309768  135301 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:06:44.310195  135301 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:06:44.310229  135301 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:06:44.310352  135301 host.go:66] Checking if "ha-064080" exists ...
	I0617 11:06:44.310647  135301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:44.310681  135301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:44.325279  135301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36005
	I0617 11:06:44.325814  135301 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:44.326335  135301 main.go:141] libmachine: Using API Version  1
	I0617 11:06:44.326355  135301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:44.326672  135301 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:44.326883  135301 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:06:44.327093  135301 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:06:44.327116  135301 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:06:44.329776  135301 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:06:44.330143  135301 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:06:44.330179  135301 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:06:44.330289  135301 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:06:44.330471  135301 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:06:44.330625  135301 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:06:44.330765  135301 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:06:44.412214  135301 ssh_runner.go:195] Run: systemctl --version
	I0617 11:06:44.418781  135301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:06:44.437592  135301 kubeconfig.go:125] found "ha-064080" server: "https://192.168.39.254:8443"
	I0617 11:06:44.437621  135301 api_server.go:166] Checking apiserver status ...
	I0617 11:06:44.437651  135301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 11:06:44.451574  135301 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1212/cgroup
	W0617 11:06:44.460882  135301 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1212/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0617 11:06:44.460936  135301 ssh_runner.go:195] Run: ls
	I0617 11:06:44.466062  135301 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0617 11:06:44.472079  135301 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0617 11:06:44.472099  135301 status.go:422] ha-064080 apiserver status = Running (err=<nil>)
	I0617 11:06:44.472124  135301 status.go:257] ha-064080 status: &{Name:ha-064080 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0617 11:06:44.472146  135301 status.go:255] checking status of ha-064080-m02 ...
	I0617 11:06:44.472527  135301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:44.472573  135301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:44.487701  135301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44081
	I0617 11:06:44.488061  135301 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:44.488469  135301 main.go:141] libmachine: Using API Version  1
	I0617 11:06:44.488484  135301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:44.488743  135301 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:44.488940  135301 main.go:141] libmachine: (ha-064080-m02) Calling .GetState
	I0617 11:06:44.490352  135301 status.go:330] ha-064080-m02 host status = "Running" (err=<nil>)
	I0617 11:06:44.490366  135301 host.go:66] Checking if "ha-064080-m02" exists ...
	I0617 11:06:44.490652  135301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:44.490692  135301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:44.506197  135301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44747
	I0617 11:06:44.506627  135301 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:44.507101  135301 main.go:141] libmachine: Using API Version  1
	I0617 11:06:44.507124  135301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:44.507425  135301 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:44.507603  135301 main.go:141] libmachine: (ha-064080-m02) Calling .GetIP
	I0617 11:06:44.510281  135301 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:06:44.510694  135301 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:06:44.510717  135301 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:06:44.510859  135301 host.go:66] Checking if "ha-064080-m02" exists ...
	I0617 11:06:44.511304  135301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:44.511354  135301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:44.525489  135301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42335
	I0617 11:06:44.525926  135301 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:44.526363  135301 main.go:141] libmachine: Using API Version  1
	I0617 11:06:44.526392  135301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:44.526722  135301 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:44.526870  135301 main.go:141] libmachine: (ha-064080-m02) Calling .DriverName
	I0617 11:06:44.527033  135301 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:06:44.527055  135301 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHHostname
	I0617 11:06:44.529448  135301 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:06:44.529886  135301 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:06:44.529923  135301 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:06:44.530062  135301 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHPort
	I0617 11:06:44.530226  135301 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:06:44.530365  135301 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHUsername
	I0617 11:06:44.530504  135301 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02/id_rsa Username:docker}
	W0617 11:06:47.083751  135301 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.104:22: connect: no route to host
	W0617 11:06:47.083866  135301 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	E0617 11:06:47.083885  135301 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	I0617 11:06:47.083896  135301 status.go:257] ha-064080-m02 status: &{Name:ha-064080-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0617 11:06:47.083920  135301 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	I0617 11:06:47.083931  135301 status.go:255] checking status of ha-064080-m03 ...
	I0617 11:06:47.084363  135301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:47.084417  135301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:47.099594  135301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42465
	I0617 11:06:47.100013  135301 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:47.100438  135301 main.go:141] libmachine: Using API Version  1
	I0617 11:06:47.100461  135301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:47.100797  135301 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:47.100979  135301 main.go:141] libmachine: (ha-064080-m03) Calling .GetState
	I0617 11:06:47.102437  135301 status.go:330] ha-064080-m03 host status = "Running" (err=<nil>)
	I0617 11:06:47.102455  135301 host.go:66] Checking if "ha-064080-m03" exists ...
	I0617 11:06:47.102728  135301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:47.102763  135301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:47.118074  135301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33485
	I0617 11:06:47.118427  135301 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:47.118898  135301 main.go:141] libmachine: Using API Version  1
	I0617 11:06:47.118919  135301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:47.119261  135301 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:47.119467  135301 main.go:141] libmachine: (ha-064080-m03) Calling .GetIP
	I0617 11:06:47.122151  135301 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:06:47.122581  135301 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:06:47.122605  135301 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:06:47.122768  135301 host.go:66] Checking if "ha-064080-m03" exists ...
	I0617 11:06:47.123084  135301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:47.123134  135301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:47.138316  135301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36881
	I0617 11:06:47.138783  135301 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:47.139260  135301 main.go:141] libmachine: Using API Version  1
	I0617 11:06:47.139276  135301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:47.139544  135301 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:47.139695  135301 main.go:141] libmachine: (ha-064080-m03) Calling .DriverName
	I0617 11:06:47.139898  135301 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:06:47.139920  135301 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHHostname
	I0617 11:06:47.142506  135301 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:06:47.142867  135301 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:06:47.142895  135301 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:06:47.143036  135301 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHPort
	I0617 11:06:47.143220  135301 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:06:47.143378  135301 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHUsername
	I0617 11:06:47.143516  135301 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03/id_rsa Username:docker}
	I0617 11:06:47.224623  135301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:06:47.239611  135301 kubeconfig.go:125] found "ha-064080" server: "https://192.168.39.254:8443"
	I0617 11:06:47.239635  135301 api_server.go:166] Checking apiserver status ...
	I0617 11:06:47.239666  135301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 11:06:47.252952  135301 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup
	W0617 11:06:47.266332  135301 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0617 11:06:47.266379  135301 ssh_runner.go:195] Run: ls
	I0617 11:06:47.270892  135301 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0617 11:06:47.276907  135301 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0617 11:06:47.276931  135301 status.go:422] ha-064080-m03 apiserver status = Running (err=<nil>)
	I0617 11:06:47.276942  135301 status.go:257] ha-064080-m03 status: &{Name:ha-064080-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0617 11:06:47.276976  135301 status.go:255] checking status of ha-064080-m04 ...
	I0617 11:06:47.277270  135301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:47.277312  135301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:47.292704  135301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34827
	I0617 11:06:47.293117  135301 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:47.293568  135301 main.go:141] libmachine: Using API Version  1
	I0617 11:06:47.293591  135301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:47.293927  135301 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:47.294138  135301 main.go:141] libmachine: (ha-064080-m04) Calling .GetState
	I0617 11:06:47.295717  135301 status.go:330] ha-064080-m04 host status = "Running" (err=<nil>)
	I0617 11:06:47.295738  135301 host.go:66] Checking if "ha-064080-m04" exists ...
	I0617 11:06:47.296028  135301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:47.296095  135301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:47.310690  135301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43829
	I0617 11:06:47.311100  135301 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:47.311646  135301 main.go:141] libmachine: Using API Version  1
	I0617 11:06:47.311667  135301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:47.311948  135301 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:47.312094  135301 main.go:141] libmachine: (ha-064080-m04) Calling .GetIP
	I0617 11:06:47.314719  135301 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:06:47.315136  135301 main.go:141] libmachine: (ha-064080-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:60:46", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:03:36 +0000 UTC Type:0 Mac:52:54:00:51:60:46 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-064080-m04 Clientid:01:52:54:00:51:60:46}
	I0617 11:06:47.315166  135301 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined IP address 192.168.39.167 and MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:06:47.315299  135301 host.go:66] Checking if "ha-064080-m04" exists ...
	I0617 11:06:47.315604  135301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:47.315647  135301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:47.330661  135301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46647
	I0617 11:06:47.331091  135301 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:47.331563  135301 main.go:141] libmachine: Using API Version  1
	I0617 11:06:47.331591  135301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:47.331915  135301 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:47.332131  135301 main.go:141] libmachine: (ha-064080-m04) Calling .DriverName
	I0617 11:06:47.332315  135301 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:06:47.332346  135301 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHHostname
	I0617 11:06:47.334659  135301 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:06:47.334960  135301 main.go:141] libmachine: (ha-064080-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:60:46", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:03:36 +0000 UTC Type:0 Mac:52:54:00:51:60:46 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-064080-m04 Clientid:01:52:54:00:51:60:46}
	I0617 11:06:47.334981  135301 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined IP address 192.168.39.167 and MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:06:47.335157  135301 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHPort
	I0617 11:06:47.335311  135301 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHKeyPath
	I0617 11:06:47.335484  135301 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHUsername
	I0617 11:06:47.335647  135301 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m04/id_rsa Username:docker}
	I0617 11:06:47.414880  135301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:06:47.431311  135301 status.go:257] ha-064080-m04 status: &{Name:ha-064080-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
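In the stderr above, ha-064080-m02 is reported as host: Error / kubelet: Nonexistent only because the status probe could not open an SSH session to 192.168.39.104:22 ("no route to host") a few seconds after `node start m02`; the test appears to keep re-running `status` below until the node answers. A rough manual check of the same condition (not part of the harness; `nc` and the 2s timeout are assumptions for illustration, IP taken from the log above):

    # probe TCP reachability of the restarted node's SSH endpoint
    nc -z -w 2 192.168.39.104 22 && echo "m02 ssh reachable" || echo "m02 ssh unreachable"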
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 status -v=7 --alsologtostderr
E0617 11:06:51.169227  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-064080 status -v=7 --alsologtostderr: exit status 3 (4.982715998s)

                                                
                                                
-- stdout --
	ha-064080
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-064080-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-064080-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-064080-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 11:06:48.637203  135402 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:06:48.637505  135402 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:06:48.637520  135402 out.go:304] Setting ErrFile to fd 2...
	I0617 11:06:48.637527  135402 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:06:48.637791  135402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:06:48.638039  135402 out.go:298] Setting JSON to false
	I0617 11:06:48.638065  135402 mustload.go:65] Loading cluster: ha-064080
	I0617 11:06:48.638173  135402 notify.go:220] Checking for updates...
	I0617 11:06:48.638533  135402 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:06:48.638550  135402 status.go:255] checking status of ha-064080 ...
	I0617 11:06:48.638964  135402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:48.639025  135402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:48.654932  135402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46593
	I0617 11:06:48.655307  135402 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:48.655977  135402 main.go:141] libmachine: Using API Version  1
	I0617 11:06:48.656004  135402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:48.656327  135402 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:48.656505  135402 main.go:141] libmachine: (ha-064080) Calling .GetState
	I0617 11:06:48.657860  135402 status.go:330] ha-064080 host status = "Running" (err=<nil>)
	I0617 11:06:48.657881  135402 host.go:66] Checking if "ha-064080" exists ...
	I0617 11:06:48.658251  135402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:48.658293  135402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:48.673184  135402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38131
	I0617 11:06:48.673618  135402 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:48.674047  135402 main.go:141] libmachine: Using API Version  1
	I0617 11:06:48.674066  135402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:48.674382  135402 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:48.674649  135402 main.go:141] libmachine: (ha-064080) Calling .GetIP
	I0617 11:06:48.677400  135402 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:06:48.677907  135402 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:06:48.677936  135402 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:06:48.678069  135402 host.go:66] Checking if "ha-064080" exists ...
	I0617 11:06:48.678355  135402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:48.678389  135402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:48.693284  135402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42527
	I0617 11:06:48.693683  135402 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:48.694115  135402 main.go:141] libmachine: Using API Version  1
	I0617 11:06:48.694137  135402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:48.694429  135402 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:48.694639  135402 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:06:48.694821  135402 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:06:48.694858  135402 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:06:48.697355  135402 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:06:48.697741  135402 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:06:48.697771  135402 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:06:48.697892  135402 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:06:48.698072  135402 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:06:48.698223  135402 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:06:48.698371  135402 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:06:48.780504  135402 ssh_runner.go:195] Run: systemctl --version
	I0617 11:06:48.786787  135402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:06:48.802841  135402 kubeconfig.go:125] found "ha-064080" server: "https://192.168.39.254:8443"
	I0617 11:06:48.802871  135402 api_server.go:166] Checking apiserver status ...
	I0617 11:06:48.802905  135402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 11:06:48.819743  135402 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1212/cgroup
	W0617 11:06:48.828993  135402 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1212/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0617 11:06:48.829049  135402 ssh_runner.go:195] Run: ls
	I0617 11:06:48.833755  135402 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0617 11:06:48.838994  135402 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0617 11:06:48.839020  135402 status.go:422] ha-064080 apiserver status = Running (err=<nil>)
	I0617 11:06:48.839031  135402 status.go:257] ha-064080 status: &{Name:ha-064080 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0617 11:06:48.839048  135402 status.go:255] checking status of ha-064080-m02 ...
	I0617 11:06:48.839442  135402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:48.839496  135402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:48.854397  135402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38269
	I0617 11:06:48.854886  135402 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:48.855307  135402 main.go:141] libmachine: Using API Version  1
	I0617 11:06:48.855325  135402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:48.855677  135402 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:48.855854  135402 main.go:141] libmachine: (ha-064080-m02) Calling .GetState
	I0617 11:06:48.857578  135402 status.go:330] ha-064080-m02 host status = "Running" (err=<nil>)
	I0617 11:06:48.857600  135402 host.go:66] Checking if "ha-064080-m02" exists ...
	I0617 11:06:48.857944  135402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:48.857987  135402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:48.873345  135402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45625
	I0617 11:06:48.873771  135402 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:48.874242  135402 main.go:141] libmachine: Using API Version  1
	I0617 11:06:48.874266  135402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:48.874568  135402 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:48.874795  135402 main.go:141] libmachine: (ha-064080-m02) Calling .GetIP
	I0617 11:06:48.877370  135402 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:06:48.877811  135402 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:06:48.877847  135402 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:06:48.877988  135402 host.go:66] Checking if "ha-064080-m02" exists ...
	I0617 11:06:48.878298  135402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:48.878342  135402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:48.893392  135402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38429
	I0617 11:06:48.893900  135402 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:48.894393  135402 main.go:141] libmachine: Using API Version  1
	I0617 11:06:48.894418  135402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:48.894736  135402 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:48.894908  135402 main.go:141] libmachine: (ha-064080-m02) Calling .DriverName
	I0617 11:06:48.895089  135402 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:06:48.895115  135402 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHHostname
	I0617 11:06:48.897946  135402 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:06:48.898357  135402 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:06:48.898389  135402 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:06:48.898596  135402 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHPort
	I0617 11:06:48.898774  135402 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:06:48.898938  135402 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHUsername
	I0617 11:06:48.899114  135402 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02/id_rsa Username:docker}
	W0617 11:06:50.151809  135402 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.104:22: connect: no route to host
	I0617 11:06:50.151858  135402 retry.go:31] will retry after 248.157441ms: dial tcp 192.168.39.104:22: connect: no route to host
	W0617 11:06:53.227797  135402 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.104:22: connect: no route to host
	W0617 11:06:53.227901  135402 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	E0617 11:06:53.227925  135402 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	I0617 11:06:53.227937  135402 status.go:257] ha-064080-m02 status: &{Name:ha-064080-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0617 11:06:53.227964  135402 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	I0617 11:06:53.227986  135402 status.go:255] checking status of ha-064080-m03 ...
	I0617 11:06:53.228318  135402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:53.228372  135402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:53.242998  135402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34019
	I0617 11:06:53.243432  135402 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:53.243893  135402 main.go:141] libmachine: Using API Version  1
	I0617 11:06:53.243916  135402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:53.244305  135402 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:53.244508  135402 main.go:141] libmachine: (ha-064080-m03) Calling .GetState
	I0617 11:06:53.245994  135402 status.go:330] ha-064080-m03 host status = "Running" (err=<nil>)
	I0617 11:06:53.246013  135402 host.go:66] Checking if "ha-064080-m03" exists ...
	I0617 11:06:53.246297  135402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:53.246334  135402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:53.260513  135402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41667
	I0617 11:06:53.260904  135402 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:53.261310  135402 main.go:141] libmachine: Using API Version  1
	I0617 11:06:53.261335  135402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:53.261668  135402 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:53.261881  135402 main.go:141] libmachine: (ha-064080-m03) Calling .GetIP
	I0617 11:06:53.264513  135402 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:06:53.264906  135402 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:06:53.264931  135402 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:06:53.265118  135402 host.go:66] Checking if "ha-064080-m03" exists ...
	I0617 11:06:53.265399  135402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:53.265436  135402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:53.279278  135402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45003
	I0617 11:06:53.279715  135402 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:53.280188  135402 main.go:141] libmachine: Using API Version  1
	I0617 11:06:53.280206  135402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:53.280485  135402 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:53.280683  135402 main.go:141] libmachine: (ha-064080-m03) Calling .DriverName
	I0617 11:06:53.280861  135402 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:06:53.280880  135402 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHHostname
	I0617 11:06:53.283580  135402 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:06:53.284017  135402 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:06:53.284054  135402 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:06:53.284151  135402 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHPort
	I0617 11:06:53.284311  135402 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:06:53.284465  135402 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHUsername
	I0617 11:06:53.284592  135402 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03/id_rsa Username:docker}
	I0617 11:06:53.362840  135402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:06:53.380034  135402 kubeconfig.go:125] found "ha-064080" server: "https://192.168.39.254:8443"
	I0617 11:06:53.380061  135402 api_server.go:166] Checking apiserver status ...
	I0617 11:06:53.380099  135402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 11:06:53.394163  135402 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup
	W0617 11:06:53.404146  135402 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0617 11:06:53.404193  135402 ssh_runner.go:195] Run: ls
	I0617 11:06:53.408916  135402 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0617 11:06:53.416405  135402 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0617 11:06:53.416422  135402 status.go:422] ha-064080-m03 apiserver status = Running (err=<nil>)
	I0617 11:06:53.416430  135402 status.go:257] ha-064080-m03 status: &{Name:ha-064080-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0617 11:06:53.416445  135402 status.go:255] checking status of ha-064080-m04 ...
	I0617 11:06:53.416771  135402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:53.416837  135402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:53.432302  135402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33509
	I0617 11:06:53.432701  135402 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:53.433150  135402 main.go:141] libmachine: Using API Version  1
	I0617 11:06:53.433169  135402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:53.433489  135402 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:53.433689  135402 main.go:141] libmachine: (ha-064080-m04) Calling .GetState
	I0617 11:06:53.435175  135402 status.go:330] ha-064080-m04 host status = "Running" (err=<nil>)
	I0617 11:06:53.435194  135402 host.go:66] Checking if "ha-064080-m04" exists ...
	I0617 11:06:53.435452  135402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:53.435517  135402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:53.452170  135402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43471
	I0617 11:06:53.452658  135402 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:53.453112  135402 main.go:141] libmachine: Using API Version  1
	I0617 11:06:53.453130  135402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:53.453428  135402 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:53.453606  135402 main.go:141] libmachine: (ha-064080-m04) Calling .GetIP
	I0617 11:06:53.456065  135402 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:06:53.456434  135402 main.go:141] libmachine: (ha-064080-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:60:46", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:03:36 +0000 UTC Type:0 Mac:52:54:00:51:60:46 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-064080-m04 Clientid:01:52:54:00:51:60:46}
	I0617 11:06:53.456466  135402 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined IP address 192.168.39.167 and MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:06:53.456558  135402 host.go:66] Checking if "ha-064080-m04" exists ...
	I0617 11:06:53.456873  135402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:53.456906  135402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:53.470710  135402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36029
	I0617 11:06:53.471046  135402 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:53.471511  135402 main.go:141] libmachine: Using API Version  1
	I0617 11:06:53.471534  135402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:53.471815  135402 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:53.472008  135402 main.go:141] libmachine: (ha-064080-m04) Calling .DriverName
	I0617 11:06:53.472206  135402 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:06:53.472225  135402 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHHostname
	I0617 11:06:53.474600  135402 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:06:53.474924  135402 main.go:141] libmachine: (ha-064080-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:60:46", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:03:36 +0000 UTC Type:0 Mac:52:54:00:51:60:46 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-064080-m04 Clientid:01:52:54:00:51:60:46}
	I0617 11:06:53.474952  135402 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined IP address 192.168.39.167 and MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:06:53.475076  135402 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHPort
	I0617 11:06:53.475263  135402 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHKeyPath
	I0617 11:06:53.475399  135402 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHUsername
	I0617 11:06:53.475568  135402 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m04/id_rsa Username:docker}
	I0617 11:06:53.558431  135402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:06:53.572829  135402 status.go:257] ha-064080-m04 status: &{Name:ha-064080-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
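The stderr block above repeats the same probe sequence for every node: launch the libmachine plugin, SSH into the VM, read /var usage with df, ask systemd whether the kubelet unit is active, and (for control-plane nodes) hit the apiserver /healthz endpoint behind the VIP. As a rough illustration only, the standalone Go sketch below reproduces those three probes by hand; the node IP, SSH user, key path and VIP URL are taken from the log but hard-coded here as assumptions, and this is not minikube's actual status code.

// status_probe.go - minimal sketch (not minikube's implementation) of the per-node
// checks visible in the log: SSH in, check /var usage, check kubelet, check /healthz.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// runOverSSH runs a single command on the node and returns its combined output.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		Timeout:         5 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	addr := "192.168.39.134:22"               // control-plane node IP from the log
	key := "/path/to/machines/ha-064080/id_rsa" // placeholder; use your profile's key

	// Storage check: same command the log shows.
	usage, err := runOverSSH(addr, "docker", key, `df -h /var | awk 'NR==2{print $5}'`)
	fmt.Printf("/var usage: %q err: %v\n", usage, err)

	// Kubelet check: exit status 0 means the unit is active.
	_, err = runOverSSH(addr, "docker", key, "sudo systemctl is-active --quiet service kubelet")
	fmt.Printf("kubelet active: %v\n", err == nil)

	// Apiserver check against the VIP; cert verification skipped for the sketch.
	httpClient := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := httpClient.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
}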
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-064080 status -v=7 --alsologtostderr: exit status 3 (4.332153602s)

                                                
                                                
-- stdout --
	ha-064080
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-064080-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-064080-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-064080-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 11:06:55.620900  135502 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:06:55.621021  135502 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:06:55.621032  135502 out.go:304] Setting ErrFile to fd 2...
	I0617 11:06:55.621037  135502 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:06:55.621212  135502 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:06:55.621404  135502 out.go:298] Setting JSON to false
	I0617 11:06:55.621430  135502 mustload.go:65] Loading cluster: ha-064080
	I0617 11:06:55.621463  135502 notify.go:220] Checking for updates...
	I0617 11:06:55.621822  135502 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:06:55.621840  135502 status.go:255] checking status of ha-064080 ...
	I0617 11:06:55.622258  135502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:55.622322  135502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:55.638237  135502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40033
	I0617 11:06:55.638633  135502 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:55.639119  135502 main.go:141] libmachine: Using API Version  1
	I0617 11:06:55.639139  135502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:55.639510  135502 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:55.639684  135502 main.go:141] libmachine: (ha-064080) Calling .GetState
	I0617 11:06:55.641168  135502 status.go:330] ha-064080 host status = "Running" (err=<nil>)
	I0617 11:06:55.641187  135502 host.go:66] Checking if "ha-064080" exists ...
	I0617 11:06:55.641595  135502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:55.641641  135502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:55.655911  135502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38575
	I0617 11:06:55.656263  135502 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:55.656654  135502 main.go:141] libmachine: Using API Version  1
	I0617 11:06:55.656676  135502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:55.656987  135502 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:55.657150  135502 main.go:141] libmachine: (ha-064080) Calling .GetIP
	I0617 11:06:55.659862  135502 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:06:55.660271  135502 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:06:55.660305  135502 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:06:55.660368  135502 host.go:66] Checking if "ha-064080" exists ...
	I0617 11:06:55.660647  135502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:55.660680  135502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:55.674803  135502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34551
	I0617 11:06:55.675209  135502 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:55.675576  135502 main.go:141] libmachine: Using API Version  1
	I0617 11:06:55.675593  135502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:55.675922  135502 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:55.676105  135502 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:06:55.676290  135502 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:06:55.676324  135502 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:06:55.678978  135502 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:06:55.679356  135502 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:06:55.679392  135502 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:06:55.679567  135502 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:06:55.679735  135502 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:06:55.679880  135502 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:06:55.680085  135502 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:06:55.754557  135502 ssh_runner.go:195] Run: systemctl --version
	I0617 11:06:55.761830  135502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:06:55.778718  135502 kubeconfig.go:125] found "ha-064080" server: "https://192.168.39.254:8443"
	I0617 11:06:55.778748  135502 api_server.go:166] Checking apiserver status ...
	I0617 11:06:55.778778  135502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 11:06:55.793209  135502 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1212/cgroup
	W0617 11:06:55.802767  135502 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1212/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0617 11:06:55.802824  135502 ssh_runner.go:195] Run: ls
	I0617 11:06:55.807186  135502 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0617 11:06:55.813751  135502 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0617 11:06:55.813770  135502 status.go:422] ha-064080 apiserver status = Running (err=<nil>)
	I0617 11:06:55.813779  135502 status.go:257] ha-064080 status: &{Name:ha-064080 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0617 11:06:55.813800  135502 status.go:255] checking status of ha-064080-m02 ...
	I0617 11:06:55.814094  135502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:55.814126  135502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:55.828772  135502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39105
	I0617 11:06:55.829193  135502 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:55.829777  135502 main.go:141] libmachine: Using API Version  1
	I0617 11:06:55.829808  135502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:55.830183  135502 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:55.830390  135502 main.go:141] libmachine: (ha-064080-m02) Calling .GetState
	I0617 11:06:55.831831  135502 status.go:330] ha-064080-m02 host status = "Running" (err=<nil>)
	I0617 11:06:55.831851  135502 host.go:66] Checking if "ha-064080-m02" exists ...
	I0617 11:06:55.832177  135502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:55.832219  135502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:55.847014  135502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41707
	I0617 11:06:55.847423  135502 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:55.847879  135502 main.go:141] libmachine: Using API Version  1
	I0617 11:06:55.847894  135502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:55.848216  135502 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:55.848413  135502 main.go:141] libmachine: (ha-064080-m02) Calling .GetIP
	I0617 11:06:55.850795  135502 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:06:55.851172  135502 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:06:55.851199  135502 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:06:55.851317  135502 host.go:66] Checking if "ha-064080-m02" exists ...
	I0617 11:06:55.851656  135502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:55.851698  135502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:55.865843  135502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46521
	I0617 11:06:55.866257  135502 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:55.866723  135502 main.go:141] libmachine: Using API Version  1
	I0617 11:06:55.866744  135502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:55.867066  135502 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:55.867269  135502 main.go:141] libmachine: (ha-064080-m02) Calling .DriverName
	I0617 11:06:55.867482  135502 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:06:55.867506  135502 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHHostname
	I0617 11:06:55.869851  135502 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:06:55.870245  135502 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:06:55.870270  135502 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:06:55.870415  135502 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHPort
	I0617 11:06:55.870557  135502 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:06:55.870709  135502 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHUsername
	I0617 11:06:55.870817  135502 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02/id_rsa Username:docker}
	W0617 11:06:56.295746  135502 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.104:22: connect: no route to host
	I0617 11:06:56.295800  135502 retry.go:31] will retry after 199.014285ms: dial tcp 192.168.39.104:22: connect: no route to host
	W0617 11:06:59.559690  135502 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.104:22: connect: no route to host
	W0617 11:06:59.559791  135502 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	E0617 11:06:59.559814  135502 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	I0617 11:06:59.559823  135502 status.go:257] ha-064080-m02 status: &{Name:ha-064080-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0617 11:06:59.559850  135502 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	I0617 11:06:59.559857  135502 status.go:255] checking status of ha-064080-m03 ...
	I0617 11:06:59.560178  135502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:59.560225  135502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:59.576312  135502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46757
	I0617 11:06:59.576831  135502 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:59.577400  135502 main.go:141] libmachine: Using API Version  1
	I0617 11:06:59.577427  135502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:59.577757  135502 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:59.577958  135502 main.go:141] libmachine: (ha-064080-m03) Calling .GetState
	I0617 11:06:59.579426  135502 status.go:330] ha-064080-m03 host status = "Running" (err=<nil>)
	I0617 11:06:59.579444  135502 host.go:66] Checking if "ha-064080-m03" exists ...
	I0617 11:06:59.579746  135502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:59.579790  135502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:59.593866  135502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33895
	I0617 11:06:59.594267  135502 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:59.594656  135502 main.go:141] libmachine: Using API Version  1
	I0617 11:06:59.594674  135502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:59.594977  135502 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:59.595135  135502 main.go:141] libmachine: (ha-064080-m03) Calling .GetIP
	I0617 11:06:59.597704  135502 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:06:59.598169  135502 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:06:59.598195  135502 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:06:59.598338  135502 host.go:66] Checking if "ha-064080-m03" exists ...
	I0617 11:06:59.598627  135502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:59.598661  135502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:59.612462  135502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34053
	I0617 11:06:59.612797  135502 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:59.613235  135502 main.go:141] libmachine: Using API Version  1
	I0617 11:06:59.613261  135502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:59.613565  135502 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:59.613749  135502 main.go:141] libmachine: (ha-064080-m03) Calling .DriverName
	I0617 11:06:59.613942  135502 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:06:59.613969  135502 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHHostname
	I0617 11:06:59.616355  135502 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:06:59.616783  135502 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:06:59.616820  135502 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:06:59.616939  135502 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHPort
	I0617 11:06:59.617127  135502 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:06:59.617283  135502 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHUsername
	I0617 11:06:59.617421  135502 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03/id_rsa Username:docker}
	I0617 11:06:59.698778  135502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:06:59.714995  135502 kubeconfig.go:125] found "ha-064080" server: "https://192.168.39.254:8443"
	I0617 11:06:59.715044  135502 api_server.go:166] Checking apiserver status ...
	I0617 11:06:59.715088  135502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 11:06:59.727813  135502 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup
	W0617 11:06:59.737536  135502 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0617 11:06:59.737572  135502 ssh_runner.go:195] Run: ls
	I0617 11:06:59.742031  135502 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0617 11:06:59.752627  135502 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0617 11:06:59.752654  135502 status.go:422] ha-064080-m03 apiserver status = Running (err=<nil>)
	I0617 11:06:59.752665  135502 status.go:257] ha-064080-m03 status: &{Name:ha-064080-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0617 11:06:59.752685  135502 status.go:255] checking status of ha-064080-m04 ...
	I0617 11:06:59.753105  135502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:59.753150  135502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:59.769009  135502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38937
	I0617 11:06:59.769476  135502 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:59.769971  135502 main.go:141] libmachine: Using API Version  1
	I0617 11:06:59.769992  135502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:59.770316  135502 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:59.770482  135502 main.go:141] libmachine: (ha-064080-m04) Calling .GetState
	I0617 11:06:59.772082  135502 status.go:330] ha-064080-m04 host status = "Running" (err=<nil>)
	I0617 11:06:59.772100  135502 host.go:66] Checking if "ha-064080-m04" exists ...
	I0617 11:06:59.772380  135502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:59.772419  135502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:59.786540  135502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43751
	I0617 11:06:59.786960  135502 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:59.787480  135502 main.go:141] libmachine: Using API Version  1
	I0617 11:06:59.787505  135502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:59.787772  135502 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:59.787979  135502 main.go:141] libmachine: (ha-064080-m04) Calling .GetIP
	I0617 11:06:59.790471  135502 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:06:59.790847  135502 main.go:141] libmachine: (ha-064080-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:60:46", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:03:36 +0000 UTC Type:0 Mac:52:54:00:51:60:46 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-064080-m04 Clientid:01:52:54:00:51:60:46}
	I0617 11:06:59.790879  135502 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined IP address 192.168.39.167 and MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:06:59.790993  135502 host.go:66] Checking if "ha-064080-m04" exists ...
	I0617 11:06:59.791377  135502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:06:59.791429  135502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:06:59.805263  135502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45747
	I0617 11:06:59.805700  135502 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:06:59.806194  135502 main.go:141] libmachine: Using API Version  1
	I0617 11:06:59.806216  135502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:06:59.806538  135502 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:06:59.806741  135502 main.go:141] libmachine: (ha-064080-m04) Calling .DriverName
	I0617 11:06:59.806969  135502 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:06:59.806989  135502 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHHostname
	I0617 11:06:59.809394  135502 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:06:59.809795  135502 main.go:141] libmachine: (ha-064080-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:60:46", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:03:36 +0000 UTC Type:0 Mac:52:54:00:51:60:46 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-064080-m04 Clientid:01:52:54:00:51:60:46}
	I0617 11:06:59.809823  135502 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined IP address 192.168.39.167 and MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:06:59.809950  135502 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHPort
	I0617 11:06:59.810117  135502 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHKeyPath
	I0617 11:06:59.810256  135502 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHUsername
	I0617 11:06:59.810420  135502 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m04/id_rsa Username:docker}
	I0617 11:06:59.894757  135502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:06:59.909297  135502 status.go:257] ha-064080-m04 status: &{Name:ha-064080-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
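For ha-064080-m02 the SSH dial fails with "no route to host"; the log shows one short backoff and retry before the node is reported as Host:Error with Kubelet and APIServer set to Nonexistent. Below is a minimal sketch of that kind of bounded dial-with-retry loop, with made-up attempt counts and backoff values; it illustrates the pattern only and is not minikube's retry.go.

// dial_retry.go - illustrative only: bounded TCP dial retries with backoff,
// mirroring the "dial failure (will retry)" lines in the log.
package main

import (
	"fmt"
	"net"
	"time"
)

func dialWithRetry(addr string, attempts int, backoff time.Duration) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		fmt.Printf("dial failure (will retry): %v; retrying in %v\n", err, backoff)
		time.Sleep(backoff)
		backoff *= 2 // simple exponential backoff between attempts
	}
	return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
}

func main() {
	// An unreachable SSH endpoint, like ha-064080-m02 in the log above.
	if _, err := dialWithRetry("192.168.39.104:22", 2, 200*time.Millisecond); err != nil {
		// A status probe would now report Host:Error / Kubelet:Nonexistent for this node.
		fmt.Println("status error:", err)
	}
}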
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-064080 status -v=7 --alsologtostderr: exit status 3 (4.824807306s)

                                                
                                                
-- stdout --
	ha-064080
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-064080-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-064080-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-064080-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 11:07:01.268410  135602 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:07:01.268541  135602 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:07:01.268552  135602 out.go:304] Setting ErrFile to fd 2...
	I0617 11:07:01.268558  135602 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:07:01.268742  135602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:07:01.269279  135602 out.go:298] Setting JSON to false
	I0617 11:07:01.269395  135602 mustload.go:65] Loading cluster: ha-064080
	I0617 11:07:01.269632  135602 notify.go:220] Checking for updates...
	I0617 11:07:01.270463  135602 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:07:01.270489  135602 status.go:255] checking status of ha-064080 ...
	I0617 11:07:01.270943  135602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:01.271019  135602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:01.286100  135602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38309
	I0617 11:07:01.286487  135602 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:01.287013  135602 main.go:141] libmachine: Using API Version  1
	I0617 11:07:01.287034  135602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:01.287372  135602 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:01.287609  135602 main.go:141] libmachine: (ha-064080) Calling .GetState
	I0617 11:07:01.289375  135602 status.go:330] ha-064080 host status = "Running" (err=<nil>)
	I0617 11:07:01.289390  135602 host.go:66] Checking if "ha-064080" exists ...
	I0617 11:07:01.289687  135602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:01.289727  135602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:01.304802  135602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42619
	I0617 11:07:01.305240  135602 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:01.305690  135602 main.go:141] libmachine: Using API Version  1
	I0617 11:07:01.305715  135602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:01.306006  135602 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:01.306192  135602 main.go:141] libmachine: (ha-064080) Calling .GetIP
	I0617 11:07:01.308933  135602 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:07:01.309321  135602 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:07:01.309350  135602 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:07:01.309472  135602 host.go:66] Checking if "ha-064080" exists ...
	I0617 11:07:01.309746  135602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:01.309785  135602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:01.325270  135602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46215
	I0617 11:07:01.325658  135602 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:01.326179  135602 main.go:141] libmachine: Using API Version  1
	I0617 11:07:01.326202  135602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:01.326521  135602 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:01.326721  135602 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:07:01.326960  135602 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:07:01.326992  135602 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:07:01.329628  135602 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:07:01.330031  135602 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:07:01.330058  135602 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:07:01.330233  135602 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:07:01.330385  135602 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:07:01.330515  135602 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:07:01.330667  135602 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:07:01.411665  135602 ssh_runner.go:195] Run: systemctl --version
	I0617 11:07:01.417898  135602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:07:01.433619  135602 kubeconfig.go:125] found "ha-064080" server: "https://192.168.39.254:8443"
	I0617 11:07:01.433647  135602 api_server.go:166] Checking apiserver status ...
	I0617 11:07:01.433677  135602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 11:07:01.447605  135602 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1212/cgroup
	W0617 11:07:01.456604  135602 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1212/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0617 11:07:01.456654  135602 ssh_runner.go:195] Run: ls
	I0617 11:07:01.461000  135602 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0617 11:07:01.465720  135602 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0617 11:07:01.465742  135602 status.go:422] ha-064080 apiserver status = Running (err=<nil>)
	I0617 11:07:01.465754  135602 status.go:257] ha-064080 status: &{Name:ha-064080 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0617 11:07:01.465776  135602 status.go:255] checking status of ha-064080-m02 ...
	I0617 11:07:01.466163  135602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:01.466213  135602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:01.481160  135602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40971
	I0617 11:07:01.481637  135602 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:01.482155  135602 main.go:141] libmachine: Using API Version  1
	I0617 11:07:01.482179  135602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:01.482486  135602 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:01.482665  135602 main.go:141] libmachine: (ha-064080-m02) Calling .GetState
	I0617 11:07:01.484217  135602 status.go:330] ha-064080-m02 host status = "Running" (err=<nil>)
	I0617 11:07:01.484233  135602 host.go:66] Checking if "ha-064080-m02" exists ...
	I0617 11:07:01.484618  135602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:01.484665  135602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:01.500243  135602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36201
	I0617 11:07:01.500600  135602 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:01.501084  135602 main.go:141] libmachine: Using API Version  1
	I0617 11:07:01.501105  135602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:01.501426  135602 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:01.501617  135602 main.go:141] libmachine: (ha-064080-m02) Calling .GetIP
	I0617 11:07:01.504147  135602 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:07:01.504537  135602 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:07:01.504565  135602 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:07:01.504709  135602 host.go:66] Checking if "ha-064080-m02" exists ...
	I0617 11:07:01.505022  135602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:01.505065  135602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:01.519090  135602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39633
	I0617 11:07:01.519532  135602 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:01.520002  135602 main.go:141] libmachine: Using API Version  1
	I0617 11:07:01.520026  135602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:01.520371  135602 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:01.520570  135602 main.go:141] libmachine: (ha-064080-m02) Calling .DriverName
	I0617 11:07:01.520753  135602 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:07:01.520775  135602 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHHostname
	I0617 11:07:01.523089  135602 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:07:01.523453  135602 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:07:01.523487  135602 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:07:01.523619  135602 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHPort
	I0617 11:07:01.523762  135602 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:07:01.523892  135602 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHUsername
	I0617 11:07:01.524002  135602 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02/id_rsa Username:docker}
	W0617 11:07:02.631740  135602 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.104:22: connect: no route to host
	I0617 11:07:02.631792  135602 retry.go:31] will retry after 252.185749ms: dial tcp 192.168.39.104:22: connect: no route to host
	W0617 11:07:05.703699  135602 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.104:22: connect: no route to host
	W0617 11:07:05.703804  135602 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	E0617 11:07:05.703829  135602 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	I0617 11:07:05.703858  135602 status.go:257] ha-064080-m02 status: &{Name:ha-064080-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0617 11:07:05.703888  135602 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	I0617 11:07:05.703950  135602 status.go:255] checking status of ha-064080-m03 ...
	I0617 11:07:05.704299  135602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:05.704350  135602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:05.718840  135602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43025
	I0617 11:07:05.719294  135602 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:05.719824  135602 main.go:141] libmachine: Using API Version  1
	I0617 11:07:05.719855  135602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:05.720230  135602 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:05.720422  135602 main.go:141] libmachine: (ha-064080-m03) Calling .GetState
	I0617 11:07:05.722124  135602 status.go:330] ha-064080-m03 host status = "Running" (err=<nil>)
	I0617 11:07:05.722145  135602 host.go:66] Checking if "ha-064080-m03" exists ...
	I0617 11:07:05.722491  135602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:05.722560  135602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:05.736947  135602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33619
	I0617 11:07:05.737379  135602 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:05.737787  135602 main.go:141] libmachine: Using API Version  1
	I0617 11:07:05.737809  135602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:05.738158  135602 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:05.738374  135602 main.go:141] libmachine: (ha-064080-m03) Calling .GetIP
	I0617 11:07:05.741097  135602 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:07:05.741512  135602 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:07:05.741542  135602 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:07:05.741639  135602 host.go:66] Checking if "ha-064080-m03" exists ...
	I0617 11:07:05.742060  135602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:05.742108  135602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:05.757121  135602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41063
	I0617 11:07:05.757520  135602 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:05.757973  135602 main.go:141] libmachine: Using API Version  1
	I0617 11:07:05.757991  135602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:05.758321  135602 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:05.758487  135602 main.go:141] libmachine: (ha-064080-m03) Calling .DriverName
	I0617 11:07:05.758674  135602 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:07:05.758694  135602 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHHostname
	I0617 11:07:05.761330  135602 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:07:05.761750  135602 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:07:05.761781  135602 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:07:05.761913  135602 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHPort
	I0617 11:07:05.762068  135602 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:07:05.762245  135602 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHUsername
	I0617 11:07:05.762401  135602 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03/id_rsa Username:docker}
	I0617 11:07:05.847303  135602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:07:05.862165  135602 kubeconfig.go:125] found "ha-064080" server: "https://192.168.39.254:8443"
	I0617 11:07:05.862190  135602 api_server.go:166] Checking apiserver status ...
	I0617 11:07:05.862223  135602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 11:07:05.876979  135602 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup
	W0617 11:07:05.886877  135602 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0617 11:07:05.886920  135602 ssh_runner.go:195] Run: ls
	I0617 11:07:05.891165  135602 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0617 11:07:05.895730  135602 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0617 11:07:05.895753  135602 status.go:422] ha-064080-m03 apiserver status = Running (err=<nil>)
	I0617 11:07:05.895762  135602 status.go:257] ha-064080-m03 status: &{Name:ha-064080-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0617 11:07:05.895776  135602 status.go:255] checking status of ha-064080-m04 ...
	I0617 11:07:05.896053  135602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:05.896124  135602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:05.911498  135602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44703
	I0617 11:07:05.911923  135602 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:05.912387  135602 main.go:141] libmachine: Using API Version  1
	I0617 11:07:05.912407  135602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:05.912699  135602 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:05.912915  135602 main.go:141] libmachine: (ha-064080-m04) Calling .GetState
	I0617 11:07:05.914531  135602 status.go:330] ha-064080-m04 host status = "Running" (err=<nil>)
	I0617 11:07:05.914549  135602 host.go:66] Checking if "ha-064080-m04" exists ...
	I0617 11:07:05.914825  135602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:05.914857  135602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:05.929219  135602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33995
	I0617 11:07:05.929605  135602 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:05.930032  135602 main.go:141] libmachine: Using API Version  1
	I0617 11:07:05.930053  135602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:05.930317  135602 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:05.930479  135602 main.go:141] libmachine: (ha-064080-m04) Calling .GetIP
	I0617 11:07:05.933010  135602 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:07:05.933417  135602 main.go:141] libmachine: (ha-064080-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:60:46", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:03:36 +0000 UTC Type:0 Mac:52:54:00:51:60:46 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-064080-m04 Clientid:01:52:54:00:51:60:46}
	I0617 11:07:05.933447  135602 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined IP address 192.168.39.167 and MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:07:05.933530  135602 host.go:66] Checking if "ha-064080-m04" exists ...
	I0617 11:07:05.933801  135602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:05.933846  135602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:05.948920  135602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45413
	I0617 11:07:05.949307  135602 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:05.949740  135602 main.go:141] libmachine: Using API Version  1
	I0617 11:07:05.949763  135602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:05.950045  135602 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:05.950244  135602 main.go:141] libmachine: (ha-064080-m04) Calling .DriverName
	I0617 11:07:05.950430  135602 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:07:05.950453  135602 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHHostname
	I0617 11:07:05.952881  135602 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:07:05.953283  135602 main.go:141] libmachine: (ha-064080-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:60:46", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:03:36 +0000 UTC Type:0 Mac:52:54:00:51:60:46 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-064080-m04 Clientid:01:52:54:00:51:60:46}
	I0617 11:07:05.953304  135602 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined IP address 192.168.39.167 and MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:07:05.953443  135602 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHPort
	I0617 11:07:05.953600  135602 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHKeyPath
	I0617 11:07:05.953761  135602 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHUsername
	I0617 11:07:05.953915  135602 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m04/id_rsa Username:docker}
	I0617 11:07:06.034458  135602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:07:06.049267  135602 status.go:257] ha-064080-m04 status: &{Name:ha-064080-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-064080 status -v=7 --alsologtostderr: exit status 3 (3.739935248s)

                                                
                                                
-- stdout --
	ha-064080
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-064080-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-064080-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-064080-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 11:07:09.120306  135718 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:07:09.120403  135718 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:07:09.120411  135718 out.go:304] Setting ErrFile to fd 2...
	I0617 11:07:09.120415  135718 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:07:09.120653  135718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:07:09.120823  135718 out.go:298] Setting JSON to false
	I0617 11:07:09.120845  135718 mustload.go:65] Loading cluster: ha-064080
	I0617 11:07:09.120914  135718 notify.go:220] Checking for updates...
	I0617 11:07:09.121251  135718 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:07:09.121270  135718 status.go:255] checking status of ha-064080 ...
	I0617 11:07:09.121717  135718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:09.121790  135718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:09.137314  135718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37989
	I0617 11:07:09.137752  135718 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:09.138402  135718 main.go:141] libmachine: Using API Version  1
	I0617 11:07:09.138425  135718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:09.138795  135718 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:09.139020  135718 main.go:141] libmachine: (ha-064080) Calling .GetState
	I0617 11:07:09.140627  135718 status.go:330] ha-064080 host status = "Running" (err=<nil>)
	I0617 11:07:09.140643  135718 host.go:66] Checking if "ha-064080" exists ...
	I0617 11:07:09.140912  135718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:09.140952  135718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:09.156542  135718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44711
	I0617 11:07:09.156879  135718 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:09.157291  135718 main.go:141] libmachine: Using API Version  1
	I0617 11:07:09.157324  135718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:09.157682  135718 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:09.157854  135718 main.go:141] libmachine: (ha-064080) Calling .GetIP
	I0617 11:07:09.160548  135718 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:07:09.161035  135718 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:07:09.161056  135718 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:07:09.161225  135718 host.go:66] Checking if "ha-064080" exists ...
	I0617 11:07:09.161589  135718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:09.161634  135718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:09.176386  135718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34695
	I0617 11:07:09.176721  135718 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:09.177112  135718 main.go:141] libmachine: Using API Version  1
	I0617 11:07:09.177135  135718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:09.177424  135718 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:09.177603  135718 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:07:09.177793  135718 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:07:09.177818  135718 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:07:09.180252  135718 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:07:09.180686  135718 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:07:09.180720  135718 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:07:09.180858  135718 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:07:09.181030  135718 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:07:09.181168  135718 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:07:09.181297  135718 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:07:09.259226  135718 ssh_runner.go:195] Run: systemctl --version
	I0617 11:07:09.266804  135718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:07:09.283739  135718 kubeconfig.go:125] found "ha-064080" server: "https://192.168.39.254:8443"
	I0617 11:07:09.283766  135718 api_server.go:166] Checking apiserver status ...
	I0617 11:07:09.283800  135718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 11:07:09.297387  135718 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1212/cgroup
	W0617 11:07:09.307387  135718 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1212/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0617 11:07:09.307430  135718 ssh_runner.go:195] Run: ls
	I0617 11:07:09.311827  135718 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0617 11:07:09.318135  135718 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0617 11:07:09.318159  135718 status.go:422] ha-064080 apiserver status = Running (err=<nil>)
	I0617 11:07:09.318171  135718 status.go:257] ha-064080 status: &{Name:ha-064080 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0617 11:07:09.318194  135718 status.go:255] checking status of ha-064080-m02 ...
	I0617 11:07:09.318580  135718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:09.318616  135718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:09.333396  135718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46365
	I0617 11:07:09.333863  135718 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:09.334305  135718 main.go:141] libmachine: Using API Version  1
	I0617 11:07:09.334329  135718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:09.334657  135718 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:09.334828  135718 main.go:141] libmachine: (ha-064080-m02) Calling .GetState
	I0617 11:07:09.336441  135718 status.go:330] ha-064080-m02 host status = "Running" (err=<nil>)
	I0617 11:07:09.336460  135718 host.go:66] Checking if "ha-064080-m02" exists ...
	I0617 11:07:09.336860  135718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:09.336914  135718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:09.351629  135718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46205
	I0617 11:07:09.352013  135718 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:09.352488  135718 main.go:141] libmachine: Using API Version  1
	I0617 11:07:09.352524  135718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:09.352842  135718 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:09.353067  135718 main.go:141] libmachine: (ha-064080-m02) Calling .GetIP
	I0617 11:07:09.355725  135718 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:07:09.356130  135718 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:07:09.356170  135718 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:07:09.356275  135718 host.go:66] Checking if "ha-064080-m02" exists ...
	I0617 11:07:09.356600  135718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:09.356647  135718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:09.372118  135718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42581
	I0617 11:07:09.372519  135718 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:09.372954  135718 main.go:141] libmachine: Using API Version  1
	I0617 11:07:09.372976  135718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:09.373334  135718 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:09.373547  135718 main.go:141] libmachine: (ha-064080-m02) Calling .DriverName
	I0617 11:07:09.373731  135718 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:07:09.373755  135718 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHHostname
	I0617 11:07:09.376331  135718 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:07:09.376695  135718 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:07:09.376731  135718 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:07:09.376877  135718 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHPort
	I0617 11:07:09.377051  135718 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:07:09.377203  135718 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHUsername
	I0617 11:07:09.377350  135718 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02/id_rsa Username:docker}
	W0617 11:07:12.455711  135718 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.104:22: connect: no route to host
	W0617 11:07:12.455814  135718 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	E0617 11:07:12.455832  135718 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	I0617 11:07:12.455844  135718 status.go:257] ha-064080-m02 status: &{Name:ha-064080-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0617 11:07:12.455879  135718 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	I0617 11:07:12.455889  135718 status.go:255] checking status of ha-064080-m03 ...
	I0617 11:07:12.456227  135718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:12.456294  135718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:12.471401  135718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41499
	I0617 11:07:12.471870  135718 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:12.472352  135718 main.go:141] libmachine: Using API Version  1
	I0617 11:07:12.472375  135718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:12.472706  135718 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:12.472883  135718 main.go:141] libmachine: (ha-064080-m03) Calling .GetState
	I0617 11:07:12.474349  135718 status.go:330] ha-064080-m03 host status = "Running" (err=<nil>)
	I0617 11:07:12.474365  135718 host.go:66] Checking if "ha-064080-m03" exists ...
	I0617 11:07:12.474647  135718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:12.474682  135718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:12.489736  135718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39709
	I0617 11:07:12.490150  135718 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:12.490588  135718 main.go:141] libmachine: Using API Version  1
	I0617 11:07:12.490607  135718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:12.490916  135718 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:12.491100  135718 main.go:141] libmachine: (ha-064080-m03) Calling .GetIP
	I0617 11:07:12.493796  135718 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:07:12.494196  135718 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:07:12.494224  135718 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:07:12.494358  135718 host.go:66] Checking if "ha-064080-m03" exists ...
	I0617 11:07:12.494751  135718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:12.494795  135718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:12.509434  135718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34481
	I0617 11:07:12.510050  135718 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:12.510592  135718 main.go:141] libmachine: Using API Version  1
	I0617 11:07:12.510621  135718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:12.511011  135718 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:12.511191  135718 main.go:141] libmachine: (ha-064080-m03) Calling .DriverName
	I0617 11:07:12.511381  135718 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:07:12.511405  135718 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHHostname
	I0617 11:07:12.514106  135718 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:07:12.514530  135718 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:07:12.514566  135718 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:07:12.514744  135718 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHPort
	I0617 11:07:12.514926  135718 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:07:12.515069  135718 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHUsername
	I0617 11:07:12.515191  135718 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03/id_rsa Username:docker}
	I0617 11:07:12.595843  135718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:07:12.614886  135718 kubeconfig.go:125] found "ha-064080" server: "https://192.168.39.254:8443"
	I0617 11:07:12.614921  135718 api_server.go:166] Checking apiserver status ...
	I0617 11:07:12.614960  135718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 11:07:12.632964  135718 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup
	W0617 11:07:12.643409  135718 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0617 11:07:12.643470  135718 ssh_runner.go:195] Run: ls
	I0617 11:07:12.647959  135718 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0617 11:07:12.654323  135718 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0617 11:07:12.654364  135718 status.go:422] ha-064080-m03 apiserver status = Running (err=<nil>)
	I0617 11:07:12.654377  135718 status.go:257] ha-064080-m03 status: &{Name:ha-064080-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0617 11:07:12.654398  135718 status.go:255] checking status of ha-064080-m04 ...
	I0617 11:07:12.654708  135718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:12.654744  135718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:12.670583  135718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35129
	I0617 11:07:12.671015  135718 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:12.671532  135718 main.go:141] libmachine: Using API Version  1
	I0617 11:07:12.671554  135718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:12.671863  135718 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:12.672035  135718 main.go:141] libmachine: (ha-064080-m04) Calling .GetState
	I0617 11:07:12.673481  135718 status.go:330] ha-064080-m04 host status = "Running" (err=<nil>)
	I0617 11:07:12.673497  135718 host.go:66] Checking if "ha-064080-m04" exists ...
	I0617 11:07:12.673812  135718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:12.673861  135718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:12.689441  135718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39127
	I0617 11:07:12.689830  135718 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:12.690265  135718 main.go:141] libmachine: Using API Version  1
	I0617 11:07:12.690287  135718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:12.690609  135718 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:12.690799  135718 main.go:141] libmachine: (ha-064080-m04) Calling .GetIP
	I0617 11:07:12.693449  135718 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:07:12.693812  135718 main.go:141] libmachine: (ha-064080-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:60:46", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:03:36 +0000 UTC Type:0 Mac:52:54:00:51:60:46 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-064080-m04 Clientid:01:52:54:00:51:60:46}
	I0617 11:07:12.693843  135718 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined IP address 192.168.39.167 and MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:07:12.693968  135718 host.go:66] Checking if "ha-064080-m04" exists ...
	I0617 11:07:12.694332  135718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:12.694401  135718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:12.708970  135718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33615
	I0617 11:07:12.709341  135718 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:12.709751  135718 main.go:141] libmachine: Using API Version  1
	I0617 11:07:12.709782  135718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:12.710109  135718 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:12.710290  135718 main.go:141] libmachine: (ha-064080-m04) Calling .DriverName
	I0617 11:07:12.710491  135718 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:07:12.710511  135718 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHHostname
	I0617 11:07:12.713307  135718 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:07:12.713684  135718 main.go:141] libmachine: (ha-064080-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:60:46", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:03:36 +0000 UTC Type:0 Mac:52:54:00:51:60:46 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-064080-m04 Clientid:01:52:54:00:51:60:46}
	I0617 11:07:12.713720  135718 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined IP address 192.168.39.167 and MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:07:12.713893  135718 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHPort
	I0617 11:07:12.714055  135718 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHKeyPath
	I0617 11:07:12.714190  135718 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHUsername
	I0617 11:07:12.714353  135718 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m04/id_rsa Username:docker}
	I0617 11:07:12.798955  135718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:07:12.815404  135718 status.go:257] ha-064080-m04 status: &{Name:ha-064080-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
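Note: the repeated probe sequence in the stderr log above — disk usage of /var, a pgrep for kube-apiserver, the freezer-cgroup lookup, then a GET against the load-balancer VIP's /healthz — is what becomes the per-node host/kubelet/apiserver fields in the stdout block. The sketch below reproduces that sequence locally for illustration only: it is not minikube's code, it skips the SSH hop the real status command uses, and the insecure TLS client is an assumption made because the apiserver serves a cluster-signed certificate.

// probe.go: minimal local sketch of the per-node health probe visible in the
// log above. NOT minikube's implementation: the real status command runs these
// probes over SSH on each VM; here they run via a local shell for illustration.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

// run executes a shell pipeline and returns its trimmed combined output.
func run(cmd string) (string, error) {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	// Disk usage of /var, same pipeline as in the log.
	if used, err := run(`df -h /var | awk 'NR==2{print $5}'`); err == nil {
		fmt.Println("/var used:", used)
	}

	// Find the kube-apiserver process, as the status command does with pgrep.
	if pid, err := run(`pgrep -xnf 'kube-apiserver.*minikube.*'`); err != nil {
		fmt.Println("apiserver process not found:", err)
	} else {
		fmt.Println("apiserver pid:", pid)
	}

	// Probe the HA virtual IP's healthz endpoint (192.168.39.254:8443 in this run).
	// InsecureSkipVerify is an assumption: the cert is signed by the cluster CA.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}

A 200 from /healthz plus a found kubelet/apiserver process is what produces the "Running / Configured" rows above, while the SSH dial failure to 192.168.39.104 is what degrades ha-064080-m02 to "host: Error".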
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-064080 status -v=7 --alsologtostderr: exit status 3 (3.723816573s)

                                                
                                                
-- stdout --
	ha-064080
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-064080-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-064080-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-064080-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 11:07:18.988912  135835 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:07:18.989255  135835 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:07:18.989268  135835 out.go:304] Setting ErrFile to fd 2...
	I0617 11:07:18.989273  135835 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:07:18.989486  135835 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:07:18.989644  135835 out.go:298] Setting JSON to false
	I0617 11:07:18.989666  135835 mustload.go:65] Loading cluster: ha-064080
	I0617 11:07:18.989771  135835 notify.go:220] Checking for updates...
	I0617 11:07:18.990031  135835 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:07:18.990047  135835 status.go:255] checking status of ha-064080 ...
	I0617 11:07:18.990481  135835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:18.990530  135835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:19.006064  135835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32839
	I0617 11:07:19.006565  135835 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:19.007137  135835 main.go:141] libmachine: Using API Version  1
	I0617 11:07:19.007158  135835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:19.007581  135835 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:19.007790  135835 main.go:141] libmachine: (ha-064080) Calling .GetState
	I0617 11:07:19.009246  135835 status.go:330] ha-064080 host status = "Running" (err=<nil>)
	I0617 11:07:19.009262  135835 host.go:66] Checking if "ha-064080" exists ...
	I0617 11:07:19.009536  135835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:19.009568  135835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:19.025391  135835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37577
	I0617 11:07:19.025882  135835 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:19.026498  135835 main.go:141] libmachine: Using API Version  1
	I0617 11:07:19.026530  135835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:19.026919  135835 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:19.027155  135835 main.go:141] libmachine: (ha-064080) Calling .GetIP
	I0617 11:07:19.030097  135835 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:07:19.030528  135835 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:07:19.030556  135835 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:07:19.030853  135835 host.go:66] Checking if "ha-064080" exists ...
	I0617 11:07:19.031267  135835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:19.031316  135835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:19.045786  135835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43321
	I0617 11:07:19.046207  135835 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:19.046729  135835 main.go:141] libmachine: Using API Version  1
	I0617 11:07:19.046761  135835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:19.047095  135835 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:19.047300  135835 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:07:19.047544  135835 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:07:19.047574  135835 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:07:19.050933  135835 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:07:19.051431  135835 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:07:19.051546  135835 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:07:19.051747  135835 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:07:19.051936  135835 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:07:19.052110  135835 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:07:19.052260  135835 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:07:19.131765  135835 ssh_runner.go:195] Run: systemctl --version
	I0617 11:07:19.138410  135835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:07:19.154092  135835 kubeconfig.go:125] found "ha-064080" server: "https://192.168.39.254:8443"
	I0617 11:07:19.154127  135835 api_server.go:166] Checking apiserver status ...
	I0617 11:07:19.154170  135835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 11:07:19.172021  135835 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1212/cgroup
	W0617 11:07:19.183501  135835 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1212/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0617 11:07:19.183551  135835 ssh_runner.go:195] Run: ls
	I0617 11:07:19.188689  135835 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0617 11:07:19.194399  135835 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0617 11:07:19.194424  135835 status.go:422] ha-064080 apiserver status = Running (err=<nil>)
	I0617 11:07:19.194446  135835 status.go:257] ha-064080 status: &{Name:ha-064080 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0617 11:07:19.194479  135835 status.go:255] checking status of ha-064080-m02 ...
	I0617 11:07:19.194874  135835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:19.194910  135835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:19.210487  135835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43249
	I0617 11:07:19.210887  135835 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:19.211395  135835 main.go:141] libmachine: Using API Version  1
	I0617 11:07:19.211414  135835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:19.211722  135835 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:19.211944  135835 main.go:141] libmachine: (ha-064080-m02) Calling .GetState
	I0617 11:07:19.213601  135835 status.go:330] ha-064080-m02 host status = "Running" (err=<nil>)
	I0617 11:07:19.213619  135835 host.go:66] Checking if "ha-064080-m02" exists ...
	I0617 11:07:19.213895  135835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:19.213927  135835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:19.228847  135835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34663
	I0617 11:07:19.229256  135835 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:19.229717  135835 main.go:141] libmachine: Using API Version  1
	I0617 11:07:19.229743  135835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:19.230108  135835 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:19.230296  135835 main.go:141] libmachine: (ha-064080-m02) Calling .GetIP
	I0617 11:07:19.232958  135835 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:07:19.233391  135835 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:07:19.233405  135835 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:07:19.233559  135835 host.go:66] Checking if "ha-064080-m02" exists ...
	I0617 11:07:19.233840  135835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:19.233877  135835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:19.250234  135835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46333
	I0617 11:07:19.250631  135835 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:19.251118  135835 main.go:141] libmachine: Using API Version  1
	I0617 11:07:19.251140  135835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:19.251506  135835 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:19.251722  135835 main.go:141] libmachine: (ha-064080-m02) Calling .DriverName
	I0617 11:07:19.251954  135835 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:07:19.251976  135835 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHHostname
	I0617 11:07:19.255056  135835 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:07:19.255521  135835 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:07:19.255560  135835 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:07:19.255736  135835 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHPort
	I0617 11:07:19.255914  135835 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:07:19.256107  135835 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHUsername
	I0617 11:07:19.256284  135835 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02/id_rsa Username:docker}
	W0617 11:07:22.311736  135835 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.104:22: connect: no route to host
	W0617 11:07:22.311846  135835 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	E0617 11:07:22.311864  135835 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	I0617 11:07:22.311874  135835 status.go:257] ha-064080-m02 status: &{Name:ha-064080-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0617 11:07:22.311899  135835 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	I0617 11:07:22.311910  135835 status.go:255] checking status of ha-064080-m03 ...
	I0617 11:07:22.312264  135835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:22.312344  135835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:22.327495  135835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35119
	I0617 11:07:22.327928  135835 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:22.328460  135835 main.go:141] libmachine: Using API Version  1
	I0617 11:07:22.328488  135835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:22.328853  135835 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:22.329074  135835 main.go:141] libmachine: (ha-064080-m03) Calling .GetState
	I0617 11:07:22.330686  135835 status.go:330] ha-064080-m03 host status = "Running" (err=<nil>)
	I0617 11:07:22.330707  135835 host.go:66] Checking if "ha-064080-m03" exists ...
	I0617 11:07:22.331014  135835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:22.331074  135835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:22.345119  135835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36601
	I0617 11:07:22.345430  135835 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:22.345936  135835 main.go:141] libmachine: Using API Version  1
	I0617 11:07:22.345979  135835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:22.346301  135835 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:22.346472  135835 main.go:141] libmachine: (ha-064080-m03) Calling .GetIP
	I0617 11:07:22.349481  135835 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:07:22.349957  135835 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:07:22.350008  135835 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:07:22.350151  135835 host.go:66] Checking if "ha-064080-m03" exists ...
	I0617 11:07:22.350448  135835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:22.350483  135835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:22.364798  135835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45799
	I0617 11:07:22.365261  135835 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:22.365723  135835 main.go:141] libmachine: Using API Version  1
	I0617 11:07:22.365742  135835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:22.366057  135835 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:22.366253  135835 main.go:141] libmachine: (ha-064080-m03) Calling .DriverName
	I0617 11:07:22.366422  135835 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:07:22.366443  135835 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHHostname
	I0617 11:07:22.369073  135835 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:07:22.369521  135835 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:07:22.369546  135835 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:07:22.369698  135835 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHPort
	I0617 11:07:22.369857  135835 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:07:22.369984  135835 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHUsername
	I0617 11:07:22.370096  135835 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03/id_rsa Username:docker}
	I0617 11:07:22.453727  135835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:07:22.470720  135835 kubeconfig.go:125] found "ha-064080" server: "https://192.168.39.254:8443"
	I0617 11:07:22.470749  135835 api_server.go:166] Checking apiserver status ...
	I0617 11:07:22.470782  135835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 11:07:22.485010  135835 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup
	W0617 11:07:22.495600  135835 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0617 11:07:22.495676  135835 ssh_runner.go:195] Run: ls
	I0617 11:07:22.500603  135835 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0617 11:07:22.506779  135835 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0617 11:07:22.506800  135835 status.go:422] ha-064080-m03 apiserver status = Running (err=<nil>)
	I0617 11:07:22.506813  135835 status.go:257] ha-064080-m03 status: &{Name:ha-064080-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0617 11:07:22.506831  135835 status.go:255] checking status of ha-064080-m04 ...
	I0617 11:07:22.507110  135835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:22.507148  135835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:22.522137  135835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34915
	I0617 11:07:22.522619  135835 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:22.523076  135835 main.go:141] libmachine: Using API Version  1
	I0617 11:07:22.523096  135835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:22.523444  135835 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:22.523626  135835 main.go:141] libmachine: (ha-064080-m04) Calling .GetState
	I0617 11:07:22.525150  135835 status.go:330] ha-064080-m04 host status = "Running" (err=<nil>)
	I0617 11:07:22.525169  135835 host.go:66] Checking if "ha-064080-m04" exists ...
	I0617 11:07:22.525461  135835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:22.525506  135835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:22.540558  135835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35055
	I0617 11:07:22.540907  135835 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:22.541322  135835 main.go:141] libmachine: Using API Version  1
	I0617 11:07:22.541346  135835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:22.541682  135835 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:22.541888  135835 main.go:141] libmachine: (ha-064080-m04) Calling .GetIP
	I0617 11:07:22.544601  135835 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:07:22.545036  135835 main.go:141] libmachine: (ha-064080-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:60:46", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:03:36 +0000 UTC Type:0 Mac:52:54:00:51:60:46 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-064080-m04 Clientid:01:52:54:00:51:60:46}
	I0617 11:07:22.545073  135835 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined IP address 192.168.39.167 and MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:07:22.545215  135835 host.go:66] Checking if "ha-064080-m04" exists ...
	I0617 11:07:22.545608  135835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:22.545650  135835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:22.559591  135835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40607
	I0617 11:07:22.560004  135835 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:22.560472  135835 main.go:141] libmachine: Using API Version  1
	I0617 11:07:22.560493  135835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:22.560826  135835 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:22.561018  135835 main.go:141] libmachine: (ha-064080-m04) Calling .DriverName
	I0617 11:07:22.561213  135835 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:07:22.561232  135835 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHHostname
	I0617 11:07:22.563868  135835 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:07:22.564324  135835 main.go:141] libmachine: (ha-064080-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:60:46", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:03:36 +0000 UTC Type:0 Mac:52:54:00:51:60:46 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-064080-m04 Clientid:01:52:54:00:51:60:46}
	I0617 11:07:22.564344  135835 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined IP address 192.168.39.167 and MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:07:22.564451  135835 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHPort
	I0617 11:07:22.564638  135835 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHKeyPath
	I0617 11:07:22.564843  135835 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHUsername
	I0617 11:07:22.565018  135835 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m04/id_rsa Username:docker}
	I0617 11:07:22.651318  135835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:07:22.668087  135835 status.go:257] ha-064080-m04 status: &{Name:ha-064080-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
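Note: between this attempt and the next the failure mode changes. Here the SSH dial to ha-064080-m02 still fails with "no route to host", so the command exits 3 with host: Error; in the following run libvirt reports the domain as stopped, so GetState short-circuits the probes and the command exits 7 with host: Stopped. The sketch below is a rough illustration of the polling the test is effectively doing at ha_test.go:428 — the binary path and profile name are taken from this log, but the loop itself is an assumed illustration, not the test's actual code.

// poll_status.go: assumed sketch of the retry loop implied by the repeated
// "ha_test.go:428" invocations. It reruns the same status command and stops
// once the exit code flips from 3 (node reachable but erroring) to 7 (node
// reported Stopped).
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// statusExitCode runs the status command from this log and returns its exit code.
func statusExitCode() int {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-064080",
		"status", "-v=7", "--alsologtostderr")
	if err := cmd.Run(); err != nil {
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return ee.ExitCode()
		}
		return -1 // command could not be started at all
	}
	return 0
}

func main() {
	for i := 0; i < 20; i++ {
		code := statusExitCode()
		fmt.Printf("attempt %d: exit status %d\n", i+1, code)
		if code == 7 { // ha-064080-m02 now reported as Stopped
			return
		}
		time.Sleep(5 * time.Second)
	}
}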
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-064080 status -v=7 --alsologtostderr: exit status 7 (609.342906ms)

                                                
                                                
-- stdout --
	ha-064080
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-064080-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-064080-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-064080-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 11:07:27.954131  135972 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:07:27.954251  135972 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:07:27.954262  135972 out.go:304] Setting ErrFile to fd 2...
	I0617 11:07:27.954266  135972 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:07:27.954458  135972 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:07:27.954689  135972 out.go:298] Setting JSON to false
	I0617 11:07:27.954715  135972 mustload.go:65] Loading cluster: ha-064080
	I0617 11:07:27.954809  135972 notify.go:220] Checking for updates...
	I0617 11:07:27.955419  135972 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:07:27.955441  135972 status.go:255] checking status of ha-064080 ...
	I0617 11:07:27.956085  135972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:27.956168  135972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:27.971081  135972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44793
	I0617 11:07:27.971569  135972 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:27.972162  135972 main.go:141] libmachine: Using API Version  1
	I0617 11:07:27.972186  135972 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:27.972593  135972 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:27.972828  135972 main.go:141] libmachine: (ha-064080) Calling .GetState
	I0617 11:07:27.974363  135972 status.go:330] ha-064080 host status = "Running" (err=<nil>)
	I0617 11:07:27.974382  135972 host.go:66] Checking if "ha-064080" exists ...
	I0617 11:07:27.974662  135972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:27.974709  135972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:27.989363  135972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34519
	I0617 11:07:27.989696  135972 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:27.990084  135972 main.go:141] libmachine: Using API Version  1
	I0617 11:07:27.990107  135972 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:27.990392  135972 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:27.990562  135972 main.go:141] libmachine: (ha-064080) Calling .GetIP
	I0617 11:07:27.992924  135972 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:07:27.993289  135972 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:07:27.993314  135972 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:07:27.993413  135972 host.go:66] Checking if "ha-064080" exists ...
	I0617 11:07:27.993799  135972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:27.993849  135972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:28.008616  135972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33865
	I0617 11:07:28.009082  135972 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:28.009572  135972 main.go:141] libmachine: Using API Version  1
	I0617 11:07:28.009594  135972 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:28.009904  135972 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:28.010074  135972 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:07:28.010282  135972 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:07:28.010314  135972 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:07:28.012734  135972 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:07:28.013302  135972 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:07:28.013332  135972 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:07:28.013577  135972 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:07:28.013767  135972 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:07:28.013928  135972 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:07:28.014058  135972 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:07:28.092079  135972 ssh_runner.go:195] Run: systemctl --version
	I0617 11:07:28.099078  135972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:07:28.115148  135972 kubeconfig.go:125] found "ha-064080" server: "https://192.168.39.254:8443"
	I0617 11:07:28.115186  135972 api_server.go:166] Checking apiserver status ...
	I0617 11:07:28.115225  135972 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 11:07:28.131609  135972 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1212/cgroup
	W0617 11:07:28.140971  135972 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1212/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0617 11:07:28.141028  135972 ssh_runner.go:195] Run: ls
	I0617 11:07:28.145337  135972 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0617 11:07:28.149447  135972 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0617 11:07:28.149468  135972 status.go:422] ha-064080 apiserver status = Running (err=<nil>)
	I0617 11:07:28.149477  135972 status.go:257] ha-064080 status: &{Name:ha-064080 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0617 11:07:28.149492  135972 status.go:255] checking status of ha-064080-m02 ...
	I0617 11:07:28.149761  135972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:28.149793  135972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:28.164832  135972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37233
	I0617 11:07:28.165273  135972 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:28.165815  135972 main.go:141] libmachine: Using API Version  1
	I0617 11:07:28.165837  135972 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:28.166198  135972 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:28.166402  135972 main.go:141] libmachine: (ha-064080-m02) Calling .GetState
	I0617 11:07:28.167965  135972 status.go:330] ha-064080-m02 host status = "Stopped" (err=<nil>)
	I0617 11:07:28.167994  135972 status.go:343] host is not running, skipping remaining checks
	I0617 11:07:28.168001  135972 status.go:257] ha-064080-m02 status: &{Name:ha-064080-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0617 11:07:28.168019  135972 status.go:255] checking status of ha-064080-m03 ...
	I0617 11:07:28.168291  135972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:28.168327  135972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:28.182592  135972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36103
	I0617 11:07:28.182992  135972 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:28.183493  135972 main.go:141] libmachine: Using API Version  1
	I0617 11:07:28.183517  135972 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:28.183807  135972 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:28.184021  135972 main.go:141] libmachine: (ha-064080-m03) Calling .GetState
	I0617 11:07:28.185318  135972 status.go:330] ha-064080-m03 host status = "Running" (err=<nil>)
	I0617 11:07:28.185340  135972 host.go:66] Checking if "ha-064080-m03" exists ...
	I0617 11:07:28.185604  135972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:28.185642  135972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:28.200458  135972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35865
	I0617 11:07:28.200810  135972 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:28.201285  135972 main.go:141] libmachine: Using API Version  1
	I0617 11:07:28.201306  135972 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:28.201568  135972 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:28.201760  135972 main.go:141] libmachine: (ha-064080-m03) Calling .GetIP
	I0617 11:07:28.204119  135972 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:07:28.204569  135972 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:07:28.204612  135972 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:07:28.204722  135972 host.go:66] Checking if "ha-064080-m03" exists ...
	I0617 11:07:28.205134  135972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:28.205180  135972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:28.220256  135972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36715
	I0617 11:07:28.220744  135972 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:28.221272  135972 main.go:141] libmachine: Using API Version  1
	I0617 11:07:28.221299  135972 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:28.221592  135972 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:28.221749  135972 main.go:141] libmachine: (ha-064080-m03) Calling .DriverName
	I0617 11:07:28.221913  135972 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:07:28.221932  135972 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHHostname
	I0617 11:07:28.224535  135972 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:07:28.224909  135972 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:07:28.224934  135972 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:07:28.225084  135972 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHPort
	I0617 11:07:28.225270  135972 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:07:28.225464  135972 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHUsername
	I0617 11:07:28.225597  135972 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03/id_rsa Username:docker}
	I0617 11:07:28.303120  135972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:07:28.318967  135972 kubeconfig.go:125] found "ha-064080" server: "https://192.168.39.254:8443"
	I0617 11:07:28.318994  135972 api_server.go:166] Checking apiserver status ...
	I0617 11:07:28.319032  135972 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 11:07:28.333917  135972 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup
	W0617 11:07:28.351542  135972 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0617 11:07:28.351591  135972 ssh_runner.go:195] Run: ls
	I0617 11:07:28.356629  135972 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0617 11:07:28.361458  135972 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0617 11:07:28.361481  135972 status.go:422] ha-064080-m03 apiserver status = Running (err=<nil>)
	I0617 11:07:28.361492  135972 status.go:257] ha-064080-m03 status: &{Name:ha-064080-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0617 11:07:28.361511  135972 status.go:255] checking status of ha-064080-m04 ...
	I0617 11:07:28.361922  135972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:28.361977  135972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:28.378311  135972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42185
	I0617 11:07:28.378754  135972 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:28.379258  135972 main.go:141] libmachine: Using API Version  1
	I0617 11:07:28.379283  135972 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:28.379629  135972 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:28.379834  135972 main.go:141] libmachine: (ha-064080-m04) Calling .GetState
	I0617 11:07:28.381434  135972 status.go:330] ha-064080-m04 host status = "Running" (err=<nil>)
	I0617 11:07:28.381451  135972 host.go:66] Checking if "ha-064080-m04" exists ...
	I0617 11:07:28.381821  135972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:28.381865  135972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:28.397400  135972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42917
	I0617 11:07:28.397803  135972 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:28.398272  135972 main.go:141] libmachine: Using API Version  1
	I0617 11:07:28.398292  135972 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:28.398605  135972 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:28.398831  135972 main.go:141] libmachine: (ha-064080-m04) Calling .GetIP
	I0617 11:07:28.401614  135972 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:07:28.402004  135972 main.go:141] libmachine: (ha-064080-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:60:46", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:03:36 +0000 UTC Type:0 Mac:52:54:00:51:60:46 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-064080-m04 Clientid:01:52:54:00:51:60:46}
	I0617 11:07:28.402035  135972 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined IP address 192.168.39.167 and MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:07:28.402166  135972 host.go:66] Checking if "ha-064080-m04" exists ...
	I0617 11:07:28.402466  135972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:28.402510  135972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:28.416478  135972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44399
	I0617 11:07:28.416919  135972 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:28.417377  135972 main.go:141] libmachine: Using API Version  1
	I0617 11:07:28.417431  135972 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:28.417714  135972 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:28.417911  135972 main.go:141] libmachine: (ha-064080-m04) Calling .DriverName
	I0617 11:07:28.418114  135972 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:07:28.418138  135972 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHHostname
	I0617 11:07:28.420615  135972 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:07:28.421033  135972 main.go:141] libmachine: (ha-064080-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:60:46", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:03:36 +0000 UTC Type:0 Mac:52:54:00:51:60:46 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-064080-m04 Clientid:01:52:54:00:51:60:46}
	I0617 11:07:28.421061  135972 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined IP address 192.168.39.167 and MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:07:28.421172  135972 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHPort
	I0617 11:07:28.421353  135972 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHKeyPath
	I0617 11:07:28.421514  135972 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHUsername
	I0617 11:07:28.421663  135972 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m04/id_rsa Username:docker}
	I0617 11:07:28.503228  135972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:07:28.518452  135972 status.go:257] ha-064080-m04 status: &{Name:ha-064080-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-064080 status -v=7 --alsologtostderr: exit status 7 (605.157317ms)

                                                
                                                
-- stdout --
	ha-064080
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-064080-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-064080-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-064080-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 11:07:43.207960  136094 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:07:43.208095  136094 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:07:43.208105  136094 out.go:304] Setting ErrFile to fd 2...
	I0617 11:07:43.208110  136094 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:07:43.208763  136094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:07:43.209071  136094 out.go:298] Setting JSON to false
	I0617 11:07:43.209097  136094 mustload.go:65] Loading cluster: ha-064080
	I0617 11:07:43.209348  136094 notify.go:220] Checking for updates...
	I0617 11:07:43.209966  136094 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:07:43.209993  136094 status.go:255] checking status of ha-064080 ...
	I0617 11:07:43.210429  136094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:43.210468  136094 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:43.225397  136094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37105
	I0617 11:07:43.225842  136094 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:43.226358  136094 main.go:141] libmachine: Using API Version  1
	I0617 11:07:43.226381  136094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:43.226710  136094 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:43.226930  136094 main.go:141] libmachine: (ha-064080) Calling .GetState
	I0617 11:07:43.228882  136094 status.go:330] ha-064080 host status = "Running" (err=<nil>)
	I0617 11:07:43.228902  136094 host.go:66] Checking if "ha-064080" exists ...
	I0617 11:07:43.229174  136094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:43.229215  136094 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:43.244203  136094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46665
	I0617 11:07:43.244663  136094 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:43.245141  136094 main.go:141] libmachine: Using API Version  1
	I0617 11:07:43.245161  136094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:43.245435  136094 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:43.245609  136094 main.go:141] libmachine: (ha-064080) Calling .GetIP
	I0617 11:07:43.248401  136094 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:07:43.248823  136094 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:07:43.248853  136094 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:07:43.249019  136094 host.go:66] Checking if "ha-064080" exists ...
	I0617 11:07:43.249286  136094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:43.249337  136094 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:43.264159  136094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45627
	I0617 11:07:43.264515  136094 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:43.264920  136094 main.go:141] libmachine: Using API Version  1
	I0617 11:07:43.264937  136094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:43.265344  136094 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:43.265522  136094 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:07:43.265740  136094 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:07:43.265772  136094 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:07:43.268424  136094 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:07:43.268842  136094 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:07:43.268869  136094 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:07:43.268991  136094 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:07:43.269159  136094 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:07:43.269298  136094 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:07:43.269399  136094 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:07:43.353578  136094 ssh_runner.go:195] Run: systemctl --version
	I0617 11:07:43.360385  136094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:07:43.379329  136094 kubeconfig.go:125] found "ha-064080" server: "https://192.168.39.254:8443"
	I0617 11:07:43.379355  136094 api_server.go:166] Checking apiserver status ...
	I0617 11:07:43.379383  136094 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 11:07:43.394077  136094 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1212/cgroup
	W0617 11:07:43.403883  136094 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1212/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0617 11:07:43.403934  136094 ssh_runner.go:195] Run: ls
	I0617 11:07:43.409292  136094 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0617 11:07:43.413718  136094 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0617 11:07:43.413737  136094 status.go:422] ha-064080 apiserver status = Running (err=<nil>)
	I0617 11:07:43.413746  136094 status.go:257] ha-064080 status: &{Name:ha-064080 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0617 11:07:43.413761  136094 status.go:255] checking status of ha-064080-m02 ...
	I0617 11:07:43.414036  136094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:43.414068  136094 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:43.429173  136094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36025
	I0617 11:07:43.429564  136094 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:43.430034  136094 main.go:141] libmachine: Using API Version  1
	I0617 11:07:43.430052  136094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:43.430355  136094 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:43.430564  136094 main.go:141] libmachine: (ha-064080-m02) Calling .GetState
	I0617 11:07:43.432146  136094 status.go:330] ha-064080-m02 host status = "Stopped" (err=<nil>)
	I0617 11:07:43.432158  136094 status.go:343] host is not running, skipping remaining checks
	I0617 11:07:43.432164  136094 status.go:257] ha-064080-m02 status: &{Name:ha-064080-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0617 11:07:43.432178  136094 status.go:255] checking status of ha-064080-m03 ...
	I0617 11:07:43.432467  136094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:43.432505  136094 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:43.447590  136094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44089
	I0617 11:07:43.448015  136094 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:43.448449  136094 main.go:141] libmachine: Using API Version  1
	I0617 11:07:43.448468  136094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:43.448823  136094 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:43.448977  136094 main.go:141] libmachine: (ha-064080-m03) Calling .GetState
	I0617 11:07:43.450501  136094 status.go:330] ha-064080-m03 host status = "Running" (err=<nil>)
	I0617 11:07:43.450521  136094 host.go:66] Checking if "ha-064080-m03" exists ...
	I0617 11:07:43.450869  136094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:43.450914  136094 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:43.465842  136094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44225
	I0617 11:07:43.466207  136094 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:43.466615  136094 main.go:141] libmachine: Using API Version  1
	I0617 11:07:43.466639  136094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:43.466957  136094 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:43.467148  136094 main.go:141] libmachine: (ha-064080-m03) Calling .GetIP
	I0617 11:07:43.469814  136094 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:07:43.470221  136094 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:07:43.470244  136094 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:07:43.470359  136094 host.go:66] Checking if "ha-064080-m03" exists ...
	I0617 11:07:43.470644  136094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:43.470676  136094 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:43.485948  136094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33793
	I0617 11:07:43.486348  136094 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:43.486876  136094 main.go:141] libmachine: Using API Version  1
	I0617 11:07:43.486893  136094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:43.487282  136094 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:43.487490  136094 main.go:141] libmachine: (ha-064080-m03) Calling .DriverName
	I0617 11:07:43.487649  136094 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:07:43.487672  136094 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHHostname
	I0617 11:07:43.490645  136094 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:07:43.491155  136094 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:07:43.491186  136094 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:07:43.491425  136094 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHPort
	I0617 11:07:43.491635  136094 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:07:43.491820  136094 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHUsername
	I0617 11:07:43.491987  136094 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03/id_rsa Username:docker}
	I0617 11:07:43.571518  136094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:07:43.586328  136094 kubeconfig.go:125] found "ha-064080" server: "https://192.168.39.254:8443"
	I0617 11:07:43.586353  136094 api_server.go:166] Checking apiserver status ...
	I0617 11:07:43.586383  136094 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 11:07:43.599512  136094 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup
	W0617 11:07:43.608715  136094 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0617 11:07:43.608777  136094 ssh_runner.go:195] Run: ls
	I0617 11:07:43.612915  136094 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0617 11:07:43.617098  136094 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0617 11:07:43.617121  136094 status.go:422] ha-064080-m03 apiserver status = Running (err=<nil>)
	I0617 11:07:43.617130  136094 status.go:257] ha-064080-m03 status: &{Name:ha-064080-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0617 11:07:43.617150  136094 status.go:255] checking status of ha-064080-m04 ...
	I0617 11:07:43.617424  136094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:43.617477  136094 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:43.632727  136094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35387
	I0617 11:07:43.633136  136094 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:43.633653  136094 main.go:141] libmachine: Using API Version  1
	I0617 11:07:43.633674  136094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:43.633986  136094 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:43.634207  136094 main.go:141] libmachine: (ha-064080-m04) Calling .GetState
	I0617 11:07:43.635853  136094 status.go:330] ha-064080-m04 host status = "Running" (err=<nil>)
	I0617 11:07:43.635869  136094 host.go:66] Checking if "ha-064080-m04" exists ...
	I0617 11:07:43.636133  136094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:43.636163  136094 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:43.650064  136094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42037
	I0617 11:07:43.650487  136094 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:43.650900  136094 main.go:141] libmachine: Using API Version  1
	I0617 11:07:43.650932  136094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:43.651211  136094 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:43.651380  136094 main.go:141] libmachine: (ha-064080-m04) Calling .GetIP
	I0617 11:07:43.654090  136094 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:07:43.654456  136094 main.go:141] libmachine: (ha-064080-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:60:46", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:03:36 +0000 UTC Type:0 Mac:52:54:00:51:60:46 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-064080-m04 Clientid:01:52:54:00:51:60:46}
	I0617 11:07:43.654477  136094 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined IP address 192.168.39.167 and MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:07:43.654601  136094 host.go:66] Checking if "ha-064080-m04" exists ...
	I0617 11:07:43.654909  136094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:43.654940  136094 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:43.668849  136094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44279
	I0617 11:07:43.669188  136094 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:43.669690  136094 main.go:141] libmachine: Using API Version  1
	I0617 11:07:43.669709  136094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:43.670042  136094 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:43.670227  136094 main.go:141] libmachine: (ha-064080-m04) Calling .DriverName
	I0617 11:07:43.670419  136094 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:07:43.670441  136094 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHHostname
	I0617 11:07:43.672766  136094 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:07:43.673161  136094 main.go:141] libmachine: (ha-064080-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:60:46", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:03:36 +0000 UTC Type:0 Mac:52:54:00:51:60:46 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-064080-m04 Clientid:01:52:54:00:51:60:46}
	I0617 11:07:43.673188  136094 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined IP address 192.168.39.167 and MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:07:43.673300  136094 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHPort
	I0617 11:07:43.673449  136094 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHKeyPath
	I0617 11:07:43.673599  136094 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHUsername
	I0617 11:07:43.673738  136094 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m04/id_rsa Username:docker}
	I0617 11:07:43.754787  136094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:07:43.769865  136094 status.go:257] ha-064080-m04 status: &{Name:ha-064080-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-064080 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-064080 -n ha-064080
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-064080 logs -n 25: (1.38647635s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-064080 ssh -n                                                                 | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-064080 cp ha-064080-m03:/home/docker/cp-test.txt                              | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080:/home/docker/cp-test_ha-064080-m03_ha-064080.txt                       |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n                                                                 | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n ha-064080 sudo cat                                              | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | /home/docker/cp-test_ha-064080-m03_ha-064080.txt                                 |           |         |         |                     |                     |
	| cp      | ha-064080 cp ha-064080-m03:/home/docker/cp-test.txt                              | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m02:/home/docker/cp-test_ha-064080-m03_ha-064080-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n                                                                 | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n ha-064080-m02 sudo cat                                          | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | /home/docker/cp-test_ha-064080-m03_ha-064080-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-064080 cp ha-064080-m03:/home/docker/cp-test.txt                              | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m04:/home/docker/cp-test_ha-064080-m03_ha-064080-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n                                                                 | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n ha-064080-m04 sudo cat                                          | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | /home/docker/cp-test_ha-064080-m03_ha-064080-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-064080 cp testdata/cp-test.txt                                                | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n                                                                 | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-064080 cp ha-064080-m04:/home/docker/cp-test.txt                              | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4010822866/001/cp-test_ha-064080-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n                                                                 | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-064080 cp ha-064080-m04:/home/docker/cp-test.txt                              | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080:/home/docker/cp-test_ha-064080-m04_ha-064080.txt                       |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n                                                                 | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n ha-064080 sudo cat                                              | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | /home/docker/cp-test_ha-064080-m04_ha-064080.txt                                 |           |         |         |                     |                     |
	| cp      | ha-064080 cp ha-064080-m04:/home/docker/cp-test.txt                              | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m02:/home/docker/cp-test_ha-064080-m04_ha-064080-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n                                                                 | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n ha-064080-m02 sudo cat                                          | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | /home/docker/cp-test_ha-064080-m04_ha-064080-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-064080 cp ha-064080-m04:/home/docker/cp-test.txt                              | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m03:/home/docker/cp-test_ha-064080-m04_ha-064080-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n                                                                 | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n ha-064080-m03 sudo cat                                          | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | /home/docker/cp-test_ha-064080-m04_ha-064080-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-064080 node stop m02 -v=7                                                     | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-064080 node start m02 -v=7                                                    | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/17 10:59:52
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0617 10:59:52.528854  130544 out.go:291] Setting OutFile to fd 1 ...
	I0617 10:59:52.529112  130544 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 10:59:52.529122  130544 out.go:304] Setting ErrFile to fd 2...
	I0617 10:59:52.529126  130544 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 10:59:52.529289  130544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 10:59:52.529863  130544 out.go:298] Setting JSON to false
	I0617 10:59:52.530769  130544 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2540,"bootTime":1718619453,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0617 10:59:52.530826  130544 start.go:139] virtualization: kvm guest
	I0617 10:59:52.532858  130544 out.go:177] * [ha-064080] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0617 10:59:52.534259  130544 out.go:177]   - MINIKUBE_LOCATION=19084
	I0617 10:59:52.534318  130544 notify.go:220] Checking for updates...
	I0617 10:59:52.535480  130544 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 10:59:52.536966  130544 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 10:59:52.538645  130544 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 10:59:52.539950  130544 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0617 10:59:52.541126  130544 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 10:59:52.542395  130544 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 10:59:52.577077  130544 out.go:177] * Using the kvm2 driver based on user configuration
	I0617 10:59:52.578302  130544 start.go:297] selected driver: kvm2
	I0617 10:59:52.578318  130544 start.go:901] validating driver "kvm2" against <nil>
	I0617 10:59:52.578333  130544 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 10:59:52.579044  130544 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 10:59:52.579144  130544 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19084-112967/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0617 10:59:52.595008  130544 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0617 10:59:52.595079  130544 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 10:59:52.595275  130544 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 10:59:52.595343  130544 cni.go:84] Creating CNI manager for ""
	I0617 10:59:52.595359  130544 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0617 10:59:52.595367  130544 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0617 10:59:52.595447  130544 start.go:340] cluster config:
	{Name:ha-064080 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-064080 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 10:59:52.595640  130544 iso.go:125] acquiring lock: {Name:mk4a199ad46ed9ee04de7b54caf7cc64218fe80c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 10:59:52.597416  130544 out.go:177] * Starting "ha-064080" primary control-plane node in "ha-064080" cluster
	I0617 10:59:52.598543  130544 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 10:59:52.598574  130544 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0617 10:59:52.598584  130544 cache.go:56] Caching tarball of preloaded images
	I0617 10:59:52.598670  130544 preload.go:173] Found /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0617 10:59:52.598684  130544 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0617 10:59:52.598987  130544 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/config.json ...
	I0617 10:59:52.599010  130544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/config.json: {Name:mk551857841548380a629a0aa2b54bb72637dca2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 10:59:52.599155  130544 start.go:360] acquireMachinesLock for ha-064080: {Name:mk519b8956d160a9d2b042f25b899a5ee0efa72e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 10:59:52.599191  130544 start.go:364] duration metric: took 19.793µs to acquireMachinesLock for "ha-064080"
	I0617 10:59:52.599213  130544 start.go:93] Provisioning new machine with config: &{Name:ha-064080 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-064080 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 10:59:52.599267  130544 start.go:125] createHost starting for "" (driver="kvm2")
	I0617 10:59:52.600691  130544 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0617 10:59:52.600829  130544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:59:52.600876  130544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:59:52.614981  130544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39541
	I0617 10:59:52.615398  130544 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:59:52.615959  130544 main.go:141] libmachine: Using API Version  1
	I0617 10:59:52.615985  130544 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:59:52.616334  130544 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:59:52.616509  130544 main.go:141] libmachine: (ha-064080) Calling .GetMachineName
	I0617 10:59:52.616668  130544 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 10:59:52.616808  130544 start.go:159] libmachine.API.Create for "ha-064080" (driver="kvm2")
	I0617 10:59:52.616838  130544 client.go:168] LocalClient.Create starting
	I0617 10:59:52.616870  130544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem
	I0617 10:59:52.616902  130544 main.go:141] libmachine: Decoding PEM data...
	I0617 10:59:52.616917  130544 main.go:141] libmachine: Parsing certificate...
	I0617 10:59:52.616977  130544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem
	I0617 10:59:52.616994  130544 main.go:141] libmachine: Decoding PEM data...
	I0617 10:59:52.617007  130544 main.go:141] libmachine: Parsing certificate...
	I0617 10:59:52.617023  130544 main.go:141] libmachine: Running pre-create checks...
	I0617 10:59:52.617038  130544 main.go:141] libmachine: (ha-064080) Calling .PreCreateCheck
	I0617 10:59:52.617327  130544 main.go:141] libmachine: (ha-064080) Calling .GetConfigRaw
	I0617 10:59:52.617679  130544 main.go:141] libmachine: Creating machine...
	I0617 10:59:52.617692  130544 main.go:141] libmachine: (ha-064080) Calling .Create
	I0617 10:59:52.617804  130544 main.go:141] libmachine: (ha-064080) Creating KVM machine...
	I0617 10:59:52.618960  130544 main.go:141] libmachine: (ha-064080) DBG | found existing default KVM network
	I0617 10:59:52.619681  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 10:59:52.619532  130567 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015470}
	I0617 10:59:52.619699  130544 main.go:141] libmachine: (ha-064080) DBG | created network xml: 
	I0617 10:59:52.619708  130544 main.go:141] libmachine: (ha-064080) DBG | <network>
	I0617 10:59:52.619716  130544 main.go:141] libmachine: (ha-064080) DBG |   <name>mk-ha-064080</name>
	I0617 10:59:52.619725  130544 main.go:141] libmachine: (ha-064080) DBG |   <dns enable='no'/>
	I0617 10:59:52.619738  130544 main.go:141] libmachine: (ha-064080) DBG |   
	I0617 10:59:52.619753  130544 main.go:141] libmachine: (ha-064080) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0617 10:59:52.619764  130544 main.go:141] libmachine: (ha-064080) DBG |     <dhcp>
	I0617 10:59:52.619776  130544 main.go:141] libmachine: (ha-064080) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0617 10:59:52.619785  130544 main.go:141] libmachine: (ha-064080) DBG |     </dhcp>
	I0617 10:59:52.619802  130544 main.go:141] libmachine: (ha-064080) DBG |   </ip>
	I0617 10:59:52.619810  130544 main.go:141] libmachine: (ha-064080) DBG |   
	I0617 10:59:52.619823  130544 main.go:141] libmachine: (ha-064080) DBG | </network>
	I0617 10:59:52.619832  130544 main.go:141] libmachine: (ha-064080) DBG | 
	I0617 10:59:52.624606  130544 main.go:141] libmachine: (ha-064080) DBG | trying to create private KVM network mk-ha-064080 192.168.39.0/24...
	I0617 10:59:52.686986  130544 main.go:141] libmachine: (ha-064080) DBG | private KVM network mk-ha-064080 192.168.39.0/24 created
	I0617 10:59:52.687084  130544 main.go:141] libmachine: (ha-064080) Setting up store path in /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080 ...
	I0617 10:59:52.687126  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 10:59:52.686940  130567 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 10:59:52.687145  130544 main.go:141] libmachine: (ha-064080) Building disk image from file:///home/jenkins/minikube-integration/19084-112967/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso
	I0617 10:59:52.687173  130544 main.go:141] libmachine: (ha-064080) Downloading /home/jenkins/minikube-integration/19084-112967/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19084-112967/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso...
	I0617 10:59:52.941345  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 10:59:52.941201  130567 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa...
	I0617 10:59:53.118166  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 10:59:53.118039  130567 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/ha-064080.rawdisk...
	I0617 10:59:53.118196  130544 main.go:141] libmachine: (ha-064080) DBG | Writing magic tar header
	I0617 10:59:53.118209  130544 main.go:141] libmachine: (ha-064080) DBG | Writing SSH key tar header
	I0617 10:59:53.118217  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 10:59:53.118171  130567 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080 ...
	I0617 10:59:53.118297  130544 main.go:141] libmachine: (ha-064080) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080
	I0617 10:59:53.118335  130544 main.go:141] libmachine: (ha-064080) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080 (perms=drwx------)
	I0617 10:59:53.118347  130544 main.go:141] libmachine: (ha-064080) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube/machines
	I0617 10:59:53.118356  130544 main.go:141] libmachine: (ha-064080) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 10:59:53.118381  130544 main.go:141] libmachine: (ha-064080) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967
	I0617 10:59:53.118392  130544 main.go:141] libmachine: (ha-064080) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube/machines (perms=drwxr-xr-x)
	I0617 10:59:53.118405  130544 main.go:141] libmachine: (ha-064080) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube (perms=drwxr-xr-x)
	I0617 10:59:53.118416  130544 main.go:141] libmachine: (ha-064080) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0617 10:59:53.118427  130544 main.go:141] libmachine: (ha-064080) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967 (perms=drwxrwxr-x)
	I0617 10:59:53.118441  130544 main.go:141] libmachine: (ha-064080) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0617 10:59:53.118450  130544 main.go:141] libmachine: (ha-064080) DBG | Checking permissions on dir: /home/jenkins
	I0617 10:59:53.118455  130544 main.go:141] libmachine: (ha-064080) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0617 10:59:53.118465  130544 main.go:141] libmachine: (ha-064080) Creating domain...
	I0617 10:59:53.118474  130544 main.go:141] libmachine: (ha-064080) DBG | Checking permissions on dir: /home
	I0617 10:59:53.118497  130544 main.go:141] libmachine: (ha-064080) DBG | Skipping /home - not owner
	I0617 10:59:53.119533  130544 main.go:141] libmachine: (ha-064080) define libvirt domain using xml: 
	I0617 10:59:53.119557  130544 main.go:141] libmachine: (ha-064080) <domain type='kvm'>
	I0617 10:59:53.119564  130544 main.go:141] libmachine: (ha-064080)   <name>ha-064080</name>
	I0617 10:59:53.119569  130544 main.go:141] libmachine: (ha-064080)   <memory unit='MiB'>2200</memory>
	I0617 10:59:53.119574  130544 main.go:141] libmachine: (ha-064080)   <vcpu>2</vcpu>
	I0617 10:59:53.119584  130544 main.go:141] libmachine: (ha-064080)   <features>
	I0617 10:59:53.119614  130544 main.go:141] libmachine: (ha-064080)     <acpi/>
	I0617 10:59:53.119638  130544 main.go:141] libmachine: (ha-064080)     <apic/>
	I0617 10:59:53.119646  130544 main.go:141] libmachine: (ha-064080)     <pae/>
	I0617 10:59:53.119658  130544 main.go:141] libmachine: (ha-064080)     
	I0617 10:59:53.119664  130544 main.go:141] libmachine: (ha-064080)   </features>
	I0617 10:59:53.119673  130544 main.go:141] libmachine: (ha-064080)   <cpu mode='host-passthrough'>
	I0617 10:59:53.119680  130544 main.go:141] libmachine: (ha-064080)   
	I0617 10:59:53.119691  130544 main.go:141] libmachine: (ha-064080)   </cpu>
	I0617 10:59:53.119699  130544 main.go:141] libmachine: (ha-064080)   <os>
	I0617 10:59:53.119711  130544 main.go:141] libmachine: (ha-064080)     <type>hvm</type>
	I0617 10:59:53.119733  130544 main.go:141] libmachine: (ha-064080)     <boot dev='cdrom'/>
	I0617 10:59:53.119750  130544 main.go:141] libmachine: (ha-064080)     <boot dev='hd'/>
	I0617 10:59:53.119756  130544 main.go:141] libmachine: (ha-064080)     <bootmenu enable='no'/>
	I0617 10:59:53.119760  130544 main.go:141] libmachine: (ha-064080)   </os>
	I0617 10:59:53.119766  130544 main.go:141] libmachine: (ha-064080)   <devices>
	I0617 10:59:53.119772  130544 main.go:141] libmachine: (ha-064080)     <disk type='file' device='cdrom'>
	I0617 10:59:53.119780  130544 main.go:141] libmachine: (ha-064080)       <source file='/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/boot2docker.iso'/>
	I0617 10:59:53.119787  130544 main.go:141] libmachine: (ha-064080)       <target dev='hdc' bus='scsi'/>
	I0617 10:59:53.119793  130544 main.go:141] libmachine: (ha-064080)       <readonly/>
	I0617 10:59:53.119797  130544 main.go:141] libmachine: (ha-064080)     </disk>
	I0617 10:59:53.119803  130544 main.go:141] libmachine: (ha-064080)     <disk type='file' device='disk'>
	I0617 10:59:53.119809  130544 main.go:141] libmachine: (ha-064080)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0617 10:59:53.119820  130544 main.go:141] libmachine: (ha-064080)       <source file='/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/ha-064080.rawdisk'/>
	I0617 10:59:53.119825  130544 main.go:141] libmachine: (ha-064080)       <target dev='hda' bus='virtio'/>
	I0617 10:59:53.119839  130544 main.go:141] libmachine: (ha-064080)     </disk>
	I0617 10:59:53.119846  130544 main.go:141] libmachine: (ha-064080)     <interface type='network'>
	I0617 10:59:53.119852  130544 main.go:141] libmachine: (ha-064080)       <source network='mk-ha-064080'/>
	I0617 10:59:53.119862  130544 main.go:141] libmachine: (ha-064080)       <model type='virtio'/>
	I0617 10:59:53.119888  130544 main.go:141] libmachine: (ha-064080)     </interface>
	I0617 10:59:53.119907  130544 main.go:141] libmachine: (ha-064080)     <interface type='network'>
	I0617 10:59:53.119926  130544 main.go:141] libmachine: (ha-064080)       <source network='default'/>
	I0617 10:59:53.119937  130544 main.go:141] libmachine: (ha-064080)       <model type='virtio'/>
	I0617 10:59:53.119948  130544 main.go:141] libmachine: (ha-064080)     </interface>
	I0617 10:59:53.119955  130544 main.go:141] libmachine: (ha-064080)     <serial type='pty'>
	I0617 10:59:53.119966  130544 main.go:141] libmachine: (ha-064080)       <target port='0'/>
	I0617 10:59:53.119974  130544 main.go:141] libmachine: (ha-064080)     </serial>
	I0617 10:59:53.119981  130544 main.go:141] libmachine: (ha-064080)     <console type='pty'>
	I0617 10:59:53.119995  130544 main.go:141] libmachine: (ha-064080)       <target type='serial' port='0'/>
	I0617 10:59:53.120007  130544 main.go:141] libmachine: (ha-064080)     </console>
	I0617 10:59:53.120014  130544 main.go:141] libmachine: (ha-064080)     <rng model='virtio'>
	I0617 10:59:53.120027  130544 main.go:141] libmachine: (ha-064080)       <backend model='random'>/dev/random</backend>
	I0617 10:59:53.120034  130544 main.go:141] libmachine: (ha-064080)     </rng>
	I0617 10:59:53.120043  130544 main.go:141] libmachine: (ha-064080)     
	I0617 10:59:53.120052  130544 main.go:141] libmachine: (ha-064080)     
	I0617 10:59:53.120060  130544 main.go:141] libmachine: (ha-064080)   </devices>
	I0617 10:59:53.120076  130544 main.go:141] libmachine: (ha-064080) </domain>
	I0617 10:59:53.120089  130544 main.go:141] libmachine: (ha-064080) 
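	The block above is the complete libvirt domain XML that libmachine defines for ha-064080. For reference, the same definition could be loaded by hand with stock virsh (a sketch only; it assumes the XML has been saved to a file named ha-064080.xml, which is not something this run does):

	    # define the domain from the saved XML, boot it, and read back the live definition
	    virsh define ha-064080.xml
	    virsh start ha-064080
	    virsh dumpxml ha-064080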
	I0617 10:59:53.124410  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:78:87:13 in network default
	I0617 10:59:53.125015  130544 main.go:141] libmachine: (ha-064080) Ensuring networks are active...
	I0617 10:59:53.125038  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 10:59:53.125653  130544 main.go:141] libmachine: (ha-064080) Ensuring network default is active
	I0617 10:59:53.125926  130544 main.go:141] libmachine: (ha-064080) Ensuring network mk-ha-064080 is active
	I0617 10:59:53.126492  130544 main.go:141] libmachine: (ha-064080) Getting domain xml...
	I0617 10:59:53.127260  130544 main.go:141] libmachine: (ha-064080) Creating domain...
	I0617 10:59:54.293658  130544 main.go:141] libmachine: (ha-064080) Waiting to get IP...
	I0617 10:59:54.294561  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 10:59:54.294974  130544 main.go:141] libmachine: (ha-064080) DBG | unable to find current IP address of domain ha-064080 in network mk-ha-064080
	I0617 10:59:54.294999  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 10:59:54.294944  130567 retry.go:31] will retry after 218.859983ms: waiting for machine to come up
	I0617 10:59:54.515338  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 10:59:54.515862  130544 main.go:141] libmachine: (ha-064080) DBG | unable to find current IP address of domain ha-064080 in network mk-ha-064080
	I0617 10:59:54.515889  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 10:59:54.515829  130567 retry.go:31] will retry after 357.850554ms: waiting for machine to come up
	I0617 10:59:54.875426  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 10:59:54.875890  130544 main.go:141] libmachine: (ha-064080) DBG | unable to find current IP address of domain ha-064080 in network mk-ha-064080
	I0617 10:59:54.875913  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 10:59:54.875847  130567 retry.go:31] will retry after 313.568669ms: waiting for machine to come up
	I0617 10:59:55.191438  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 10:59:55.191919  130544 main.go:141] libmachine: (ha-064080) DBG | unable to find current IP address of domain ha-064080 in network mk-ha-064080
	I0617 10:59:55.191943  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 10:59:55.191873  130567 retry.go:31] will retry after 580.32994ms: waiting for machine to come up
	I0617 10:59:55.773570  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 10:59:55.774015  130544 main.go:141] libmachine: (ha-064080) DBG | unable to find current IP address of domain ha-064080 in network mk-ha-064080
	I0617 10:59:55.774040  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 10:59:55.773980  130567 retry.go:31] will retry after 642.58108ms: waiting for machine to come up
	I0617 10:59:56.417740  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 10:59:56.418140  130544 main.go:141] libmachine: (ha-064080) DBG | unable to find current IP address of domain ha-064080 in network mk-ha-064080
	I0617 10:59:56.418161  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 10:59:56.418094  130567 retry.go:31] will retry after 951.787863ms: waiting for machine to come up
	I0617 10:59:57.371206  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 10:59:57.371638  130544 main.go:141] libmachine: (ha-064080) DBG | unable to find current IP address of domain ha-064080 in network mk-ha-064080
	I0617 10:59:57.371682  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 10:59:57.371577  130567 retry.go:31] will retry after 1.042883837s: waiting for machine to come up
	I0617 10:59:58.416292  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 10:59:58.416658  130544 main.go:141] libmachine: (ha-064080) DBG | unable to find current IP address of domain ha-064080 in network mk-ha-064080
	I0617 10:59:58.416682  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 10:59:58.416625  130567 retry.go:31] will retry after 1.181180972s: waiting for machine to come up
	I0617 10:59:59.599938  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 10:59:59.600398  130544 main.go:141] libmachine: (ha-064080) DBG | unable to find current IP address of domain ha-064080 in network mk-ha-064080
	I0617 10:59:59.600428  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 10:59:59.600344  130567 retry.go:31] will retry after 1.538902549s: waiting for machine to come up
	I0617 11:00:01.141116  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:01.141638  130544 main.go:141] libmachine: (ha-064080) DBG | unable to find current IP address of domain ha-064080 in network mk-ha-064080
	I0617 11:00:01.141659  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 11:00:01.141589  130567 retry.go:31] will retry after 2.04090153s: waiting for machine to come up
	I0617 11:00:03.183660  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:03.184074  130544 main.go:141] libmachine: (ha-064080) DBG | unable to find current IP address of domain ha-064080 in network mk-ha-064080
	I0617 11:00:03.184096  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 11:00:03.184026  130567 retry.go:31] will retry after 2.563650396s: waiting for machine to come up
	I0617 11:00:05.748935  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:05.749403  130544 main.go:141] libmachine: (ha-064080) DBG | unable to find current IP address of domain ha-064080 in network mk-ha-064080
	I0617 11:00:05.749448  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 11:00:05.749353  130567 retry.go:31] will retry after 2.769265978s: waiting for machine to come up
	I0617 11:00:08.519638  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:08.520051  130544 main.go:141] libmachine: (ha-064080) DBG | unable to find current IP address of domain ha-064080 in network mk-ha-064080
	I0617 11:00:08.520089  130544 main.go:141] libmachine: (ha-064080) DBG | I0617 11:00:08.520014  130567 retry.go:31] will retry after 4.435386884s: waiting for machine to come up
	I0617 11:00:12.957378  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:12.957863  130544 main.go:141] libmachine: (ha-064080) Found IP for machine: 192.168.39.134
	I0617 11:00:12.957887  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has current primary IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:12.957912  130544 main.go:141] libmachine: (ha-064080) Reserving static IP address...
	I0617 11:00:12.958238  130544 main.go:141] libmachine: (ha-064080) DBG | unable to find host DHCP lease matching {name: "ha-064080", mac: "52:54:00:bd:48:a9", ip: "192.168.39.134"} in network mk-ha-064080
	I0617 11:00:13.031149  130544 main.go:141] libmachine: (ha-064080) DBG | Getting to WaitForSSH function...
	I0617 11:00:13.031179  130544 main.go:141] libmachine: (ha-064080) Reserved static IP address: 192.168.39.134
	I0617 11:00:13.031191  130544 main.go:141] libmachine: (ha-064080) Waiting for SSH to be available...
	I0617 11:00:13.033670  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.034026  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:13.034054  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.034177  130544 main.go:141] libmachine: (ha-064080) DBG | Using SSH client type: external
	I0617 11:00:13.034206  130544 main.go:141] libmachine: (ha-064080) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa (-rw-------)
	I0617 11:00:13.034270  130544 main.go:141] libmachine: (ha-064080) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.134 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 11:00:13.034297  130544 main.go:141] libmachine: (ha-064080) DBG | About to run SSH command:
	I0617 11:00:13.034311  130544 main.go:141] libmachine: (ha-064080) DBG | exit 0
	I0617 11:00:13.155280  130544 main.go:141] libmachine: (ha-064080) DBG | SSH cmd err, output: <nil>: 
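	The retry loop above polls the libvirt DHCP lease table until the guest appears, then probes SSH with an external client. A rough way to reproduce both checks by hand, reusing the address and key path reported in the log:

	    # inspect the lease libvirt handed out on the cluster network
	    virsh net-dhcp-leases mk-ha-064080
	    virsh domifaddr ha-064080 --source lease
	    # probe SSH the same way libmachine's external client does
	    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	      -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa \
	      docker@192.168.39.134 'exit 0'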
	I0617 11:00:13.155555  130544 main.go:141] libmachine: (ha-064080) KVM machine creation complete!
	I0617 11:00:13.155862  130544 main.go:141] libmachine: (ha-064080) Calling .GetConfigRaw
	I0617 11:00:13.156474  130544 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:00:13.156701  130544 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:00:13.156880  130544 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0617 11:00:13.156893  130544 main.go:141] libmachine: (ha-064080) Calling .GetState
	I0617 11:00:13.158051  130544 main.go:141] libmachine: Detecting operating system of created instance...
	I0617 11:00:13.158065  130544 main.go:141] libmachine: Waiting for SSH to be available...
	I0617 11:00:13.158076  130544 main.go:141] libmachine: Getting to WaitForSSH function...
	I0617 11:00:13.158085  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:00:13.160281  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.160597  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:13.160619  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.160798  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:00:13.160981  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:13.161151  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:13.161309  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:00:13.161453  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:00:13.161673  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0617 11:00:13.161688  130544 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0617 11:00:13.258476  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 11:00:13.258500  130544 main.go:141] libmachine: Detecting the provisioner...
	I0617 11:00:13.258512  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:00:13.261174  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.261524  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:13.261548  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.261726  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:00:13.261928  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:13.262094  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:13.262253  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:00:13.262477  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:00:13.262664  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0617 11:00:13.262678  130544 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0617 11:00:13.359893  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0617 11:00:13.359971  130544 main.go:141] libmachine: found compatible host: buildroot
	I0617 11:00:13.359984  130544 main.go:141] libmachine: Provisioning with buildroot...
	I0617 11:00:13.359995  130544 main.go:141] libmachine: (ha-064080) Calling .GetMachineName
	I0617 11:00:13.360286  130544 buildroot.go:166] provisioning hostname "ha-064080"
	I0617 11:00:13.360311  130544 main.go:141] libmachine: (ha-064080) Calling .GetMachineName
	I0617 11:00:13.360509  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:00:13.363230  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.363608  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:13.363634  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.363765  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:00:13.363963  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:13.364125  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:13.364285  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:00:13.364430  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:00:13.364623  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0617 11:00:13.364642  130544 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-064080 && echo "ha-064080" | sudo tee /etc/hostname
	I0617 11:00:13.473612  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-064080
	
	I0617 11:00:13.473642  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:00:13.476514  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.476832  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:13.476860  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.476984  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:00:13.477179  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:13.477339  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:13.477476  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:00:13.477665  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:00:13.477894  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0617 11:00:13.477922  130544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-064080' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-064080/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-064080' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 11:00:13.583510  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
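	If the hostname step needed manual verification (the log only records empty command output), something like the following could be run over the same SSH session; this is a sketch, not part of the recorded run:

	    # confirm the hostname change and the 127.0.1.1 mapping landed
	    hostname
	    grep -n 'ha-064080' /etc/hosts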
	I0617 11:00:13.583542  130544 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 11:00:13.583567  130544 buildroot.go:174] setting up certificates
	I0617 11:00:13.583582  130544 provision.go:84] configureAuth start
	I0617 11:00:13.583594  130544 main.go:141] libmachine: (ha-064080) Calling .GetMachineName
	I0617 11:00:13.583925  130544 main.go:141] libmachine: (ha-064080) Calling .GetIP
	I0617 11:00:13.586430  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.586719  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:13.586751  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.586923  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:00:13.589132  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.589469  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:13.589492  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.589588  130544 provision.go:143] copyHostCerts
	I0617 11:00:13.589618  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 11:00:13.589667  130544 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 11:00:13.589679  130544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 11:00:13.589754  130544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 11:00:13.589865  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 11:00:13.589901  130544 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 11:00:13.589909  130544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 11:00:13.589952  130544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 11:00:13.590013  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 11:00:13.590037  130544 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 11:00:13.590046  130544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 11:00:13.590079  130544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 11:00:13.590148  130544 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.ha-064080 san=[127.0.0.1 192.168.39.134 ha-064080 localhost minikube]
	I0617 11:00:13.791780  130544 provision.go:177] copyRemoteCerts
	I0617 11:00:13.791852  130544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 11:00:13.791882  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:00:13.794250  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.794696  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:13.794727  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.794936  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:00:13.795138  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:13.795286  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:00:13.795412  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:00:13.873208  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0617 11:00:13.873276  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 11:00:13.896893  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0617 11:00:13.896955  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0617 11:00:13.919537  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0617 11:00:13.919597  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
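	The three scp calls above push the CA and the freshly generated server certificate to the guest. The server certificate can also be inspected on the build host with plain openssl (a sketch; it assumes openssl is available on the Jenkins host):

	    # show the subject and validity window of the server cert generated at 11:00:13
	    openssl x509 -noout -subject -dates \
	      -in /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem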
	I0617 11:00:13.946128  130544 provision.go:87] duration metric: took 362.536623ms to configureAuth
	I0617 11:00:13.946155  130544 buildroot.go:189] setting minikube options for container-runtime
	I0617 11:00:13.946339  130544 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:00:13.946431  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:00:13.949013  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.949339  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:13.949375  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:13.949562  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:00:13.949769  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:13.949944  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:13.950096  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:00:13.950258  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:00:13.950456  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0617 11:00:13.950478  130544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 11:00:14.198305  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
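	The SSH command above writes CRIO_MINIKUBE_OPTIONS to /etc/sysconfig/crio.minikube and restarts CRI-O. A sketch of how to confirm the option is actually wired into the service, assuming (as the minikube ISO is expected to do) that the crio unit pulls the file in via EnvironmentFile:

	    # on the guest: check the environment file and how the unit references it
	    cat /etc/sysconfig/crio.minikube
	    systemctl cat crio.service | grep -n -i 'EnvironmentFile\|ExecStart'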
	I0617 11:00:14.198334  130544 main.go:141] libmachine: Checking connection to Docker...
	I0617 11:00:14.198346  130544 main.go:141] libmachine: (ha-064080) Calling .GetURL
	I0617 11:00:14.199776  130544 main.go:141] libmachine: (ha-064080) DBG | Using libvirt version 6000000
	I0617 11:00:14.202002  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:14.202321  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:14.202350  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:14.202499  130544 main.go:141] libmachine: Docker is up and running!
	I0617 11:00:14.202520  130544 main.go:141] libmachine: Reticulating splines...
	I0617 11:00:14.202528  130544 client.go:171] duration metric: took 21.585680233s to LocalClient.Create
	I0617 11:00:14.202554  130544 start.go:167] duration metric: took 21.58574405s to libmachine.API.Create "ha-064080"
	I0617 11:00:14.202568  130544 start.go:293] postStartSetup for "ha-064080" (driver="kvm2")
	I0617 11:00:14.202584  130544 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 11:00:14.202605  130544 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:00:14.202851  130544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 11:00:14.202880  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:00:14.204727  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:14.204994  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:14.205030  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:14.205109  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:00:14.205278  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:14.205465  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:00:14.205627  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:00:14.285511  130544 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 11:00:14.289676  130544 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 11:00:14.289694  130544 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 11:00:14.289743  130544 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 11:00:14.289821  130544 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 11:00:14.289839  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> /etc/ssl/certs/1201742.pem
	I0617 11:00:14.289934  130544 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 11:00:14.299105  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 11:00:14.323422  130544 start.go:296] duration metric: took 120.839325ms for postStartSetup
	I0617 11:00:14.323506  130544 main.go:141] libmachine: (ha-064080) Calling .GetConfigRaw
	I0617 11:00:14.324016  130544 main.go:141] libmachine: (ha-064080) Calling .GetIP
	I0617 11:00:14.326609  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:14.326944  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:14.326977  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:14.327221  130544 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/config.json ...
	I0617 11:00:14.327420  130544 start.go:128] duration metric: took 21.728142334s to createHost
	I0617 11:00:14.327478  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:00:14.329348  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:14.329643  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:14.329668  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:14.329772  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:00:14.329953  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:14.330100  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:14.330220  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:00:14.330338  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:00:14.330519  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0617 11:00:14.330530  130544 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0617 11:00:14.427834  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718622014.396175350
	
	I0617 11:00:14.427857  130544 fix.go:216] guest clock: 1718622014.396175350
	I0617 11:00:14.427866  130544 fix.go:229] Guest: 2024-06-17 11:00:14.39617535 +0000 UTC Remote: 2024-06-17 11:00:14.327433545 +0000 UTC m=+21.834352548 (delta=68.741805ms)
	I0617 11:00:14.427907  130544 fix.go:200] guest clock delta is within tolerance: 68.741805ms
	I0617 11:00:14.427914  130544 start.go:83] releasing machines lock for "ha-064080", held for 21.828711146s
	I0617 11:00:14.427937  130544 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:00:14.428182  130544 main.go:141] libmachine: (ha-064080) Calling .GetIP
	I0617 11:00:14.430657  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:14.431015  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:14.431041  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:14.431234  130544 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:00:14.431678  130544 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:00:14.431853  130544 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:00:14.431931  130544 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 11:00:14.431982  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:00:14.432038  130544 ssh_runner.go:195] Run: cat /version.json
	I0617 11:00:14.432054  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:00:14.434678  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:14.434705  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:14.435070  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:14.435090  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:14.435107  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:14.435179  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:14.435287  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:00:14.435428  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:00:14.435501  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:14.435622  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:14.435665  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:00:14.435707  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:00:14.435792  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:00:14.435852  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:00:14.536390  130544 ssh_runner.go:195] Run: systemctl --version
	I0617 11:00:14.542037  130544 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 11:00:14.701371  130544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 11:00:14.707793  130544 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 11:00:14.707860  130544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 11:00:14.724194  130544 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 11:00:14.724218  130544 start.go:494] detecting cgroup driver to use...
	I0617 11:00:14.724283  130544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 11:00:14.740006  130544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 11:00:14.753023  130544 docker.go:217] disabling cri-docker service (if available) ...
	I0617 11:00:14.753081  130544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 11:00:14.766015  130544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 11:00:14.779269  130544 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 11:00:14.889108  130544 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 11:00:15.049159  130544 docker.go:233] disabling docker service ...
	I0617 11:00:15.049238  130544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 11:00:15.063310  130544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 11:00:15.076558  130544 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 11:00:15.201162  130544 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 11:00:15.327111  130544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 11:00:15.340358  130544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 11:00:15.357998  130544 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0617 11:00:15.358058  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:00:15.367989  130544 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 11:00:15.368042  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:00:15.378663  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:00:15.388920  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:00:15.399252  130544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 11:00:15.409785  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:00:15.420214  130544 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:00:15.436592  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:00:15.446616  130544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 11:00:15.455878  130544 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 11:00:15.455928  130544 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 11:00:15.469051  130544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 11:00:15.478622  130544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:00:15.604029  130544 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0617 11:00:15.733069  130544 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 11:00:15.733146  130544 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 11:00:15.737701  130544 start.go:562] Will wait 60s for crictl version
	I0617 11:00:15.737744  130544 ssh_runner.go:195] Run: which crictl
	I0617 11:00:15.741699  130544 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 11:00:15.786509  130544 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 11:00:15.786584  130544 ssh_runner.go:195] Run: crio --version
	I0617 11:00:15.815294  130544 ssh_runner.go:195] Run: crio --version
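	The sequence of sed edits at 11:00:15 rewrote the pause image, cgroup manager, conmon cgroup and default sysctls in the CRI-O drop-in. A quick spot-check of those edits on the guest (not part of the recorded run):

	    # verify the drop-in edits and that crio came back after the restart
	    grep -nE 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls' /etc/crio/crio.conf.d/02-crio.conf
	    sudo systemctl is-active crio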
	I0617 11:00:15.846062  130544 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0617 11:00:15.847285  130544 main.go:141] libmachine: (ha-064080) Calling .GetIP
	I0617 11:00:15.850123  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:15.850409  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:15.850435  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:15.850703  130544 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0617 11:00:15.854686  130544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 11:00:15.867493  130544 kubeadm.go:877] updating cluster {Name:ha-064080 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-064080 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 11:00:15.867611  130544 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 11:00:15.867674  130544 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 11:00:15.900267  130544 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0617 11:00:15.900336  130544 ssh_runner.go:195] Run: which lz4
	I0617 11:00:15.904085  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0617 11:00:15.904172  130544 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0617 11:00:15.908327  130544 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0617 11:00:15.908351  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0617 11:00:17.280633  130544 crio.go:462] duration metric: took 1.376487827s to copy over tarball
	I0617 11:00:17.280705  130544 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0617 11:00:19.342719  130544 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.061986213s)
	I0617 11:00:19.342745  130544 crio.go:469] duration metric: took 2.062082096s to extract the tarball
	I0617 11:00:19.342754  130544 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0617 11:00:19.380180  130544 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 11:00:19.427157  130544 crio.go:514] all images are preloaded for cri-o runtime.
	I0617 11:00:19.427183  130544 cache_images.go:84] Images are preloaded, skipping loading
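	After the preload tarball is extracted into /var, crictl sees the cached control-plane images, so no pulls are needed. A sketch of how to list them on the guest:

	    # list the images populated by the preload tarball
	    sudo crictl images | grep -E 'registry.k8s.io/(kube-|etcd|coredns|pause)'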
	I0617 11:00:19.427191  130544 kubeadm.go:928] updating node { 192.168.39.134 8443 v1.30.1 crio true true} ...
	I0617 11:00:19.427309  130544 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-064080 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-064080 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
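	The unit fragment above is written as a systemd drop-in for kubelet (minikube places it under /etc/systemd/system/kubelet.service.d/; the exact file name is not shown in this log). A sketch of reloading and inspecting the merged unit on the guest:

	    # pick up the drop-in and view the effective kubelet unit
	    sudo systemctl daemon-reload
	    systemctl cat kubelet.service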
	I0617 11:00:19.427377  130544 ssh_runner.go:195] Run: crio config
	I0617 11:00:19.474982  130544 cni.go:84] Creating CNI manager for ""
	I0617 11:00:19.475005  130544 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0617 11:00:19.475013  130544 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 11:00:19.475037  130544 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.134 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-064080 NodeName:ha-064080 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.134"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0617 11:00:19.475169  130544 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.134
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-064080"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.134
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.134"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
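The config rendered above is written out later in this log as /var/tmp/minikube/kubeadm.yaml.new and then copied to /var/tmp/minikube/kubeadm.yaml before init. A quick way to sanity-check a config of this shape, assuming the kubeadm build on the node supports the 'config validate' subcommand (present in v1.30.x), is:

    # Validate the rendered file against kubeadm's API types (sketch; path taken from this log).
    sudo /var/lib/minikube/binaries/v1.30.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
    # Print the upstream defaults for comparison.
    /var/lib/minikube/binaries/v1.30.1/kubeadm config print init-defaults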
	
	I0617 11:00:19.475194  130544 kube-vip.go:115] generating kube-vip config ...
	I0617 11:00:19.475236  130544 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0617 11:00:19.492093  130544 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0617 11:00:19.492199  130544 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
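With this static Pod in place, kube-vip advertises 192.168.39.254 via ARP and load-balances the control plane on port 8443. Once a leader is elected, reachability of the VIP can be probed without credentials, assuming the default system:public-info-viewer RBAC binding is intact:

    # The HA VIP should answer on the apiserver port once kube-vip holds the lease.
    curl -k https://192.168.39.254:8443/healthz
    curl -k https://192.168.39.254:8443/version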
	I0617 11:00:19.492255  130544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0617 11:00:19.502761  130544 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 11:00:19.502844  130544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0617 11:00:19.512878  130544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0617 11:00:19.529967  130544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 11:00:19.547192  130544 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0617 11:00:19.564583  130544 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0617 11:00:19.581601  130544 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0617 11:00:19.585576  130544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 11:00:19.598182  130544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:00:19.718357  130544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 11:00:19.736665  130544 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080 for IP: 192.168.39.134
	I0617 11:00:19.736690  130544 certs.go:194] generating shared ca certs ...
	I0617 11:00:19.736705  130544 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:00:19.736861  130544 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 11:00:19.736897  130544 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 11:00:19.736904  130544 certs.go:256] generating profile certs ...
	I0617 11:00:19.736966  130544 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/client.key
	I0617 11:00:19.736980  130544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/client.crt with IP's: []
	I0617 11:00:19.798369  130544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/client.crt ...
	I0617 11:00:19.798400  130544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/client.crt: {Name:mk750201a7aa370c01c81c107eedf9ca2c411f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:00:19.798599  130544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/client.key ...
	I0617 11:00:19.798616  130544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/client.key: {Name:mk0346023acf5db2af27e34311b2764dba2a9d75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:00:19.798704  130544 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key.0218256d
	I0617 11:00:19.798723  130544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt.0218256d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.134 192.168.39.254]
	I0617 11:00:19.945551  130544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt.0218256d ...
	I0617 11:00:19.945585  130544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt.0218256d: {Name:mk42b1a801de6f8d9ad4890f002b4c0a7613c512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:00:19.945744  130544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key.0218256d ...
	I0617 11:00:19.945757  130544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key.0218256d: {Name:mk3721bc4c71e5ca11dd9b219e77fb6f8b99982c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:00:19.945830  130544 certs.go:381] copying /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt.0218256d -> /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt
	I0617 11:00:19.945922  130544 certs.go:385] copying /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key.0218256d -> /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key
	I0617 11:00:19.945979  130544 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.key
	I0617 11:00:19.945998  130544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.crt with IP's: []
	I0617 11:00:20.081905  130544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.crt ...
	I0617 11:00:20.081937  130544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.crt: {Name:mk9b37d6c9d0db0266803d48f0885ded54b27bd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:00:20.082105  130544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.key ...
	I0617 11:00:20.082116  130544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.key: {Name:mkaf3993839ea939fd426b9486305c8a43e19b83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:00:20.082192  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0617 11:00:20.082208  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0617 11:00:20.082219  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0617 11:00:20.082231  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0617 11:00:20.082244  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0617 11:00:20.082253  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0617 11:00:20.082265  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0617 11:00:20.082276  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0617 11:00:20.082323  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 11:00:20.082360  130544 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 11:00:20.082373  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 11:00:20.082399  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 11:00:20.082421  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 11:00:20.082443  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 11:00:20.082478  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 11:00:20.082503  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> /usr/share/ca-certificates/1201742.pem
	I0617 11:00:20.082516  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:00:20.082528  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem -> /usr/share/ca-certificates/120174.pem
	I0617 11:00:20.083080  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 11:00:20.109804  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 11:00:20.134161  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 11:00:20.158133  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 11:00:20.182146  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0617 11:00:20.205947  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 11:00:20.230427  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 11:00:20.253949  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0617 11:00:20.278117  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 11:00:20.302374  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 11:00:20.325844  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 11:00:20.349600  130544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 11:00:20.366226  130544 ssh_runner.go:195] Run: openssl version
	I0617 11:00:20.371980  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 11:00:20.382844  130544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 11:00:20.387478  130544 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 11:00:20.387532  130544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 11:00:20.393404  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 11:00:20.404325  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 11:00:20.415333  130544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:00:20.420040  130544 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:00:20.420098  130544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:00:20.425813  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 11:00:20.436916  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 11:00:20.447945  130544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 11:00:20.452480  130544 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 11:00:20.452534  130544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 11:00:20.458293  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
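The "3ec20f2e.0", "b5213941.0" and "51391683.0" names above are OpenSSL subject hashes of the corresponding certificates, which is why each symlink is preceded by an 'openssl x509 -hash' run. A generic sketch of the same pattern, assuming the PEM files live under /usr/share/ca-certificates as in this log:

    # Link every CA cert under its subject-hash name so OpenSSL-based clients can find it.
    for c in /usr/share/ca-certificates/*.pem; do
      sudo ln -fs "$c" /etc/ssl/certs/"$(openssl x509 -hash -noout -in "$c")".0
    done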
	I0617 11:00:20.469064  130544 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 11:00:20.473497  130544 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0617 11:00:20.473553  130544 kubeadm.go:391] StartCluster: {Name:ha-064080 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-064080 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:00:20.473627  130544 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 11:00:20.473689  130544 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 11:00:20.516425  130544 cri.go:89] found id: ""
	I0617 11:00:20.516492  130544 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0617 11:00:20.530239  130544 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 11:00:20.549972  130544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 11:00:20.561360  130544 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 11:00:20.561377  130544 kubeadm.go:156] found existing configuration files:
	
	I0617 11:00:20.561424  130544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 11:00:20.571038  130544 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 11:00:20.571105  130544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 11:00:20.580915  130544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 11:00:20.590177  130544 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 11:00:20.590256  130544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 11:00:20.603031  130544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 11:00:20.612822  130544 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 11:00:20.612883  130544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 11:00:20.622808  130544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 11:00:20.632378  130544 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 11:00:20.632452  130544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 11:00:20.642439  130544 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0617 11:00:20.877231  130544 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
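This preflight warning is harmless here because minikube already started the kubelet itself a few lines earlier; on a host that should survive reboots, the suggested fix would simply be:

    # Persist the kubelet unit across reboots, as the warning suggests.
    sudo systemctl enable kubelet.service
    # or enable and (re)start in one step:
    sudo systemctl enable --now kubelet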
	I0617 11:00:32.558734  130544 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0617 11:00:32.558836  130544 kubeadm.go:309] [preflight] Running pre-flight checks
	I0617 11:00:32.558965  130544 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0617 11:00:32.559112  130544 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0617 11:00:32.559261  130544 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0617 11:00:32.559359  130544 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0617 11:00:32.560814  130544 out.go:204]   - Generating certificates and keys ...
	I0617 11:00:32.560880  130544 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0617 11:00:32.560938  130544 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0617 11:00:32.560997  130544 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0617 11:00:32.561048  130544 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0617 11:00:32.561154  130544 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0617 11:00:32.561208  130544 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0617 11:00:32.561265  130544 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0617 11:00:32.561379  130544 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-064080 localhost] and IPs [192.168.39.134 127.0.0.1 ::1]
	I0617 11:00:32.561432  130544 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0617 11:00:32.561587  130544 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-064080 localhost] and IPs [192.168.39.134 127.0.0.1 ::1]
	I0617 11:00:32.561680  130544 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0617 11:00:32.561779  130544 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0617 11:00:32.561833  130544 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0617 11:00:32.561881  130544 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0617 11:00:32.561928  130544 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0617 11:00:32.561976  130544 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0617 11:00:32.562021  130544 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0617 11:00:32.562081  130544 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0617 11:00:32.562157  130544 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0617 11:00:32.562229  130544 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0617 11:00:32.562285  130544 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0617 11:00:32.563439  130544 out.go:204]   - Booting up control plane ...
	I0617 11:00:32.563569  130544 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0617 11:00:32.563676  130544 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0617 11:00:32.563778  130544 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0617 11:00:32.563901  130544 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0617 11:00:32.564034  130544 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0617 11:00:32.564097  130544 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0617 11:00:32.564252  130544 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0617 11:00:32.564347  130544 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0617 11:00:32.564430  130544 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.527316ms
	I0617 11:00:32.564528  130544 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0617 11:00:32.564619  130544 kubeadm.go:309] [api-check] The API server is healthy after 6.122928863s
	I0617 11:00:32.564755  130544 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0617 11:00:32.564911  130544 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0617 11:00:32.564993  130544 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0617 11:00:32.565173  130544 kubeadm.go:309] [mark-control-plane] Marking the node ha-064080 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0617 11:00:32.565252  130544 kubeadm.go:309] [bootstrap-token] Using token: wxs5l2.6ag2rr3bbqveig7f
	I0617 11:00:32.566527  130544 out.go:204]   - Configuring RBAC rules ...
	I0617 11:00:32.566637  130544 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0617 11:00:32.566731  130544 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0617 11:00:32.566881  130544 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0617 11:00:32.567052  130544 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0617 11:00:32.567181  130544 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0617 11:00:32.567288  130544 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0617 11:00:32.567433  130544 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0617 11:00:32.567511  130544 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0617 11:00:32.567588  130544 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0617 11:00:32.567603  130544 kubeadm.go:309] 
	I0617 11:00:32.567684  130544 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0617 11:00:32.567694  130544 kubeadm.go:309] 
	I0617 11:00:32.567827  130544 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0617 11:00:32.567844  130544 kubeadm.go:309] 
	I0617 11:00:32.567892  130544 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0617 11:00:32.567980  130544 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0617 11:00:32.568059  130544 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0617 11:00:32.568068  130544 kubeadm.go:309] 
	I0617 11:00:32.568142  130544 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0617 11:00:32.568152  130544 kubeadm.go:309] 
	I0617 11:00:32.568217  130544 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0617 11:00:32.568226  130544 kubeadm.go:309] 
	I0617 11:00:32.568309  130544 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0617 11:00:32.568406  130544 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0617 11:00:32.568495  130544 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0617 11:00:32.568502  130544 kubeadm.go:309] 
	I0617 11:00:32.568610  130544 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0617 11:00:32.568680  130544 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0617 11:00:32.568689  130544 kubeadm.go:309] 
	I0617 11:00:32.568761  130544 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token wxs5l2.6ag2rr3bbqveig7f \
	I0617 11:00:32.568857  130544 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a750c130b3df91ed6d57229f5a5d5a2ee0acd56a757f499599f368bc07dbf207 \
	I0617 11:00:32.568876  130544 kubeadm.go:309] 	--control-plane 
	I0617 11:00:32.568882  130544 kubeadm.go:309] 
	I0617 11:00:32.568949  130544 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0617 11:00:32.568955  130544 kubeadm.go:309] 
	I0617 11:00:32.569021  130544 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token wxs5l2.6ag2rr3bbqveig7f \
	I0617 11:00:32.569121  130544 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a750c130b3df91ed6d57229f5a5d5a2ee0acd56a757f499599f368bc07dbf207 
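If the join output above is lost, the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA using the standard kubeadm recipe (sketch; the CA path is minikube's certs directory from this log, and an RSA CA key is assumed):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'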
	I0617 11:00:32.569141  130544 cni.go:84] Creating CNI manager for ""
	I0617 11:00:32.569148  130544 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0617 11:00:32.570410  130544 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0617 11:00:32.571516  130544 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0617 11:00:32.577463  130544 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0617 11:00:32.577481  130544 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0617 11:00:32.597035  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
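The manifest applied above deploys kindnet as a DaemonSet in kube-system. Before workloads are scheduled, its rollout can be confirmed with the same kubectl binary and kubeconfig the test uses (sketch; no assumptions beyond the paths already shown in this log):

    sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get daemonsets
    sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get pods -o wide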
	I0617 11:00:32.980805  130544 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0617 11:00:32.980891  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:32.980934  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-064080 minikube.k8s.io/updated_at=2024_06_17T11_00_32_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6 minikube.k8s.io/name=ha-064080 minikube.k8s.io/primary=true
	I0617 11:00:33.110457  130544 ops.go:34] apiserver oom_adj: -16
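The two kubectl runs above grant cluster-admin to the kube-system default ServiceAccount and label the node as minikube's primary control plane. Both results can be verified with the same binary and kubeconfig (sketch; names taken from the commands above):

    sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get clusterrolebinding minikube-rbac -o wide
    sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get node ha-064080 --show-labels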
	I0617 11:00:33.125594  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:33.626142  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:34.125677  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:34.626603  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:35.126092  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:35.626262  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:36.126436  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:36.626174  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:37.125982  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:37.626388  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:38.125755  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:38.626674  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:39.126486  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:39.625651  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:40.126254  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:40.626057  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:41.126189  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:41.625956  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:42.126148  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:42.626217  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:43.125903  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:43.626135  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:44.126473  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:44.626010  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:00:44.729152  130544 kubeadm.go:1107] duration metric: took 11.748330999s to wait for elevateKubeSystemPrivileges
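The repeated 'get sa default' calls above are a polling loop while elevateKubeSystemPrivileges waits for the controller-manager to create the namespace's default ServiceAccount, which only appears asynchronously after the token and ServiceAccount controllers start. Each iteration is equivalent to:

    # Succeeds once the default ServiceAccount exists; the loop above retries until then.
    sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n default get serviceaccount default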
	W0617 11:00:44.729203  130544 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0617 11:00:44.729215  130544 kubeadm.go:393] duration metric: took 24.255665076s to StartCluster
	I0617 11:00:44.729238  130544 settings.go:142] acquiring lock: {Name:mkf6da6d5dcdf32cef469c2b75da17d11fa1e39e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:00:44.729318  130544 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 11:00:44.730242  130544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/kubeconfig: {Name:mkf81bd1831c0194f784e5c176b265c5061bea5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:00:44.730435  130544 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 11:00:44.730458  130544 start.go:240] waiting for startup goroutines ...
	I0617 11:00:44.730459  130544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0617 11:00:44.730473  130544 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0617 11:00:44.730532  130544 addons.go:69] Setting storage-provisioner=true in profile "ha-064080"
	I0617 11:00:44.730569  130544 addons.go:234] Setting addon storage-provisioner=true in "ha-064080"
	I0617 11:00:44.730577  130544 addons.go:69] Setting default-storageclass=true in profile "ha-064080"
	I0617 11:00:44.730603  130544 host.go:66] Checking if "ha-064080" exists ...
	I0617 11:00:44.730623  130544 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-064080"
	I0617 11:00:44.730666  130544 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:00:44.730968  130544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:00:44.730991  130544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:00:44.731010  130544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:00:44.731013  130544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:00:44.745761  130544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45613
	I0617 11:00:44.745763  130544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43521
	I0617 11:00:44.746265  130544 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:00:44.746323  130544 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:00:44.746903  130544 main.go:141] libmachine: Using API Version  1
	I0617 11:00:44.746934  130544 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:00:44.747036  130544 main.go:141] libmachine: Using API Version  1
	I0617 11:00:44.747065  130544 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:00:44.747282  130544 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:00:44.747490  130544 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:00:44.747658  130544 main.go:141] libmachine: (ha-064080) Calling .GetState
	I0617 11:00:44.747881  130544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:00:44.747933  130544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:00:44.749785  130544 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 11:00:44.750048  130544 kapi.go:59] client config for ha-064080: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/client.crt", KeyFile:"/home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/client.key", CAFile:"/home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfaf80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0617 11:00:44.750526  130544 cert_rotation.go:137] Starting client certificate rotation controller
	I0617 11:00:44.750725  130544 addons.go:234] Setting addon default-storageclass=true in "ha-064080"
	I0617 11:00:44.750761  130544 host.go:66] Checking if "ha-064080" exists ...
	I0617 11:00:44.751035  130544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:00:44.751075  130544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:00:44.762874  130544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46015
	I0617 11:00:44.763321  130544 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:00:44.763884  130544 main.go:141] libmachine: Using API Version  1
	I0617 11:00:44.763909  130544 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:00:44.764325  130544 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:00:44.764743  130544 main.go:141] libmachine: (ha-064080) Calling .GetState
	I0617 11:00:44.765008  130544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39529
	I0617 11:00:44.765558  130544 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:00:44.766117  130544 main.go:141] libmachine: Using API Version  1
	I0617 11:00:44.766145  130544 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:00:44.766417  130544 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:00:44.766555  130544 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:00:44.768596  130544 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 11:00:44.767060  130544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:00:44.769781  130544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:00:44.769861  130544 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 11:00:44.769880  130544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0617 11:00:44.769894  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:00:44.772698  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:44.773096  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:44.773123  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:44.773374  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:00:44.773564  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:44.773751  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:00:44.773927  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:00:44.784884  130544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42551
	I0617 11:00:44.785277  130544 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:00:44.785703  130544 main.go:141] libmachine: Using API Version  1
	I0617 11:00:44.785729  130544 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:00:44.786106  130544 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:00:44.786273  130544 main.go:141] libmachine: (ha-064080) Calling .GetState
	I0617 11:00:44.787624  130544 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:00:44.787831  130544 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0617 11:00:44.787848  130544 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0617 11:00:44.787868  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:00:44.790558  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:44.790960  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:00:44.790986  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:00:44.791159  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:00:44.791348  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:00:44.791524  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:00:44.791670  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:00:44.852414  130544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0617 11:00:44.911145  130544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0617 11:00:44.926112  130544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 11:00:45.306029  130544 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
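The sed pipeline a few lines above splices a 'hosts' block mapping host.minikube.internal to 192.168.39.1 into the CoreDNS Corefile. The patched result can be inspected directly (sketch; same binary and kubeconfig as used by the test):

    sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'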
	I0617 11:00:45.306123  130544 main.go:141] libmachine: Making call to close driver server
	I0617 11:00:45.306143  130544 main.go:141] libmachine: (ha-064080) Calling .Close
	I0617 11:00:45.306413  130544 main.go:141] libmachine: Successfully made call to close driver server
	I0617 11:00:45.306432  130544 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 11:00:45.306448  130544 main.go:141] libmachine: Making call to close driver server
	I0617 11:00:45.306457  130544 main.go:141] libmachine: (ha-064080) Calling .Close
	I0617 11:00:45.306659  130544 main.go:141] libmachine: Successfully made call to close driver server
	I0617 11:00:45.306674  130544 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 11:00:45.306694  130544 main.go:141] libmachine: (ha-064080) DBG | Closing plugin on server side
	I0617 11:00:45.306819  130544 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0617 11:00:45.306835  130544 round_trippers.go:469] Request Headers:
	I0617 11:00:45.306847  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:00:45.306852  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:00:45.318232  130544 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0617 11:00:45.318811  130544 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0617 11:00:45.318824  130544 round_trippers.go:469] Request Headers:
	I0617 11:00:45.318832  130544 round_trippers.go:473]     Content-Type: application/json
	I0617 11:00:45.318836  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:00:45.318839  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:00:45.331975  130544 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0617 11:00:45.332472  130544 main.go:141] libmachine: Making call to close driver server
	I0617 11:00:45.332486  130544 main.go:141] libmachine: (ha-064080) Calling .Close
	I0617 11:00:45.332782  130544 main.go:141] libmachine: Successfully made call to close driver server
	I0617 11:00:45.332800  130544 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 11:00:45.332820  130544 main.go:141] libmachine: (ha-064080) DBG | Closing plugin on server side
	I0617 11:00:45.502038  130544 main.go:141] libmachine: Making call to close driver server
	I0617 11:00:45.502069  130544 main.go:141] libmachine: (ha-064080) Calling .Close
	I0617 11:00:45.502384  130544 main.go:141] libmachine: Successfully made call to close driver server
	I0617 11:00:45.502403  130544 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 11:00:45.502418  130544 main.go:141] libmachine: Making call to close driver server
	I0617 11:00:45.502426  130544 main.go:141] libmachine: (ha-064080) Calling .Close
	I0617 11:00:45.502692  130544 main.go:141] libmachine: Successfully made call to close driver server
	I0617 11:00:45.502708  130544 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 11:00:45.502727  130544 main.go:141] libmachine: (ha-064080) DBG | Closing plugin on server side
	I0617 11:00:45.505543  130544 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0617 11:00:45.506915  130544 addons.go:510] duration metric: took 776.432264ms for enable addons: enabled=[default-storageclass storage-provisioner]
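Addon state for the profile can also be confirmed from the host with the minikube binary used elsewhere in this report (sketch; binary path and profile name as in this run):

    out/minikube-linux-amd64 -p ha-064080 addons list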
	I0617 11:00:45.506950  130544 start.go:245] waiting for cluster config update ...
	I0617 11:00:45.506963  130544 start.go:254] writing updated cluster config ...
	I0617 11:00:45.508748  130544 out.go:177] 
	I0617 11:00:45.510038  130544 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:00:45.510124  130544 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/config.json ...
	I0617 11:00:45.511639  130544 out.go:177] * Starting "ha-064080-m02" control-plane node in "ha-064080" cluster
	I0617 11:00:45.512603  130544 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 11:00:45.512621  130544 cache.go:56] Caching tarball of preloaded images
	I0617 11:00:45.512721  130544 preload.go:173] Found /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0617 11:00:45.512733  130544 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0617 11:00:45.512807  130544 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/config.json ...
	I0617 11:00:45.512966  130544 start.go:360] acquireMachinesLock for ha-064080-m02: {Name:mk519b8956d160a9d2b042f25b899a5ee0efa72e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 11:00:45.513006  130544 start.go:364] duration metric: took 21.895µs to acquireMachinesLock for "ha-064080-m02"
	I0617 11:00:45.513028  130544 start.go:93] Provisioning new machine with config: &{Name:ha-064080 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-064080 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 11:00:45.513122  130544 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0617 11:00:45.514311  130544 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0617 11:00:45.514388  130544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:00:45.514413  130544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:00:45.529044  130544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34831
	I0617 11:00:45.529542  130544 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:00:45.530105  130544 main.go:141] libmachine: Using API Version  1
	I0617 11:00:45.530141  130544 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:00:45.530419  130544 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:00:45.530594  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetMachineName
	I0617 11:00:45.530776  130544 main.go:141] libmachine: (ha-064080-m02) Calling .DriverName
	I0617 11:00:45.530933  130544 start.go:159] libmachine.API.Create for "ha-064080" (driver="kvm2")
	I0617 11:00:45.530960  130544 client.go:168] LocalClient.Create starting
	I0617 11:00:45.531001  130544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem
	I0617 11:00:45.531059  130544 main.go:141] libmachine: Decoding PEM data...
	I0617 11:00:45.531090  130544 main.go:141] libmachine: Parsing certificate...
	I0617 11:00:45.531165  130544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem
	I0617 11:00:45.531193  130544 main.go:141] libmachine: Decoding PEM data...
	I0617 11:00:45.531207  130544 main.go:141] libmachine: Parsing certificate...
	I0617 11:00:45.531241  130544 main.go:141] libmachine: Running pre-create checks...
	I0617 11:00:45.531253  130544 main.go:141] libmachine: (ha-064080-m02) Calling .PreCreateCheck
	I0617 11:00:45.531411  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetConfigRaw
	I0617 11:00:45.531837  130544 main.go:141] libmachine: Creating machine...
	I0617 11:00:45.531855  130544 main.go:141] libmachine: (ha-064080-m02) Calling .Create
	I0617 11:00:45.531970  130544 main.go:141] libmachine: (ha-064080-m02) Creating KVM machine...
	I0617 11:00:45.533206  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found existing default KVM network
	I0617 11:00:45.533346  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found existing private KVM network mk-ha-064080
	I0617 11:00:45.533505  130544 main.go:141] libmachine: (ha-064080-m02) Setting up store path in /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02 ...
	I0617 11:00:45.533530  130544 main.go:141] libmachine: (ha-064080-m02) Building disk image from file:///home/jenkins/minikube-integration/19084-112967/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso
	I0617 11:00:45.533630  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:00:45.533507  130923 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 11:00:45.533700  130544 main.go:141] libmachine: (ha-064080-m02) Downloading /home/jenkins/minikube-integration/19084-112967/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19084-112967/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso...
	I0617 11:00:45.779663  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:00:45.779433  130923 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02/id_rsa...
	I0617 11:00:46.125617  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:00:46.125475  130923 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02/ha-064080-m02.rawdisk...
	I0617 11:00:46.125654  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Writing magic tar header
	I0617 11:00:46.125669  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Writing SSH key tar header
	I0617 11:00:46.125682  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:00:46.125589  130923 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02 ...
	I0617 11:00:46.125703  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02
	I0617 11:00:46.125776  130544 main.go:141] libmachine: (ha-064080-m02) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02 (perms=drwx------)
	I0617 11:00:46.125794  130544 main.go:141] libmachine: (ha-064080-m02) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube/machines (perms=drwxr-xr-x)
	I0617 11:00:46.125815  130544 main.go:141] libmachine: (ha-064080-m02) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube (perms=drwxr-xr-x)
	I0617 11:00:46.125827  130544 main.go:141] libmachine: (ha-064080-m02) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967 (perms=drwxrwxr-x)
	I0617 11:00:46.125839  130544 main.go:141] libmachine: (ha-064080-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0617 11:00:46.125848  130544 main.go:141] libmachine: (ha-064080-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0617 11:00:46.125855  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube/machines
	I0617 11:00:46.125865  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 11:00:46.125871  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967
	I0617 11:00:46.125877  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0617 11:00:46.125883  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Checking permissions on dir: /home/jenkins
	I0617 11:00:46.125888  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Checking permissions on dir: /home
	I0617 11:00:46.125895  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Skipping /home - not owner
	I0617 11:00:46.125905  130544 main.go:141] libmachine: (ha-064080-m02) Creating domain...
	I0617 11:00:46.126831  130544 main.go:141] libmachine: (ha-064080-m02) define libvirt domain using xml: 
	I0617 11:00:46.126856  130544 main.go:141] libmachine: (ha-064080-m02) <domain type='kvm'>
	I0617 11:00:46.126866  130544 main.go:141] libmachine: (ha-064080-m02)   <name>ha-064080-m02</name>
	I0617 11:00:46.126879  130544 main.go:141] libmachine: (ha-064080-m02)   <memory unit='MiB'>2200</memory>
	I0617 11:00:46.126886  130544 main.go:141] libmachine: (ha-064080-m02)   <vcpu>2</vcpu>
	I0617 11:00:46.126898  130544 main.go:141] libmachine: (ha-064080-m02)   <features>
	I0617 11:00:46.126920  130544 main.go:141] libmachine: (ha-064080-m02)     <acpi/>
	I0617 11:00:46.126931  130544 main.go:141] libmachine: (ha-064080-m02)     <apic/>
	I0617 11:00:46.126939  130544 main.go:141] libmachine: (ha-064080-m02)     <pae/>
	I0617 11:00:46.126946  130544 main.go:141] libmachine: (ha-064080-m02)     
	I0617 11:00:46.126955  130544 main.go:141] libmachine: (ha-064080-m02)   </features>
	I0617 11:00:46.126961  130544 main.go:141] libmachine: (ha-064080-m02)   <cpu mode='host-passthrough'>
	I0617 11:00:46.126973  130544 main.go:141] libmachine: (ha-064080-m02)   
	I0617 11:00:46.126983  130544 main.go:141] libmachine: (ha-064080-m02)   </cpu>
	I0617 11:00:46.126992  130544 main.go:141] libmachine: (ha-064080-m02)   <os>
	I0617 11:00:46.127003  130544 main.go:141] libmachine: (ha-064080-m02)     <type>hvm</type>
	I0617 11:00:46.127012  130544 main.go:141] libmachine: (ha-064080-m02)     <boot dev='cdrom'/>
	I0617 11:00:46.127026  130544 main.go:141] libmachine: (ha-064080-m02)     <boot dev='hd'/>
	I0617 11:00:46.127038  130544 main.go:141] libmachine: (ha-064080-m02)     <bootmenu enable='no'/>
	I0617 11:00:46.127047  130544 main.go:141] libmachine: (ha-064080-m02)   </os>
	I0617 11:00:46.127054  130544 main.go:141] libmachine: (ha-064080-m02)   <devices>
	I0617 11:00:46.127059  130544 main.go:141] libmachine: (ha-064080-m02)     <disk type='file' device='cdrom'>
	I0617 11:00:46.127071  130544 main.go:141] libmachine: (ha-064080-m02)       <source file='/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02/boot2docker.iso'/>
	I0617 11:00:46.127083  130544 main.go:141] libmachine: (ha-064080-m02)       <target dev='hdc' bus='scsi'/>
	I0617 11:00:46.127093  130544 main.go:141] libmachine: (ha-064080-m02)       <readonly/>
	I0617 11:00:46.127107  130544 main.go:141] libmachine: (ha-064080-m02)     </disk>
	I0617 11:00:46.127136  130544 main.go:141] libmachine: (ha-064080-m02)     <disk type='file' device='disk'>
	I0617 11:00:46.127163  130544 main.go:141] libmachine: (ha-064080-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0617 11:00:46.127193  130544 main.go:141] libmachine: (ha-064080-m02)       <source file='/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02/ha-064080-m02.rawdisk'/>
	I0617 11:00:46.127214  130544 main.go:141] libmachine: (ha-064080-m02)       <target dev='hda' bus='virtio'/>
	I0617 11:00:46.127227  130544 main.go:141] libmachine: (ha-064080-m02)     </disk>
	I0617 11:00:46.127238  130544 main.go:141] libmachine: (ha-064080-m02)     <interface type='network'>
	I0617 11:00:46.127251  130544 main.go:141] libmachine: (ha-064080-m02)       <source network='mk-ha-064080'/>
	I0617 11:00:46.127259  130544 main.go:141] libmachine: (ha-064080-m02)       <model type='virtio'/>
	I0617 11:00:46.127270  130544 main.go:141] libmachine: (ha-064080-m02)     </interface>
	I0617 11:00:46.127281  130544 main.go:141] libmachine: (ha-064080-m02)     <interface type='network'>
	I0617 11:00:46.127296  130544 main.go:141] libmachine: (ha-064080-m02)       <source network='default'/>
	I0617 11:00:46.127309  130544 main.go:141] libmachine: (ha-064080-m02)       <model type='virtio'/>
	I0617 11:00:46.127321  130544 main.go:141] libmachine: (ha-064080-m02)     </interface>
	I0617 11:00:46.127331  130544 main.go:141] libmachine: (ha-064080-m02)     <serial type='pty'>
	I0617 11:00:46.127340  130544 main.go:141] libmachine: (ha-064080-m02)       <target port='0'/>
	I0617 11:00:46.127350  130544 main.go:141] libmachine: (ha-064080-m02)     </serial>
	I0617 11:00:46.127366  130544 main.go:141] libmachine: (ha-064080-m02)     <console type='pty'>
	I0617 11:00:46.127381  130544 main.go:141] libmachine: (ha-064080-m02)       <target type='serial' port='0'/>
	I0617 11:00:46.127393  130544 main.go:141] libmachine: (ha-064080-m02)     </console>
	I0617 11:00:46.127403  130544 main.go:141] libmachine: (ha-064080-m02)     <rng model='virtio'>
	I0617 11:00:46.127416  130544 main.go:141] libmachine: (ha-064080-m02)       <backend model='random'>/dev/random</backend>
	I0617 11:00:46.127425  130544 main.go:141] libmachine: (ha-064080-m02)     </rng>
	I0617 11:00:46.127433  130544 main.go:141] libmachine: (ha-064080-m02)     
	I0617 11:00:46.127436  130544 main.go:141] libmachine: (ha-064080-m02)     
	I0617 11:00:46.127441  130544 main.go:141] libmachine: (ha-064080-m02)   </devices>
	I0617 11:00:46.127446  130544 main.go:141] libmachine: (ha-064080-m02) </domain>
	I0617 11:00:46.127475  130544 main.go:141] libmachine: (ha-064080-m02) 
	I0617 11:00:46.134010  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:9a:bd:a4 in network default
	I0617 11:00:46.134480  130544 main.go:141] libmachine: (ha-064080-m02) Ensuring networks are active...
	I0617 11:00:46.134504  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:00:46.135144  130544 main.go:141] libmachine: (ha-064080-m02) Ensuring network default is active
	I0617 11:00:46.135468  130544 main.go:141] libmachine: (ha-064080-m02) Ensuring network mk-ha-064080 is active
	I0617 11:00:46.135833  130544 main.go:141] libmachine: (ha-064080-m02) Getting domain xml...
	I0617 11:00:46.136486  130544 main.go:141] libmachine: (ha-064080-m02) Creating domain...
	I0617 11:00:47.343471  130544 main.go:141] libmachine: (ha-064080-m02) Waiting to get IP...
	I0617 11:00:47.344979  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:00:47.345425  130544 main.go:141] libmachine: (ha-064080-m02) DBG | unable to find current IP address of domain ha-064080-m02 in network mk-ha-064080
	I0617 11:00:47.345457  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:00:47.345408  130923 retry.go:31] will retry after 211.785298ms: waiting for machine to come up
	I0617 11:00:47.559080  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:00:47.559629  130544 main.go:141] libmachine: (ha-064080-m02) DBG | unable to find current IP address of domain ha-064080-m02 in network mk-ha-064080
	I0617 11:00:47.559680  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:00:47.559594  130923 retry.go:31] will retry after 332.900963ms: waiting for machine to come up
	I0617 11:00:47.894147  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:00:47.894585  130544 main.go:141] libmachine: (ha-064080-m02) DBG | unable to find current IP address of domain ha-064080-m02 in network mk-ha-064080
	I0617 11:00:47.894612  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:00:47.894541  130923 retry.go:31] will retry after 315.785832ms: waiting for machine to come up
	I0617 11:00:48.212185  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:00:48.212649  130544 main.go:141] libmachine: (ha-064080-m02) DBG | unable to find current IP address of domain ha-064080-m02 in network mk-ha-064080
	I0617 11:00:48.212680  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:00:48.212600  130923 retry.go:31] will retry after 544.793078ms: waiting for machine to come up
	I0617 11:00:48.759569  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:00:48.760109  130544 main.go:141] libmachine: (ha-064080-m02) DBG | unable to find current IP address of domain ha-064080-m02 in network mk-ha-064080
	I0617 11:00:48.760162  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:00:48.760072  130923 retry.go:31] will retry after 602.98657ms: waiting for machine to come up
	I0617 11:00:49.365213  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:00:49.365714  130544 main.go:141] libmachine: (ha-064080-m02) DBG | unable to find current IP address of domain ha-064080-m02 in network mk-ha-064080
	I0617 11:00:49.365748  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:00:49.365660  130923 retry.go:31] will retry after 709.551079ms: waiting for machine to come up
	I0617 11:00:50.076458  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:00:50.076926  130544 main.go:141] libmachine: (ha-064080-m02) DBG | unable to find current IP address of domain ha-064080-m02 in network mk-ha-064080
	I0617 11:00:50.076954  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:00:50.076887  130923 retry.go:31] will retry after 830.396763ms: waiting for machine to come up
	I0617 11:00:50.909275  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:00:50.909649  130544 main.go:141] libmachine: (ha-064080-m02) DBG | unable to find current IP address of domain ha-064080-m02 in network mk-ha-064080
	I0617 11:00:50.909682  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:00:50.909593  130923 retry.go:31] will retry after 1.135405761s: waiting for machine to come up
	I0617 11:00:52.046935  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:00:52.047270  130544 main.go:141] libmachine: (ha-064080-m02) DBG | unable to find current IP address of domain ha-064080-m02 in network mk-ha-064080
	I0617 11:00:52.047309  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:00:52.047221  130923 retry.go:31] will retry after 1.708159376s: waiting for machine to come up
	I0617 11:00:53.757441  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:00:53.757884  130544 main.go:141] libmachine: (ha-064080-m02) DBG | unable to find current IP address of domain ha-064080-m02 in network mk-ha-064080
	I0617 11:00:53.757908  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:00:53.757833  130923 retry.go:31] will retry after 1.480812383s: waiting for machine to come up
	I0617 11:00:55.240499  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:00:55.240972  130544 main.go:141] libmachine: (ha-064080-m02) DBG | unable to find current IP address of domain ha-064080-m02 in network mk-ha-064080
	I0617 11:00:55.241002  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:00:55.240947  130923 retry.go:31] will retry after 2.538066125s: waiting for machine to come up
	I0617 11:00:57.781065  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:00:57.781429  130544 main.go:141] libmachine: (ha-064080-m02) DBG | unable to find current IP address of domain ha-064080-m02 in network mk-ha-064080
	I0617 11:00:57.781456  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:00:57.781378  130923 retry.go:31] will retry after 2.954010378s: waiting for machine to come up
	I0617 11:01:00.736714  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:00.737128  130544 main.go:141] libmachine: (ha-064080-m02) DBG | unable to find current IP address of domain ha-064080-m02 in network mk-ha-064080
	I0617 11:01:00.737150  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:01:00.737100  130923 retry.go:31] will retry after 4.208220574s: waiting for machine to come up
	I0617 11:01:04.950383  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:04.950775  130544 main.go:141] libmachine: (ha-064080-m02) DBG | unable to find current IP address of domain ha-064080-m02 in network mk-ha-064080
	I0617 11:01:04.950800  130544 main.go:141] libmachine: (ha-064080-m02) DBG | I0617 11:01:04.950741  130923 retry.go:31] will retry after 3.676530568s: waiting for machine to come up
	I0617 11:01:08.628596  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:08.629128  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has current primary IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:08.629167  130544 main.go:141] libmachine: (ha-064080-m02) Found IP for machine: 192.168.39.104
	I0617 11:01:08.629184  130544 main.go:141] libmachine: (ha-064080-m02) Reserving static IP address...
	I0617 11:01:08.629493  130544 main.go:141] libmachine: (ha-064080-m02) DBG | unable to find host DHCP lease matching {name: "ha-064080-m02", mac: "52:54:00:75:79:30", ip: "192.168.39.104"} in network mk-ha-064080
	I0617 11:01:08.699534  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Getting to WaitForSSH function...
	I0617 11:01:08.699569  130544 main.go:141] libmachine: (ha-064080-m02) Reserved static IP address: 192.168.39.104
	I0617 11:01:08.699589  130544 main.go:141] libmachine: (ha-064080-m02) Waiting for SSH to be available...
	I0617 11:01:08.702286  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:08.702662  130544 main.go:141] libmachine: (ha-064080-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080
	I0617 11:01:08.702693  130544 main.go:141] libmachine: (ha-064080-m02) DBG | unable to find defined IP address of network mk-ha-064080 interface with MAC address 52:54:00:75:79:30
	I0617 11:01:08.702838  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Using SSH client type: external
	I0617 11:01:08.702869  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02/id_rsa (-rw-------)
	I0617 11:01:08.702903  130544 main.go:141] libmachine: (ha-064080-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 11:01:08.702915  130544 main.go:141] libmachine: (ha-064080-m02) DBG | About to run SSH command:
	I0617 11:01:08.702930  130544 main.go:141] libmachine: (ha-064080-m02) DBG | exit 0
	I0617 11:01:08.706245  130544 main.go:141] libmachine: (ha-064080-m02) DBG | SSH cmd err, output: exit status 255: 
	I0617 11:01:08.706268  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0617 11:01:08.706295  130544 main.go:141] libmachine: (ha-064080-m02) DBG | command : exit 0
	I0617 11:01:08.706305  130544 main.go:141] libmachine: (ha-064080-m02) DBG | err     : exit status 255
	I0617 11:01:08.706312  130544 main.go:141] libmachine: (ha-064080-m02) DBG | output  : 
	I0617 11:01:11.707159  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Getting to WaitForSSH function...
	I0617 11:01:11.709544  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:11.710037  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:01:11.710057  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:11.710174  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Using SSH client type: external
	I0617 11:01:11.710200  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02/id_rsa (-rw-------)
	I0617 11:01:11.710231  130544 main.go:141] libmachine: (ha-064080-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 11:01:11.710240  130544 main.go:141] libmachine: (ha-064080-m02) DBG | About to run SSH command:
	I0617 11:01:11.710248  130544 main.go:141] libmachine: (ha-064080-m02) DBG | exit 0
	I0617 11:01:11.831344  130544 main.go:141] libmachine: (ha-064080-m02) DBG | SSH cmd err, output: <nil>: 
	I0617 11:01:11.831592  130544 main.go:141] libmachine: (ha-064080-m02) KVM machine creation complete!
	I0617 11:01:11.831974  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetConfigRaw
	I0617 11:01:11.832615  130544 main.go:141] libmachine: (ha-064080-m02) Calling .DriverName
	I0617 11:01:11.832818  130544 main.go:141] libmachine: (ha-064080-m02) Calling .DriverName
	I0617 11:01:11.832975  130544 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0617 11:01:11.833027  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetState
	I0617 11:01:11.834431  130544 main.go:141] libmachine: Detecting operating system of created instance...
	I0617 11:01:11.834446  130544 main.go:141] libmachine: Waiting for SSH to be available...
	I0617 11:01:11.834452  130544 main.go:141] libmachine: Getting to WaitForSSH function...
	I0617 11:01:11.834459  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHHostname
	I0617 11:01:11.836666  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:11.837128  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:01:11.837161  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:11.837305  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHPort
	I0617 11:01:11.837476  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:01:11.837635  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:01:11.837782  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHUsername
	I0617 11:01:11.837945  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:01:11.838206  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0617 11:01:11.838221  130544 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0617 11:01:11.934443  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 11:01:11.934468  130544 main.go:141] libmachine: Detecting the provisioner...
	I0617 11:01:11.934476  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHHostname
	I0617 11:01:11.937070  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:11.937416  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:01:11.937450  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:11.937604  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHPort
	I0617 11:01:11.937825  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:01:11.937985  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:01:11.938153  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHUsername
	I0617 11:01:11.938344  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:01:11.938538  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0617 11:01:11.938554  130544 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0617 11:01:12.035996  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0617 11:01:12.036087  130544 main.go:141] libmachine: found compatible host: buildroot
	I0617 11:01:12.036098  130544 main.go:141] libmachine: Provisioning with buildroot...
	I0617 11:01:12.036106  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetMachineName
	I0617 11:01:12.036331  130544 buildroot.go:166] provisioning hostname "ha-064080-m02"
	I0617 11:01:12.036356  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetMachineName
	I0617 11:01:12.036541  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHHostname
	I0617 11:01:12.039490  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.039870  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:01:12.039901  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.040009  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHPort
	I0617 11:01:12.040196  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:01:12.040368  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:01:12.040518  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHUsername
	I0617 11:01:12.040743  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:01:12.040962  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0617 11:01:12.040982  130544 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-064080-m02 && echo "ha-064080-m02" | sudo tee /etc/hostname
	I0617 11:01:12.153747  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-064080-m02
	
	I0617 11:01:12.153775  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHHostname
	I0617 11:01:12.156496  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.156918  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:01:12.156944  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.157239  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHPort
	I0617 11:01:12.157452  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:01:12.157642  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:01:12.157882  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHUsername
	I0617 11:01:12.158112  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:01:12.158294  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0617 11:01:12.158311  130544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-064080-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-064080-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-064080-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 11:01:12.264728  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 11:01:12.264762  130544 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 11:01:12.264793  130544 buildroot.go:174] setting up certificates
	I0617 11:01:12.264811  130544 provision.go:84] configureAuth start
	I0617 11:01:12.264833  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetMachineName
	I0617 11:01:12.265115  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetIP
	I0617 11:01:12.267534  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.267922  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:01:12.267950  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.268113  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHHostname
	I0617 11:01:12.270296  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.270630  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:01:12.270653  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.270793  130544 provision.go:143] copyHostCerts
	I0617 11:01:12.270835  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 11:01:12.270871  130544 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 11:01:12.270882  130544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 11:01:12.270959  130544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 11:01:12.271062  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 11:01:12.271095  130544 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 11:01:12.271105  130544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 11:01:12.271143  130544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 11:01:12.271198  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 11:01:12.271222  130544 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 11:01:12.271231  130544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 11:01:12.271263  130544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 11:01:12.271322  130544 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.ha-064080-m02 san=[127.0.0.1 192.168.39.104 ha-064080-m02 localhost minikube]
	I0617 11:01:12.322631  130544 provision.go:177] copyRemoteCerts
	I0617 11:01:12.322699  130544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 11:01:12.322736  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHHostname
	I0617 11:01:12.325071  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.325369  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:01:12.325399  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.325596  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHPort
	I0617 11:01:12.325795  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:01:12.325976  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHUsername
	I0617 11:01:12.326145  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02/id_rsa Username:docker}
	I0617 11:01:12.405165  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0617 11:01:12.405239  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 11:01:12.429072  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0617 11:01:12.429134  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0617 11:01:12.451851  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0617 11:01:12.451902  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0617 11:01:12.474910  130544 provision.go:87] duration metric: took 210.080891ms to configureAuth
	I0617 11:01:12.474942  130544 buildroot.go:189] setting minikube options for container-runtime
	I0617 11:01:12.475119  130544 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:01:12.475196  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHHostname
	I0617 11:01:12.477636  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.477975  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:01:12.478001  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.478149  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHPort
	I0617 11:01:12.478369  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:01:12.478588  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:01:12.478723  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHUsername
	I0617 11:01:12.478876  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:01:12.479095  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0617 11:01:12.479110  130544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 11:01:12.742926  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 11:01:12.742957  130544 main.go:141] libmachine: Checking connection to Docker...
	I0617 11:01:12.742967  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetURL
	I0617 11:01:12.744212  130544 main.go:141] libmachine: (ha-064080-m02) DBG | Using libvirt version 6000000
	I0617 11:01:12.746412  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.746780  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:01:12.746815  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.746961  130544 main.go:141] libmachine: Docker is up and running!
	I0617 11:01:12.746979  130544 main.go:141] libmachine: Reticulating splines...
	I0617 11:01:12.746988  130544 client.go:171] duration metric: took 27.216016787s to LocalClient.Create
	I0617 11:01:12.747011  130544 start.go:167] duration metric: took 27.216080027s to libmachine.API.Create "ha-064080"
	I0617 11:01:12.747021  130544 start.go:293] postStartSetup for "ha-064080-m02" (driver="kvm2")
	I0617 11:01:12.747030  130544 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 11:01:12.747046  130544 main.go:141] libmachine: (ha-064080-m02) Calling .DriverName
	I0617 11:01:12.747315  130544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 11:01:12.747356  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHHostname
	I0617 11:01:12.749500  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.749828  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:01:12.749865  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.750010  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHPort
	I0617 11:01:12.750229  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:01:12.750391  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHUsername
	I0617 11:01:12.750540  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02/id_rsa Username:docker}
	I0617 11:01:12.829197  130544 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 11:01:12.833560  130544 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 11:01:12.833587  130544 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 11:01:12.833660  130544 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 11:01:12.833751  130544 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 11:01:12.833763  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> /etc/ssl/certs/1201742.pem
	I0617 11:01:12.833875  130544 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 11:01:12.843269  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 11:01:12.868058  130544 start.go:296] duration metric: took 121.020777ms for postStartSetup
	I0617 11:01:12.868105  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetConfigRaw
	I0617 11:01:12.868660  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetIP
	I0617 11:01:12.871026  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.871346  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:01:12.871377  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.871613  130544 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/config.json ...
	I0617 11:01:12.871834  130544 start.go:128] duration metric: took 27.358698337s to createHost
	I0617 11:01:12.871858  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHHostname
	I0617 11:01:12.874078  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.874413  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:01:12.874442  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.874559  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHPort
	I0617 11:01:12.874738  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:01:12.874886  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:01:12.875003  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHUsername
	I0617 11:01:12.875149  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:01:12.875350  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0617 11:01:12.875365  130544 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0617 11:01:12.972337  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718622072.949961819
	
	I0617 11:01:12.972369  130544 fix.go:216] guest clock: 1718622072.949961819
	I0617 11:01:12.972379  130544 fix.go:229] Guest: 2024-06-17 11:01:12.949961819 +0000 UTC Remote: 2024-06-17 11:01:12.87184639 +0000 UTC m=+80.378765384 (delta=78.115429ms)
	I0617 11:01:12.972400  130544 fix.go:200] guest clock delta is within tolerance: 78.115429ms
	I0617 11:01:12.972406  130544 start.go:83] releasing machines lock for "ha-064080-m02", held for 27.459392322s
	I0617 11:01:12.972423  130544 main.go:141] libmachine: (ha-064080-m02) Calling .DriverName
	I0617 11:01:12.972680  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetIP
	I0617 11:01:12.975076  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.975450  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:01:12.975503  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.977711  130544 out.go:177] * Found network options:
	I0617 11:01:12.979057  130544 out.go:177]   - NO_PROXY=192.168.39.134
	W0617 11:01:12.980315  130544 proxy.go:119] fail to check proxy env: Error ip not in block
	I0617 11:01:12.980343  130544 main.go:141] libmachine: (ha-064080-m02) Calling .DriverName
	I0617 11:01:12.980861  130544 main.go:141] libmachine: (ha-064080-m02) Calling .DriverName
	I0617 11:01:12.981050  130544 main.go:141] libmachine: (ha-064080-m02) Calling .DriverName
	I0617 11:01:12.981146  130544 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 11:01:12.981198  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHHostname
	W0617 11:01:12.981277  130544 proxy.go:119] fail to check proxy env: Error ip not in block
	I0617 11:01:12.981383  130544 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 11:01:12.981415  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHHostname
	I0617 11:01:12.983937  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.984269  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:01:12.984294  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.984311  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.984426  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHPort
	I0617 11:01:12.984594  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:01:12.984727  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:01:12.984751  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:12.984761  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHUsername
	I0617 11:01:12.984883  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHPort
	I0617 11:01:12.984965  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02/id_rsa Username:docker}
	I0617 11:01:12.985043  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:01:12.985168  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHUsername
	I0617 11:01:12.985318  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02/id_rsa Username:docker}
	I0617 11:01:13.210941  130544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 11:01:13.217160  130544 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 11:01:13.217221  130544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 11:01:13.236585  130544 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 11:01:13.236606  130544 start.go:494] detecting cgroup driver to use...
	I0617 11:01:13.236663  130544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 11:01:13.255187  130544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 11:01:13.268562  130544 docker.go:217] disabling cri-docker service (if available) ...
	I0617 11:01:13.268610  130544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 11:01:13.281859  130544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 11:01:13.297344  130544 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 11:01:13.427601  130544 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 11:01:13.595295  130544 docker.go:233] disabling docker service ...
	I0617 11:01:13.595378  130544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 11:01:13.610594  130544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 11:01:13.624171  130544 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 11:01:13.751731  130544 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 11:01:13.868028  130544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
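The systemctl calls above stop, disable and mask both cri-docker and docker so they cannot compete with CRI-O for the container runtime socket. A minimal sketch, assuming the same unit names, for confirming on the node that they stay out of the way:

# masked units print "masked" and exit non-zero, so keep the check non-fatal
systemctl is-enabled docker.service docker.socket cri-docker.service cri-docker.socket || true
systemctl is-active docker.service cri-docker.service || true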
	I0617 11:01:13.881912  130544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 11:01:13.900611  130544 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0617 11:01:13.900662  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:01:13.910803  130544 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 11:01:13.910876  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:01:13.921361  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:01:13.931484  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:01:13.941687  130544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 11:01:13.952131  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:01:13.962186  130544 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:01:13.978500  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
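Taken together, the sed edits above converge on one small CRI-O drop-in. A hedged reconstruction of roughly what /etc/crio/crio.conf.d/02-crio.conf holds afterwards (the file ships with more keys in the ISO; only the values touched here are shown, and the real file is edited in place rather than rewritten):

# sketch only, assuming the key placement CRI-O documents for these options
cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/02-crio.conf
[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
EOF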
	I0617 11:01:13.989346  130544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 11:01:13.999104  130544 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 11:01:13.999158  130544 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 11:01:14.013279  130544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
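The modprobe and the echo above only change the running kernel; minikube re-applies them on every start. Outside that flow, the usual persistent form of the same two settings is:

echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' | sudo tee /etc/sysctl.d/99-kubernetes.conf
sudo sysctl --system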
	I0617 11:01:14.022992  130544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:01:14.132078  130544 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0617 11:01:14.265716  130544 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 11:01:14.265803  130544 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 11:01:14.270597  130544 start.go:562] Will wait 60s for crictl version
	I0617 11:01:14.270646  130544 ssh_runner.go:195] Run: which crictl
	I0617 11:01:14.274526  130544 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 11:01:14.313924  130544 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 11:01:14.313999  130544 ssh_runner.go:195] Run: crio --version
	I0617 11:01:14.340661  130544 ssh_runner.go:195] Run: crio --version
	I0617 11:01:14.372490  130544 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0617 11:01:14.373933  130544 out.go:177]   - env NO_PROXY=192.168.39.134
	I0617 11:01:14.375027  130544 main.go:141] libmachine: (ha-064080-m02) Calling .GetIP
	I0617 11:01:14.377530  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:14.377863  130544 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:01:00 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:01:14.377888  130544 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:01:14.378138  130544 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0617 11:01:14.382101  130544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 11:01:14.393964  130544 mustload.go:65] Loading cluster: ha-064080
	I0617 11:01:14.394151  130544 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:01:14.394480  130544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:01:14.394526  130544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:01:14.408962  130544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38843
	I0617 11:01:14.409353  130544 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:01:14.409807  130544 main.go:141] libmachine: Using API Version  1
	I0617 11:01:14.409829  130544 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:01:14.410147  130544 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:01:14.410332  130544 main.go:141] libmachine: (ha-064080) Calling .GetState
	I0617 11:01:14.411927  130544 host.go:66] Checking if "ha-064080" exists ...
	I0617 11:01:14.412241  130544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:01:14.412285  130544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:01:14.426252  130544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40467
	I0617 11:01:14.426591  130544 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:01:14.427054  130544 main.go:141] libmachine: Using API Version  1
	I0617 11:01:14.427078  130544 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:01:14.427356  130544 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:01:14.427573  130544 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:01:14.427718  130544 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080 for IP: 192.168.39.104
	I0617 11:01:14.427729  130544 certs.go:194] generating shared ca certs ...
	I0617 11:01:14.427741  130544 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:01:14.427901  130544 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 11:01:14.427963  130544 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 11:01:14.427977  130544 certs.go:256] generating profile certs ...
	I0617 11:01:14.428078  130544 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/client.key
	I0617 11:01:14.428104  130544 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key.18341ce7
	I0617 11:01:14.428118  130544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt.18341ce7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.134 192.168.39.104 192.168.39.254]
	I0617 11:01:14.526426  130544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt.18341ce7 ...
	I0617 11:01:14.526455  130544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt.18341ce7: {Name:mk3de114e69e7b0d34c18a1c37ebb9ee23768745 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:01:14.526638  130544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key.18341ce7 ...
	I0617 11:01:14.526655  130544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key.18341ce7: {Name:mk8ac56e4ffc8e71aee80985cf9f1ec72c32422f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:01:14.526748  130544 certs.go:381] copying /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt.18341ce7 -> /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt
	I0617 11:01:14.526913  130544 certs.go:385] copying /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key.18341ce7 -> /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key
	I0617 11:01:14.527103  130544 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.key
	I0617 11:01:14.527122  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0617 11:01:14.527138  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0617 11:01:14.527163  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0617 11:01:14.527181  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0617 11:01:14.527201  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0617 11:01:14.527214  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0617 11:01:14.527233  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0617 11:01:14.527250  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0617 11:01:14.527315  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 11:01:14.527356  130544 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 11:01:14.527370  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 11:01:14.527520  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 11:01:14.527588  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 11:01:14.527624  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 11:01:14.527689  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 11:01:14.527737  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:01:14.527758  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem -> /usr/share/ca-certificates/120174.pem
	I0617 11:01:14.527774  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> /usr/share/ca-certificates/1201742.pem
	I0617 11:01:14.527815  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:01:14.530883  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:01:14.531342  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:01:14.531362  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:01:14.531548  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:01:14.531742  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:01:14.531888  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:01:14.532007  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:01:14.603746  130544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0617 11:01:14.609215  130544 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0617 11:01:14.620177  130544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0617 11:01:14.624790  130544 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0617 11:01:14.634384  130544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0617 11:01:14.638567  130544 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0617 11:01:14.648091  130544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0617 11:01:14.651955  130544 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0617 11:01:14.661478  130544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0617 11:01:14.665404  130544 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0617 11:01:14.674676  130544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0617 11:01:14.683789  130544 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0617 11:01:14.695560  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 11:01:14.722416  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 11:01:14.745647  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 11:01:14.771449  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 11:01:14.794474  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0617 11:01:14.817946  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0617 11:01:14.840589  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 11:01:14.863739  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0617 11:01:14.886316  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 11:01:14.910114  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 11:01:14.932744  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 11:01:14.955402  130544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0617 11:01:14.971761  130544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0617 11:01:14.989783  130544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0617 11:01:15.005936  130544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0617 11:01:15.022929  130544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0617 11:01:15.039950  130544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0617 11:01:15.058374  130544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
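The apiserver certificate copied above was regenerated moments earlier with SANs covering the service IPs, both control-plane node IPs and the 192.168.39.254 VIP. A hedged way to confirm that on the node:

sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'
# expect 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.134, 192.168.39.104 and 192.168.39.254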
	I0617 11:01:15.075468  130544 ssh_runner.go:195] Run: openssl version
	I0617 11:01:15.081047  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 11:01:15.091249  130544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:01:15.095674  130544 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:01:15.095716  130544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:01:15.101361  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 11:01:15.111367  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 11:01:15.122010  130544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 11:01:15.126600  130544 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 11:01:15.126649  130544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 11:01:15.132424  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
	I0617 11:01:15.142882  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 11:01:15.153571  130544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 11:01:15.158084  130544 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 11:01:15.158119  130544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 11:01:15.163773  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
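Each CA above follows the same pattern: the file under /usr/share/ca-certificates is linked into /etc/ssl/certs, then symlinked again by its OpenSSL subject hash (b5213941, 51391683, 3ec20f2e here) so TLS clients on the node pick it up. The hash step, sketched for minikubeCA:

hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # same effect as the test -L || ln -fs above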
	I0617 11:01:15.174238  130544 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 11:01:15.178293  130544 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0617 11:01:15.178370  130544 kubeadm.go:928] updating node {m02 192.168.39.104 8443 v1.30.1 crio true true} ...
	I0617 11:01:15.178466  130544 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-064080-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-064080 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 11:01:15.178493  130544 kube-vip.go:115] generating kube-vip config ...
	I0617 11:01:15.178528  130544 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0617 11:01:15.193481  130544 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0617 11:01:15.193532  130544 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
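The manifest above is a static pod; it is written to /etc/kubernetes/manifests/kube-vip.yaml a few lines further down, so kubelet runs kube-vip without needing the API server first. Once kubelet is up, a hedged check that the VIP from the config is being served:

curl -ks https://192.168.39.254:8443/healthz   # any HTTP response means the VIP terminates on the API port
ip addr show eth0 | grep 192.168.39.254        # the current kube-vip leader holds the address on eth0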
	I0617 11:01:15.193576  130544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0617 11:01:15.204861  130544 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0617 11:01:15.204904  130544 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0617 11:01:15.216091  130544 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0617 11:01:15.216108  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0617 11:01:15.216188  130544 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0617 11:01:15.216218  130544 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19084-112967/.minikube/cache/linux/amd64/v1.30.1/kubeadm
	I0617 11:01:15.216218  130544 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19084-112967/.minikube/cache/linux/amd64/v1.30.1/kubelet
	I0617 11:01:15.220541  130544 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0617 11:01:15.220564  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0617 11:01:15.757925  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0617 11:01:15.758016  130544 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0617 11:01:15.763146  130544 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0617 11:01:15.763184  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0617 11:01:21.006629  130544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:01:21.021935  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0617 11:01:21.022060  130544 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0617 11:01:21.026450  130544 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0617 11:01:21.026496  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
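The kubectl, kubeadm and kubelet binaries come from minikube's local cache, which is filled from dl.k8s.io using the published .sha256 files visible in the download URLs above. A hedged stand-alone equivalent of that fetch-and-verify step:

ver=v1.30.1
for bin in kubectl kubeadm kubelet; do
  curl -fLo "$bin" "https://dl.k8s.io/release/${ver}/bin/linux/amd64/${bin}"
  # the .sha256 file holds only the digest, so build the "<hash>  <file>" line sha256sum expects
  echo "$(curl -fsL "https://dl.k8s.io/release/${ver}/bin/linux/amd64/${bin}.sha256")  ${bin}" | sha256sum -c -
done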
	I0617 11:01:21.430117  130544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0617 11:01:21.439567  130544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0617 11:01:21.456842  130544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 11:01:21.473424  130544 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0617 11:01:21.490334  130544 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0617 11:01:21.494244  130544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 11:01:21.506424  130544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:01:21.635119  130544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 11:01:21.651914  130544 host.go:66] Checking if "ha-064080" exists ...
	I0617 11:01:21.652339  130544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:01:21.652381  130544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:01:21.668322  130544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43219
	I0617 11:01:21.668790  130544 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:01:21.669282  130544 main.go:141] libmachine: Using API Version  1
	I0617 11:01:21.669306  130544 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:01:21.669672  130544 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:01:21.669891  130544 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:01:21.670051  130544 start.go:316] joinCluster: &{Name:ha-064080 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-064080 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:01:21.670149  130544 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0617 11:01:21.670173  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:01:21.672980  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:01:21.673341  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:01:21.673371  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:01:21.673535  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:01:21.673723  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:01:21.673910  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:01:21.674067  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:01:21.833488  130544 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 11:01:21.833555  130544 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qrrepk.1y0r7o63mtidua42 --discovery-token-ca-cert-hash sha256:a750c130b3df91ed6d57229f5a5d5a2ee0acd56a757f499599f368bc07dbf207 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-064080-m02 --control-plane --apiserver-advertise-address=192.168.39.104 --apiserver-bind-port=8443"
	I0617 11:01:44.891838  130544 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qrrepk.1y0r7o63mtidua42 --discovery-token-ca-cert-hash sha256:a750c130b3df91ed6d57229f5a5d5a2ee0acd56a757f499599f368bc07dbf207 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-064080-m02 --control-plane --apiserver-advertise-address=192.168.39.104 --apiserver-bind-port=8443": (23.058250637s)
	I0617 11:01:44.891872  130544 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0617 11:01:45.378267  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-064080-m02 minikube.k8s.io/updated_at=2024_06_17T11_01_45_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6 minikube.k8s.io/name=ha-064080 minikube.k8s.io/primary=false
	I0617 11:01:45.495107  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-064080-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0617 11:01:45.642232  130544 start.go:318] duration metric: took 23.972173706s to joinCluster
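With the join, the node label and the control-plane taint removal done, m02 is a schedulable second control plane. A hedged spot-check from the first node, reusing the binary and kubeconfig paths the log itself uses:

sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes -o wide
sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get pods -l component=etcd   # expect one etcd pod per control-plane node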
	I0617 11:01:45.642385  130544 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 11:01:45.643999  130544 out.go:177] * Verifying Kubernetes components...
	I0617 11:01:45.642586  130544 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:01:45.645299  130544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:01:45.881564  130544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 11:01:45.948904  130544 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 11:01:45.949173  130544 kapi.go:59] client config for ha-064080: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/client.crt", KeyFile:"/home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/client.key", CAFile:"/home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfaf80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0617 11:01:45.949252  130544 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.134:8443
	I0617 11:01:45.949520  130544 node_ready.go:35] waiting up to 6m0s for node "ha-064080-m02" to be "Ready" ...
	I0617 11:01:45.949628  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:45.949640  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:45.949652  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:45.949660  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:45.970876  130544 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0617 11:01:46.449797  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:46.449828  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:46.449840  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:46.449848  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:46.453997  130544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0617 11:01:46.950198  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:46.950219  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:46.950227  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:46.950231  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:46.953940  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:47.449888  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:47.449917  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:47.449929  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:47.449935  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:47.465655  130544 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0617 11:01:47.949969  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:47.949988  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:47.949996  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:47.950000  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:47.953535  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:47.954377  130544 node_ready.go:53] node "ha-064080-m02" has status "Ready":"False"
	I0617 11:01:48.450435  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:48.450458  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:48.450466  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:48.450470  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:48.454062  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:48.949824  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:48.949850  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:48.949860  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:48.949865  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:48.953121  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:49.450048  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:49.450072  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:49.450080  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:49.450085  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:49.453220  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:49.453993  130544 node_ready.go:49] node "ha-064080-m02" has status "Ready":"True"
	I0617 11:01:49.454018  130544 node_ready.go:38] duration metric: took 3.504478677s for node "ha-064080-m02" to be "Ready" ...
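The polling above simply re-fetches the Node object every half second until its Ready condition turns True; with the profile's kubeconfig active, the rough kubectl equivalent is:

kubectl wait --for=condition=Ready node/ha-064080-m02 --timeout=6m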
	I0617 11:01:49.454028  130544 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 11:01:49.454094  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods
	I0617 11:01:49.454106  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:49.454113  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:49.454116  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:49.458466  130544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0617 11:01:49.465298  130544 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xbhnm" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:49.465386  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xbhnm
	I0617 11:01:49.465391  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:49.465399  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:49.465408  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:49.468371  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:49.469183  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:01:49.469199  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:49.469206  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:49.469210  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:49.471620  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:49.472293  130544 pod_ready.go:92] pod "coredns-7db6d8ff4d-xbhnm" in "kube-system" namespace has status "Ready":"True"
	I0617 11:01:49.472319  130544 pod_ready.go:81] duration metric: took 6.995145ms for pod "coredns-7db6d8ff4d-xbhnm" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:49.472332  130544 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zv99k" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:49.472402  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zv99k
	I0617 11:01:49.472414  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:49.472423  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:49.472429  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:49.474934  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:49.475790  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:01:49.475816  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:49.475823  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:49.475827  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:49.478667  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:49.479182  130544 pod_ready.go:92] pod "coredns-7db6d8ff4d-zv99k" in "kube-system" namespace has status "Ready":"True"
	I0617 11:01:49.479195  130544 pod_ready.go:81] duration metric: took 6.852553ms for pod "coredns-7db6d8ff4d-zv99k" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:49.479203  130544 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-064080" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:49.479273  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080
	I0617 11:01:49.479278  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:49.479289  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:49.479294  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:49.482883  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:49.483489  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:01:49.483507  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:49.483518  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:49.483525  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:49.486394  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:49.487253  130544 pod_ready.go:92] pod "etcd-ha-064080" in "kube-system" namespace has status "Ready":"True"
	I0617 11:01:49.487268  130544 pod_ready.go:81] duration metric: took 8.0594ms for pod "etcd-ha-064080" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:49.487276  130544 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-064080-m02" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:49.487321  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m02
	I0617 11:01:49.487328  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:49.487335  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:49.487344  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:49.489651  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:49.490135  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:49.490148  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:49.490155  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:49.490160  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:49.492800  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:49.987630  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m02
	I0617 11:01:49.987655  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:49.987663  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:49.987668  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:49.991042  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:49.991658  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:49.991674  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:49.991721  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:49.991730  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:49.994341  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:50.487857  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m02
	I0617 11:01:50.487885  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:50.487897  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:50.487901  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:50.491419  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:50.492197  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:50.492216  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:50.492224  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:50.492230  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:50.494940  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:50.988424  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m02
	I0617 11:01:50.988448  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:50.988456  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:50.988461  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:50.992061  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:50.992789  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:50.992806  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:50.992814  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:50.992821  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:50.995445  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:51.488462  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m02
	I0617 11:01:51.488486  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:51.488494  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:51.488498  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:51.492089  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:51.492686  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:51.492704  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:51.492715  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:51.492720  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:51.495342  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:51.495918  130544 pod_ready.go:102] pod "etcd-ha-064080-m02" in "kube-system" namespace has status "Ready":"False"
	I0617 11:01:51.987882  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m02
	I0617 11:01:51.987907  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:51.987916  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:51.987920  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:51.991820  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:51.992597  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:51.992617  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:51.992628  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:51.992635  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:51.995407  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:52.487478  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m02
	I0617 11:01:52.487502  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:52.487510  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:52.487515  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:52.491082  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:52.491647  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:52.491664  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:52.491674  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:52.491681  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:52.494596  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:52.988485  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m02
	I0617 11:01:52.988509  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:52.988516  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:52.988519  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:52.991880  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:52.992521  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:52.992539  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:52.992547  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:52.992551  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:52.995561  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:53.487783  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m02
	I0617 11:01:53.487932  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:53.487961  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:53.487974  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:53.495150  130544 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0617 11:01:53.495895  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:53.495915  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:53.495926  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:53.495931  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:53.498410  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:53.499001  130544 pod_ready.go:102] pod "etcd-ha-064080-m02" in "kube-system" namespace has status "Ready":"False"
	I0617 11:01:53.988489  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m02
	I0617 11:01:53.988514  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:53.988522  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:53.988525  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:53.991851  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:53.992680  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:53.992695  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:53.992702  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:53.992705  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:53.995550  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:54.487559  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m02
	I0617 11:01:54.487588  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:54.487600  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:54.487606  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:54.491555  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:54.492307  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:54.492326  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:54.492338  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:54.492342  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:54.495247  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:54.988333  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m02
	I0617 11:01:54.988356  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:54.988364  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:54.988368  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:54.992300  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:54.992997  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:54.993014  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:54.993023  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:54.993028  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:54.996050  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:55.488092  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m02
	I0617 11:01:55.488123  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:55.488134  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:55.488141  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:55.492093  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:55.492821  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:55.492836  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:55.492842  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:55.492846  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:55.495405  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:55.988039  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m02
	I0617 11:01:55.988063  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:55.988071  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:55.988079  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:55.991389  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:55.992039  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:55.992067  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:55.992074  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:55.992078  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:55.994885  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:55.995526  130544 pod_ready.go:102] pod "etcd-ha-064080-m02" in "kube-system" namespace has status "Ready":"False"
	I0617 11:01:56.487639  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m02
	I0617 11:01:56.487665  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:56.487673  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:56.487677  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:56.491096  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:56.491968  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:56.491983  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:56.491990  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:56.491995  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:56.494815  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:56.988277  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m02
	I0617 11:01:56.988303  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:56.988314  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:56.988319  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:56.991887  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:56.992476  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:56.992497  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:56.992505  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:56.992509  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:56.995294  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:56.995933  130544 pod_ready.go:92] pod "etcd-ha-064080-m02" in "kube-system" namespace has status "Ready":"True"
	I0617 11:01:56.995955  130544 pod_ready.go:81] duration metric: took 7.508673118s for pod "etcd-ha-064080-m02" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:56.995969  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-064080" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:56.996021  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-064080
	I0617 11:01:56.996029  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:56.996036  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:56.996039  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:56.998638  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:56.999493  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:01:56.999511  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:56.999522  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:56.999528  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:57.002100  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:57.002672  130544 pod_ready.go:92] pod "kube-apiserver-ha-064080" in "kube-system" namespace has status "Ready":"True"
	I0617 11:01:57.002693  130544 pod_ready.go:81] duration metric: took 6.717759ms for pod "kube-apiserver-ha-064080" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:57.002702  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-064080-m02" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:57.002758  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-064080-m02
	I0617 11:01:57.002765  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:57.002774  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:57.002778  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:57.005183  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:57.005952  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:57.005968  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:57.005977  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:57.005982  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:57.008293  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:57.503432  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-064080-m02
	I0617 11:01:57.503467  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:57.503479  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:57.503488  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:57.506159  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:57.506738  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:57.506751  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:57.506758  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:57.506761  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:57.509582  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:57.510181  130544 pod_ready.go:92] pod "kube-apiserver-ha-064080-m02" in "kube-system" namespace has status "Ready":"True"
	I0617 11:01:57.510198  130544 pod_ready.go:81] duration metric: took 507.48767ms for pod "kube-apiserver-ha-064080-m02" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:57.510207  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-064080" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:57.510270  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-064080
	I0617 11:01:57.510278  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:57.510285  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:57.510289  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:57.512876  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:57.513789  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:01:57.513803  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:57.513811  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:57.513816  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:57.516015  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:57.516510  130544 pod_ready.go:92] pod "kube-controller-manager-ha-064080" in "kube-system" namespace has status "Ready":"True"
	I0617 11:01:57.516523  130544 pod_ready.go:81] duration metric: took 6.310329ms for pod "kube-controller-manager-ha-064080" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:57.516531  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-064080-m02" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:57.516588  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-064080-m02
	I0617 11:01:57.516596  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:57.516603  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:57.516607  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:57.519029  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:57.519866  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:57.519882  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:57.519888  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:57.519893  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:57.522012  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:57.522722  130544 pod_ready.go:92] pod "kube-controller-manager-ha-064080-m02" in "kube-system" namespace has status "Ready":"True"
	I0617 11:01:57.522737  130544 pod_ready.go:81] duration metric: took 6.199889ms for pod "kube-controller-manager-ha-064080-m02" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:57.522745  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dd48x" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:57.650378  130544 request.go:629] Waited for 127.57795ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dd48x
	I0617 11:01:57.650468  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dd48x
	I0617 11:01:57.650481  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:57.650492  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:57.650501  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:57.653664  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:57.850755  130544 request.go:629] Waited for 196.379696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:01:57.850850  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:01:57.850858  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:57.850868  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:57.850876  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:57.854153  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:57.854713  130544 pod_ready.go:92] pod "kube-proxy-dd48x" in "kube-system" namespace has status "Ready":"True"
	I0617 11:01:57.854737  130544 pod_ready.go:81] duration metric: took 331.985119ms for pod "kube-proxy-dd48x" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:57.854751  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l55dg" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:58.050889  130544 request.go:629] Waited for 196.050442ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l55dg
	I0617 11:01:58.050991  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l55dg
	I0617 11:01:58.051002  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:58.051009  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:58.051016  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:58.054425  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:58.250603  130544 request.go:629] Waited for 195.380006ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:58.250663  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:01:58.250668  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:58.250675  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:58.250679  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:58.254094  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:58.256187  130544 pod_ready.go:92] pod "kube-proxy-l55dg" in "kube-system" namespace has status "Ready":"True"
	I0617 11:01:58.256218  130544 pod_ready.go:81] duration metric: took 401.459211ms for pod "kube-proxy-l55dg" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:58.256233  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-064080" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:58.450176  130544 request.go:629] Waited for 193.855201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-064080
	I0617 11:01:58.450258  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-064080
	I0617 11:01:58.450263  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:58.450271  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:58.450278  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:58.453333  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:58.650293  130544 request.go:629] Waited for 196.2939ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:01:58.650354  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:01:58.650359  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:58.650368  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:58.650374  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:58.653268  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:58.653653  130544 pod_ready.go:92] pod "kube-scheduler-ha-064080" in "kube-system" namespace has status "Ready":"True"
	I0617 11:01:58.653671  130544 pod_ready.go:81] duration metric: took 397.430801ms for pod "kube-scheduler-ha-064080" in "kube-system" namespace to be "Ready" ...
	I0617 11:01:58.653683  130544 pod_ready.go:38] duration metric: took 9.199642443s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
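The loop above polls each control-plane pod (and the node it runs on) roughly every 500ms until the pod reports the Ready condition. Below is a minimal client-go sketch of the same check, assuming a kubeconfig in the default location; the pod name and interval are taken from the log, but the code is illustrative rather than minikube's actual pod_ready helper.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: kubeconfig at the default path (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-ha-064080-m02", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // roughly the poll interval seen in the log
	}
}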
	I0617 11:01:58.653706  130544 api_server.go:52] waiting for apiserver process to appear ...
	I0617 11:01:58.653760  130544 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 11:01:58.670148  130544 api_server.go:72] duration metric: took 13.027718259s to wait for apiserver process to appear ...
	I0617 11:01:58.670177  130544 api_server.go:88] waiting for apiserver healthz status ...
	I0617 11:01:58.670201  130544 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0617 11:01:58.674870  130544 api_server.go:279] https://192.168.39.134:8443/healthz returned 200:
	ok
	I0617 11:01:58.674927  130544 round_trippers.go:463] GET https://192.168.39.134:8443/version
	I0617 11:01:58.674934  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:58.674941  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:58.674947  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:58.676006  130544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0617 11:01:58.676183  130544 api_server.go:141] control plane version: v1.30.1
	I0617 11:01:58.676202  130544 api_server.go:131] duration metric: took 6.019302ms to wait for apiserver health ...
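The healthz and version probes above are plain HTTPS GETs against the apiserver. A minimal sketch follows, assuming the endpoint from the log and skipping TLS verification for brevity; anonymous access to /healthz and /version is permitted by the apiserver's default RBAC, but a real client should trust the cluster CA instead.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Simplification for the example: skip certificate verification.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	for _, path := range []string{"/healthz", "/version"} {
		resp, err := client.Get("https://192.168.39.134:8443" + path)
		if err != nil {
			panic(err)
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("GET %s -> %s: %s\n", path, resp.Status, body)
	}
}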
	I0617 11:01:58.676209  130544 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 11:01:58.850677  130544 request.go:629] Waited for 174.389993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods
	I0617 11:01:58.850742  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods
	I0617 11:01:58.850748  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:58.850755  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:58.850759  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:58.859894  130544 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0617 11:01:58.865322  130544 system_pods.go:59] 17 kube-system pods found
	I0617 11:01:58.865352  130544 system_pods.go:61] "coredns-7db6d8ff4d-xbhnm" [be37a6ec-2a49-4a56-b8a3-0da865edb05d] Running
	I0617 11:01:58.865357  130544 system_pods.go:61] "coredns-7db6d8ff4d-zv99k" [c2453fd4-894d-4212-bc48-1803e28ddba8] Running
	I0617 11:01:58.865361  130544 system_pods.go:61] "etcd-ha-064080" [f7a1e80e-8ebc-496b-8919-ebf99a8dd4b4] Running
	I0617 11:01:58.865364  130544 system_pods.go:61] "etcd-ha-064080-m02" [7de6c88f-a0b9-4fa3-b4aa-e964191aa4e5] Running
	I0617 11:01:58.865369  130544 system_pods.go:61] "kindnet-48mb7" [67422049-6637-4ca3-8bd1-2b47a265829d] Running
	I0617 11:01:58.865372  130544 system_pods.go:61] "kindnet-7cqp4" [f4671f39-ca07-4520-bc35-dce8e53318de] Running
	I0617 11:01:58.865375  130544 system_pods.go:61] "kube-apiserver-ha-064080" [fd326be1-2b78-41e8-9b57-138ffdadac71] Running
	I0617 11:01:58.865380  130544 system_pods.go:61] "kube-apiserver-ha-064080-m02" [74164e88-591d-490e-b4f9-1d8ea635cd2d] Running
	I0617 11:01:58.865383  130544 system_pods.go:61] "kube-controller-manager-ha-064080" [142a6154-fcbf-4d5d-a222-21d1b46720cb] Running
	I0617 11:01:58.865386  130544 system_pods.go:61] "kube-controller-manager-ha-064080-m02" [f096dd77-2f79-479e-bd06-b02c942200c6] Running
	I0617 11:01:58.865389  130544 system_pods.go:61] "kube-proxy-dd48x" [e1bd1d47-a8a5-47a5-820c-dd86f7ea7765] Running
	I0617 11:01:58.865392  130544 system_pods.go:61] "kube-proxy-l55dg" [1d827d6c-0432-4162-924c-d43b66b08c26] Running
	I0617 11:01:58.865395  130544 system_pods.go:61] "kube-scheduler-ha-064080" [f9e62714-7ec7-47a9-ab16-6afada18c6d8] Running
	I0617 11:01:58.865401  130544 system_pods.go:61] "kube-scheduler-ha-064080-m02" [ec804903-8a64-4a3d-8843-9d2ec21d7158] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0617 11:01:58.865407  130544 system_pods.go:61] "kube-vip-ha-064080" [6b9259b1-ee46-4493-ba10-dcb32da03f57] Running
	I0617 11:01:58.865412  130544 system_pods.go:61] "kube-vip-ha-064080-m02" [8a4ad095-97bf-4a1f-8579-9e6a564f24ed] Running
	I0617 11:01:58.865415  130544 system_pods.go:61] "storage-provisioner" [5646fca8-9ebc-47c1-b5ff-c87b0ed800d8] Running
	I0617 11:01:58.865421  130544 system_pods.go:74] duration metric: took 189.206494ms to wait for pod list to return data ...
	I0617 11:01:58.865430  130544 default_sa.go:34] waiting for default service account to be created ...
	I0617 11:01:59.050832  130544 request.go:629] Waited for 185.308848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/default/serviceaccounts
	I0617 11:01:59.050891  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/default/serviceaccounts
	I0617 11:01:59.050896  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:59.050904  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:59.050908  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:59.053737  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:01:59.053973  130544 default_sa.go:45] found service account: "default"
	I0617 11:01:59.053992  130544 default_sa.go:55] duration metric: took 188.556002ms for default service account to be created ...
	I0617 11:01:59.054000  130544 system_pods.go:116] waiting for k8s-apps to be running ...
	I0617 11:01:59.250510  130544 request.go:629] Waited for 196.416211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods
	I0617 11:01:59.250592  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods
	I0617 11:01:59.250601  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:59.250611  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:59.250617  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:59.255603  130544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0617 11:01:59.259996  130544 system_pods.go:86] 17 kube-system pods found
	I0617 11:01:59.260019  130544 system_pods.go:89] "coredns-7db6d8ff4d-xbhnm" [be37a6ec-2a49-4a56-b8a3-0da865edb05d] Running
	I0617 11:01:59.260025  130544 system_pods.go:89] "coredns-7db6d8ff4d-zv99k" [c2453fd4-894d-4212-bc48-1803e28ddba8] Running
	I0617 11:01:59.260029  130544 system_pods.go:89] "etcd-ha-064080" [f7a1e80e-8ebc-496b-8919-ebf99a8dd4b4] Running
	I0617 11:01:59.260033  130544 system_pods.go:89] "etcd-ha-064080-m02" [7de6c88f-a0b9-4fa3-b4aa-e964191aa4e5] Running
	I0617 11:01:59.260037  130544 system_pods.go:89] "kindnet-48mb7" [67422049-6637-4ca3-8bd1-2b47a265829d] Running
	I0617 11:01:59.260041  130544 system_pods.go:89] "kindnet-7cqp4" [f4671f39-ca07-4520-bc35-dce8e53318de] Running
	I0617 11:01:59.260045  130544 system_pods.go:89] "kube-apiserver-ha-064080" [fd326be1-2b78-41e8-9b57-138ffdadac71] Running
	I0617 11:01:59.260049  130544 system_pods.go:89] "kube-apiserver-ha-064080-m02" [74164e88-591d-490e-b4f9-1d8ea635cd2d] Running
	I0617 11:01:59.260053  130544 system_pods.go:89] "kube-controller-manager-ha-064080" [142a6154-fcbf-4d5d-a222-21d1b46720cb] Running
	I0617 11:01:59.260058  130544 system_pods.go:89] "kube-controller-manager-ha-064080-m02" [f096dd77-2f79-479e-bd06-b02c942200c6] Running
	I0617 11:01:59.260062  130544 system_pods.go:89] "kube-proxy-dd48x" [e1bd1d47-a8a5-47a5-820c-dd86f7ea7765] Running
	I0617 11:01:59.260067  130544 system_pods.go:89] "kube-proxy-l55dg" [1d827d6c-0432-4162-924c-d43b66b08c26] Running
	I0617 11:01:59.260074  130544 system_pods.go:89] "kube-scheduler-ha-064080" [f9e62714-7ec7-47a9-ab16-6afada18c6d8] Running
	I0617 11:01:59.260085  130544 system_pods.go:89] "kube-scheduler-ha-064080-m02" [ec804903-8a64-4a3d-8843-9d2ec21d7158] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0617 11:01:59.260092  130544 system_pods.go:89] "kube-vip-ha-064080" [6b9259b1-ee46-4493-ba10-dcb32da03f57] Running
	I0617 11:01:59.260098  130544 system_pods.go:89] "kube-vip-ha-064080-m02" [8a4ad095-97bf-4a1f-8579-9e6a564f24ed] Running
	I0617 11:01:59.260102  130544 system_pods.go:89] "storage-provisioner" [5646fca8-9ebc-47c1-b5ff-c87b0ed800d8] Running
	I0617 11:01:59.260109  130544 system_pods.go:126] duration metric: took 206.102612ms to wait for k8s-apps to be running ...
	I0617 11:01:59.260118  130544 system_svc.go:44] waiting for kubelet service to be running ....
	I0617 11:01:59.260160  130544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:01:59.276507  130544 system_svc.go:56] duration metric: took 16.376864ms WaitForService to wait for kubelet
	I0617 11:01:59.276538  130544 kubeadm.go:576] duration metric: took 13.634112303s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 11:01:59.276563  130544 node_conditions.go:102] verifying NodePressure condition ...
	I0617 11:01:59.450482  130544 request.go:629] Waited for 173.83515ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/nodes
	I0617 11:01:59.450553  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes
	I0617 11:01:59.450560  130544 round_trippers.go:469] Request Headers:
	I0617 11:01:59.450567  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:01:59.450577  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:01:59.454233  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:01:59.454924  130544 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 11:01:59.454961  130544 node_conditions.go:123] node cpu capacity is 2
	I0617 11:01:59.454978  130544 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 11:01:59.454983  130544 node_conditions.go:123] node cpu capacity is 2
	I0617 11:01:59.454989  130544 node_conditions.go:105] duration metric: took 178.4202ms to run NodePressure ...
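The capacity figures above come from each node's status. A minimal client-go sketch that lists the same two values; printNodeCapacity and its clientset argument are illustrative, with the clientset built as in the earlier sketch.

package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity prints, for every node, the two capacity values the log
// reports: ephemeral storage and CPU.
func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}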
	I0617 11:01:59.455005  130544 start.go:240] waiting for startup goroutines ...
	I0617 11:01:59.455037  130544 start.go:254] writing updated cluster config ...
	I0617 11:01:59.457035  130544 out.go:177] 
	I0617 11:01:59.458351  130544 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:01:59.458437  130544 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/config.json ...
	I0617 11:01:59.459860  130544 out.go:177] * Starting "ha-064080-m03" control-plane node in "ha-064080" cluster
	I0617 11:01:59.460990  130544 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 11:01:59.461013  130544 cache.go:56] Caching tarball of preloaded images
	I0617 11:01:59.461124  130544 preload.go:173] Found /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0617 11:01:59.461137  130544 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0617 11:01:59.461218  130544 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/config.json ...
	I0617 11:01:59.461372  130544 start.go:360] acquireMachinesLock for ha-064080-m03: {Name:mk519b8956d160a9d2b042f25b899a5ee0efa72e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 11:01:59.461415  130544 start.go:364] duration metric: took 23.722µs to acquireMachinesLock for "ha-064080-m03"
	I0617 11:01:59.461432  130544 start.go:93] Provisioning new machine with config: &{Name:ha-064080 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-064080 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 11:01:59.461526  130544 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0617 11:01:59.462923  130544 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0617 11:01:59.463000  130544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:01:59.463046  130544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:01:59.478511  130544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35945
	I0617 11:01:59.478946  130544 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:01:59.479469  130544 main.go:141] libmachine: Using API Version  1
	I0617 11:01:59.479491  130544 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:01:59.479876  130544 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:01:59.480067  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetMachineName
	I0617 11:01:59.480259  130544 main.go:141] libmachine: (ha-064080-m03) Calling .DriverName
	I0617 11:01:59.480435  130544 start.go:159] libmachine.API.Create for "ha-064080" (driver="kvm2")
	I0617 11:01:59.480463  130544 client.go:168] LocalClient.Create starting
	I0617 11:01:59.480498  130544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem
	I0617 11:01:59.480535  130544 main.go:141] libmachine: Decoding PEM data...
	I0617 11:01:59.480556  130544 main.go:141] libmachine: Parsing certificate...
	I0617 11:01:59.480634  130544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem
	I0617 11:01:59.480660  130544 main.go:141] libmachine: Decoding PEM data...
	I0617 11:01:59.480677  130544 main.go:141] libmachine: Parsing certificate...
	I0617 11:01:59.480702  130544 main.go:141] libmachine: Running pre-create checks...
	I0617 11:01:59.480713  130544 main.go:141] libmachine: (ha-064080-m03) Calling .PreCreateCheck
	I0617 11:01:59.480887  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetConfigRaw
	I0617 11:01:59.481280  130544 main.go:141] libmachine: Creating machine...
	I0617 11:01:59.481293  130544 main.go:141] libmachine: (ha-064080-m03) Calling .Create
	I0617 11:01:59.481419  130544 main.go:141] libmachine: (ha-064080-m03) Creating KVM machine...
	I0617 11:01:59.482671  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found existing default KVM network
	I0617 11:01:59.482871  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found existing private KVM network mk-ha-064080
	I0617 11:01:59.482981  130544 main.go:141] libmachine: (ha-064080-m03) Setting up store path in /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03 ...
	I0617 11:01:59.483003  130544 main.go:141] libmachine: (ha-064080-m03) Building disk image from file:///home/jenkins/minikube-integration/19084-112967/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso
	I0617 11:01:59.483062  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:01:59.482961  131318 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 11:01:59.483160  130544 main.go:141] libmachine: (ha-064080-m03) Downloading /home/jenkins/minikube-integration/19084-112967/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19084-112967/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso...
	I0617 11:01:59.715675  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:01:59.715521  131318 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03/id_rsa...
	I0617 11:01:59.785679  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:01:59.785539  131318 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03/ha-064080-m03.rawdisk...
	I0617 11:01:59.785721  130544 main.go:141] libmachine: (ha-064080-m03) DBG | Writing magic tar header
	I0617 11:01:59.785769  130544 main.go:141] libmachine: (ha-064080-m03) DBG | Writing SSH key tar header
	I0617 11:01:59.785805  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:01:59.785696  131318 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03 ...
	I0617 11:01:59.785828  130544 main.go:141] libmachine: (ha-064080-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03
	I0617 11:01:59.785843  130544 main.go:141] libmachine: (ha-064080-m03) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03 (perms=drwx------)
	I0617 11:01:59.785851  130544 main.go:141] libmachine: (ha-064080-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube/machines
	I0617 11:01:59.785869  130544 main.go:141] libmachine: (ha-064080-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 11:01:59.785877  130544 main.go:141] libmachine: (ha-064080-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967
	I0617 11:01:59.785892  130544 main.go:141] libmachine: (ha-064080-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0617 11:01:59.785904  130544 main.go:141] libmachine: (ha-064080-m03) DBG | Checking permissions on dir: /home/jenkins
	I0617 11:01:59.785919  130544 main.go:141] libmachine: (ha-064080-m03) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube/machines (perms=drwxr-xr-x)
	I0617 11:01:59.785931  130544 main.go:141] libmachine: (ha-064080-m03) DBG | Checking permissions on dir: /home
	I0617 11:01:59.785946  130544 main.go:141] libmachine: (ha-064080-m03) DBG | Skipping /home - not owner
	I0617 11:01:59.785963  130544 main.go:141] libmachine: (ha-064080-m03) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube (perms=drwxr-xr-x)
	I0617 11:01:59.785975  130544 main.go:141] libmachine: (ha-064080-m03) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967 (perms=drwxrwxr-x)
	I0617 11:01:59.785991  130544 main.go:141] libmachine: (ha-064080-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0617 11:01:59.786003  130544 main.go:141] libmachine: (ha-064080-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0617 11:01:59.786019  130544 main.go:141] libmachine: (ha-064080-m03) Creating domain...
	I0617 11:01:59.786903  130544 main.go:141] libmachine: (ha-064080-m03) define libvirt domain using xml: 
	I0617 11:01:59.786925  130544 main.go:141] libmachine: (ha-064080-m03) <domain type='kvm'>
	I0617 11:01:59.786935  130544 main.go:141] libmachine: (ha-064080-m03)   <name>ha-064080-m03</name>
	I0617 11:01:59.786948  130544 main.go:141] libmachine: (ha-064080-m03)   <memory unit='MiB'>2200</memory>
	I0617 11:01:59.786956  130544 main.go:141] libmachine: (ha-064080-m03)   <vcpu>2</vcpu>
	I0617 11:01:59.786962  130544 main.go:141] libmachine: (ha-064080-m03)   <features>
	I0617 11:01:59.786971  130544 main.go:141] libmachine: (ha-064080-m03)     <acpi/>
	I0617 11:01:59.786976  130544 main.go:141] libmachine: (ha-064080-m03)     <apic/>
	I0617 11:01:59.786986  130544 main.go:141] libmachine: (ha-064080-m03)     <pae/>
	I0617 11:01:59.786992  130544 main.go:141] libmachine: (ha-064080-m03)     
	I0617 11:01:59.787003  130544 main.go:141] libmachine: (ha-064080-m03)   </features>
	I0617 11:01:59.787009  130544 main.go:141] libmachine: (ha-064080-m03)   <cpu mode='host-passthrough'>
	I0617 11:01:59.787016  130544 main.go:141] libmachine: (ha-064080-m03)   
	I0617 11:01:59.787023  130544 main.go:141] libmachine: (ha-064080-m03)   </cpu>
	I0617 11:01:59.787053  130544 main.go:141] libmachine: (ha-064080-m03)   <os>
	I0617 11:01:59.787083  130544 main.go:141] libmachine: (ha-064080-m03)     <type>hvm</type>
	I0617 11:01:59.787093  130544 main.go:141] libmachine: (ha-064080-m03)     <boot dev='cdrom'/>
	I0617 11:01:59.787107  130544 main.go:141] libmachine: (ha-064080-m03)     <boot dev='hd'/>
	I0617 11:01:59.787118  130544 main.go:141] libmachine: (ha-064080-m03)     <bootmenu enable='no'/>
	I0617 11:01:59.787125  130544 main.go:141] libmachine: (ha-064080-m03)   </os>
	I0617 11:01:59.787132  130544 main.go:141] libmachine: (ha-064080-m03)   <devices>
	I0617 11:01:59.787138  130544 main.go:141] libmachine: (ha-064080-m03)     <disk type='file' device='cdrom'>
	I0617 11:01:59.787147  130544 main.go:141] libmachine: (ha-064080-m03)       <source file='/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03/boot2docker.iso'/>
	I0617 11:01:59.787152  130544 main.go:141] libmachine: (ha-064080-m03)       <target dev='hdc' bus='scsi'/>
	I0617 11:01:59.787161  130544 main.go:141] libmachine: (ha-064080-m03)       <readonly/>
	I0617 11:01:59.787165  130544 main.go:141] libmachine: (ha-064080-m03)     </disk>
	I0617 11:01:59.787176  130544 main.go:141] libmachine: (ha-064080-m03)     <disk type='file' device='disk'>
	I0617 11:01:59.787185  130544 main.go:141] libmachine: (ha-064080-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0617 11:01:59.787200  130544 main.go:141] libmachine: (ha-064080-m03)       <source file='/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03/ha-064080-m03.rawdisk'/>
	I0617 11:01:59.787212  130544 main.go:141] libmachine: (ha-064080-m03)       <target dev='hda' bus='virtio'/>
	I0617 11:01:59.787222  130544 main.go:141] libmachine: (ha-064080-m03)     </disk>
	I0617 11:01:59.787231  130544 main.go:141] libmachine: (ha-064080-m03)     <interface type='network'>
	I0617 11:01:59.787240  130544 main.go:141] libmachine: (ha-064080-m03)       <source network='mk-ha-064080'/>
	I0617 11:01:59.787254  130544 main.go:141] libmachine: (ha-064080-m03)       <model type='virtio'/>
	I0617 11:01:59.787273  130544 main.go:141] libmachine: (ha-064080-m03)     </interface>
	I0617 11:01:59.787286  130544 main.go:141] libmachine: (ha-064080-m03)     <interface type='network'>
	I0617 11:01:59.787297  130544 main.go:141] libmachine: (ha-064080-m03)       <source network='default'/>
	I0617 11:01:59.787306  130544 main.go:141] libmachine: (ha-064080-m03)       <model type='virtio'/>
	I0617 11:01:59.787316  130544 main.go:141] libmachine: (ha-064080-m03)     </interface>
	I0617 11:01:59.787336  130544 main.go:141] libmachine: (ha-064080-m03)     <serial type='pty'>
	I0617 11:01:59.787355  130544 main.go:141] libmachine: (ha-064080-m03)       <target port='0'/>
	I0617 11:01:59.787365  130544 main.go:141] libmachine: (ha-064080-m03)     </serial>
	I0617 11:01:59.787386  130544 main.go:141] libmachine: (ha-064080-m03)     <console type='pty'>
	I0617 11:01:59.787400  130544 main.go:141] libmachine: (ha-064080-m03)       <target type='serial' port='0'/>
	I0617 11:01:59.787410  130544 main.go:141] libmachine: (ha-064080-m03)     </console>
	I0617 11:01:59.787418  130544 main.go:141] libmachine: (ha-064080-m03)     <rng model='virtio'>
	I0617 11:01:59.787430  130544 main.go:141] libmachine: (ha-064080-m03)       <backend model='random'>/dev/random</backend>
	I0617 11:01:59.787477  130544 main.go:141] libmachine: (ha-064080-m03)     </rng>
	I0617 11:01:59.787501  130544 main.go:141] libmachine: (ha-064080-m03)     
	I0617 11:01:59.787515  130544 main.go:141] libmachine: (ha-064080-m03)     
	I0617 11:01:59.787526  130544 main.go:141] libmachine: (ha-064080-m03)   </devices>
	I0617 11:01:59.787539  130544 main.go:141] libmachine: (ha-064080-m03) </domain>
	I0617 11:01:59.787550  130544 main.go:141] libmachine: (ha-064080-m03) 
	I0617 11:01:59.793962  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:9d:11:91 in network default
	I0617 11:01:59.794641  130544 main.go:141] libmachine: (ha-064080-m03) Ensuring networks are active...
	I0617 11:01:59.794665  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:01:59.795430  130544 main.go:141] libmachine: (ha-064080-m03) Ensuring network default is active
	I0617 11:01:59.795789  130544 main.go:141] libmachine: (ha-064080-m03) Ensuring network mk-ha-064080 is active
	I0617 11:01:59.796164  130544 main.go:141] libmachine: (ha-064080-m03) Getting domain xml...
	I0617 11:01:59.796910  130544 main.go:141] libmachine: (ha-064080-m03) Creating domain...
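The XML printed above is handed to libvirt to define and boot the VM. A minimal sketch with the libvirt Go bindings, assuming the domain XML is available as a string; the import path and helper name are assumptions for the example, not the kvm2 driver's exact code.

package sketch

import (
	"libvirt.org/go/libvirt"
)

// createDomain defines a persistent libvirt domain from XML like the one in
// the log above and boots it. The connection URI matches the KVMQemuURI from
// the profile config; everything else is illustrative.
func createDomain(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML) // persists the definition
	if err != nil {
		return err
	}
	defer dom.Free()

	return dom.Create() // starts (boots) the defined domain
}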
	I0617 11:02:01.039485  130544 main.go:141] libmachine: (ha-064080-m03) Waiting to get IP...
	I0617 11:02:01.040173  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:01.040567  130544 main.go:141] libmachine: (ha-064080-m03) DBG | unable to find current IP address of domain ha-064080-m03 in network mk-ha-064080
	I0617 11:02:01.040617  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:02:01.040560  131318 retry.go:31] will retry after 256.954057ms: waiting for machine to come up
	I0617 11:02:01.299313  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:01.299735  130544 main.go:141] libmachine: (ha-064080-m03) DBG | unable to find current IP address of domain ha-064080-m03 in network mk-ha-064080
	I0617 11:02:01.299760  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:02:01.299698  131318 retry.go:31] will retry after 349.087473ms: waiting for machine to come up
	I0617 11:02:01.650272  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:01.650691  130544 main.go:141] libmachine: (ha-064080-m03) DBG | unable to find current IP address of domain ha-064080-m03 in network mk-ha-064080
	I0617 11:02:01.650718  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:02:01.650648  131318 retry.go:31] will retry after 430.560067ms: waiting for machine to come up
	I0617 11:02:02.083211  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:02.083690  130544 main.go:141] libmachine: (ha-064080-m03) DBG | unable to find current IP address of domain ha-064080-m03 in network mk-ha-064080
	I0617 11:02:02.083728  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:02:02.083658  131318 retry.go:31] will retry after 607.889522ms: waiting for machine to come up
	I0617 11:02:02.693338  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:02.693773  130544 main.go:141] libmachine: (ha-064080-m03) DBG | unable to find current IP address of domain ha-064080-m03 in network mk-ha-064080
	I0617 11:02:02.693807  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:02:02.693723  131318 retry.go:31] will retry after 468.818335ms: waiting for machine to come up
	I0617 11:02:03.164451  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:03.164847  130544 main.go:141] libmachine: (ha-064080-m03) DBG | unable to find current IP address of domain ha-064080-m03 in network mk-ha-064080
	I0617 11:02:03.164876  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:02:03.164787  131318 retry.go:31] will retry after 935.496879ms: waiting for machine to come up
	I0617 11:02:04.101800  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:04.102171  130544 main.go:141] libmachine: (ha-064080-m03) DBG | unable to find current IP address of domain ha-064080-m03 in network mk-ha-064080
	I0617 11:02:04.102201  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:02:04.102117  131318 retry.go:31] will retry after 1.166024389s: waiting for machine to come up
	I0617 11:02:05.269896  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:05.270443  130544 main.go:141] libmachine: (ha-064080-m03) DBG | unable to find current IP address of domain ha-064080-m03 in network mk-ha-064080
	I0617 11:02:05.270472  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:02:05.270400  131318 retry.go:31] will retry after 1.125834158s: waiting for machine to come up
	I0617 11:02:06.397857  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:06.398432  130544 main.go:141] libmachine: (ha-064080-m03) DBG | unable to find current IP address of domain ha-064080-m03 in network mk-ha-064080
	I0617 11:02:06.398461  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:02:06.398384  131318 retry.go:31] will retry after 1.40014932s: waiting for machine to come up
	I0617 11:02:07.800662  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:07.801238  130544 main.go:141] libmachine: (ha-064080-m03) DBG | unable to find current IP address of domain ha-064080-m03 in network mk-ha-064080
	I0617 11:02:07.801265  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:02:07.801142  131318 retry.go:31] will retry after 2.098669841s: waiting for machine to come up
	I0617 11:02:09.901171  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:09.901676  130544 main.go:141] libmachine: (ha-064080-m03) DBG | unable to find current IP address of domain ha-064080-m03 in network mk-ha-064080
	I0617 11:02:09.901708  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:02:09.901627  131318 retry.go:31] will retry after 2.799457249s: waiting for machine to come up
	I0617 11:02:12.704433  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:12.704852  130544 main.go:141] libmachine: (ha-064080-m03) DBG | unable to find current IP address of domain ha-064080-m03 in network mk-ha-064080
	I0617 11:02:12.704873  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:02:12.704820  131318 retry.go:31] will retry after 2.829077131s: waiting for machine to come up
	I0617 11:02:15.535995  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:15.536390  130544 main.go:141] libmachine: (ha-064080-m03) DBG | unable to find current IP address of domain ha-064080-m03 in network mk-ha-064080
	I0617 11:02:15.536412  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:02:15.536359  131318 retry.go:31] will retry after 2.775553712s: waiting for machine to come up
	I0617 11:02:18.314893  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:18.315231  130544 main.go:141] libmachine: (ha-064080-m03) DBG | unable to find current IP address of domain ha-064080-m03 in network mk-ha-064080
	I0617 11:02:18.315260  130544 main.go:141] libmachine: (ha-064080-m03) DBG | I0617 11:02:18.315207  131318 retry.go:31] will retry after 5.321724574s: waiting for machine to come up
	I0617 11:02:23.641110  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:23.641531  130544 main.go:141] libmachine: (ha-064080-m03) Found IP for machine: 192.168.39.168
	I0617 11:02:23.641560  130544 main.go:141] libmachine: (ha-064080-m03) Reserving static IP address...
	I0617 11:02:23.641577  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has current primary IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:23.642007  130544 main.go:141] libmachine: (ha-064080-m03) DBG | unable to find host DHCP lease matching {name: "ha-064080-m03", mac: "52:54:00:97:31:82", ip: "192.168.39.168"} in network mk-ha-064080
	I0617 11:02:23.718332  130544 main.go:141] libmachine: (ha-064080-m03) DBG | Getting to WaitForSSH function...
	I0617 11:02:23.718366  130544 main.go:141] libmachine: (ha-064080-m03) Reserved static IP address: 192.168.39.168
	I0617 11:02:23.718417  130544 main.go:141] libmachine: (ha-064080-m03) Waiting for SSH to be available...
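	The back-off visible in the "will retry after ..." messages above can be sketched as a small Go loop. This is an illustration only: the helper names below (lookupLease, waitForLeasedIP) are hypothetical stand-ins for the libvirt lease query, not minikube's actual retry.go API.

	// Sketch: retry-with-growing-backoff while waiting for a DHCP lease,
	// in the spirit of the retry messages above. Hypothetical helpers.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	var errNoLease = errors.New("no DHCP lease yet")

	// lookupLease stands in for querying libvirt for the domain's lease.
	func lookupLease(mac string) (string, error) {
		return "", errNoLease // pretend the lease has not appeared yet
	}

	// waitForLeasedIP polls until a lease appears or the deadline passes,
	// sleeping a jittered, growing interval between attempts.
	func waitForLeasedIP(mac string, deadline time.Duration) (string, error) {
		start := time.Now()
		wait := 500 * time.Millisecond
		for time.Since(start) < deadline {
			if ip, err := lookupLease(mac); err == nil {
				return ip, nil
			}
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			// grow the interval with a little jitter, capped at ~5s
			wait += time.Duration(rand.Int63n(int64(wait)))
			if wait > 5*time.Second {
				wait = 5 * time.Second
			}
		}
		return "", fmt.Errorf("timed out waiting for DHCP lease for %s", mac)
	}

	func main() {
		if _, err := waitForLeasedIP("52:54:00:97:31:82", 3*time.Second); err != nil {
			fmt.Println(err)
		}
	}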
	I0617 11:02:23.720882  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:23.721268  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:minikube Clientid:01:52:54:00:97:31:82}
	I0617 11:02:23.721302  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:23.721524  130544 main.go:141] libmachine: (ha-064080-m03) DBG | Using SSH client type: external
	I0617 11:02:23.721555  130544 main.go:141] libmachine: (ha-064080-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03/id_rsa (-rw-------)
	I0617 11:02:23.721585  130544 main.go:141] libmachine: (ha-064080-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.168 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 11:02:23.721600  130544 main.go:141] libmachine: (ha-064080-m03) DBG | About to run SSH command:
	I0617 11:02:23.721618  130544 main.go:141] libmachine: (ha-064080-m03) DBG | exit 0
	I0617 11:02:23.843671  130544 main.go:141] libmachine: (ha-064080-m03) DBG | SSH cmd err, output: <nil>: 
	I0617 11:02:23.843974  130544 main.go:141] libmachine: (ha-064080-m03) KVM machine creation complete!
	I0617 11:02:23.844253  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetConfigRaw
	I0617 11:02:23.844765  130544 main.go:141] libmachine: (ha-064080-m03) Calling .DriverName
	I0617 11:02:23.844966  130544 main.go:141] libmachine: (ha-064080-m03) Calling .DriverName
	I0617 11:02:23.845164  130544 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0617 11:02:23.845179  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetState
	I0617 11:02:23.846418  130544 main.go:141] libmachine: Detecting operating system of created instance...
	I0617 11:02:23.846434  130544 main.go:141] libmachine: Waiting for SSH to be available...
	I0617 11:02:23.846442  130544 main.go:141] libmachine: Getting to WaitForSSH function...
	I0617 11:02:23.846451  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHHostname
	I0617 11:02:23.848936  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:23.849347  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:02:23.849373  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:23.849587  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHPort
	I0617 11:02:23.849800  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:02:23.849973  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:02:23.850131  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHUsername
	I0617 11:02:23.850290  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:02:23.850597  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0617 11:02:23.850616  130544 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0617 11:02:23.950947  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 11:02:23.950975  130544 main.go:141] libmachine: Detecting the provisioner...
	I0617 11:02:23.950983  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHHostname
	I0617 11:02:23.954086  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:23.954502  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:02:23.954532  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:23.954701  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHPort
	I0617 11:02:23.954917  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:02:23.955121  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:02:23.955279  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHUsername
	I0617 11:02:23.955439  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:02:23.955640  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0617 11:02:23.955653  130544 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0617 11:02:24.060089  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0617 11:02:24.060155  130544 main.go:141] libmachine: found compatible host: buildroot
	I0617 11:02:24.060169  130544 main.go:141] libmachine: Provisioning with buildroot...
	I0617 11:02:24.060183  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetMachineName
	I0617 11:02:24.060445  130544 buildroot.go:166] provisioning hostname "ha-064080-m03"
	I0617 11:02:24.060477  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetMachineName
	I0617 11:02:24.060699  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHHostname
	I0617 11:02:24.063129  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:24.063498  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:02:24.063519  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:24.063664  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHPort
	I0617 11:02:24.063868  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:02:24.064049  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:02:24.064234  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHUsername
	I0617 11:02:24.064423  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:02:24.064624  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0617 11:02:24.064637  130544 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-064080-m03 && echo "ha-064080-m03" | sudo tee /etc/hostname
	I0617 11:02:24.187321  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-064080-m03
	
	I0617 11:02:24.187346  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHHostname
	I0617 11:02:24.190117  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:24.190508  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:02:24.190530  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:24.190733  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHPort
	I0617 11:02:24.190979  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:02:24.191207  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:02:24.191385  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHUsername
	I0617 11:02:24.191589  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:02:24.191816  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0617 11:02:24.191849  130544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-064080-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-064080-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-064080-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 11:02:24.306947  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
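	The hostname step above is two shell snippets pushed over SSH: one sets /etc/hostname, the other patches /etc/hosts. A minimal Go sketch of assembling those command strings follows; setHostnameCmd and patchHostsCmd are illustrative names, not minikube's provision helpers.

	// Sketch: building the hostname/hosts-file commands seen above.
	package main

	import "fmt"

	func setHostnameCmd(name string) string {
		return fmt.Sprintf("sudo hostname %[1]s && echo %[1]q | sudo tee /etc/hostname", name)
	}

	func patchHostsCmd(name string) string {
		return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, name)
	}

	func main() {
		fmt.Println(setHostnameCmd("ha-064080-m03"))
		fmt.Println(patchHostsCmd("ha-064080-m03"))
	}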
	I0617 11:02:24.306985  130544 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 11:02:24.307006  130544 buildroot.go:174] setting up certificates
	I0617 11:02:24.307024  130544 provision.go:84] configureAuth start
	I0617 11:02:24.307035  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetMachineName
	I0617 11:02:24.307388  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetIP
	I0617 11:02:24.310096  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:24.310550  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:02:24.310599  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:24.310881  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHHostname
	I0617 11:02:24.312970  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:24.313309  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:02:24.313334  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:24.313496  130544 provision.go:143] copyHostCerts
	I0617 11:02:24.313535  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 11:02:24.313575  130544 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 11:02:24.313587  130544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 11:02:24.313661  130544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 11:02:24.313757  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 11:02:24.313800  130544 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 11:02:24.313810  130544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 11:02:24.313852  130544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 11:02:24.313916  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 11:02:24.313942  130544 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 11:02:24.313951  130544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 11:02:24.313985  130544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 11:02:24.314053  130544 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.ha-064080-m03 san=[127.0.0.1 192.168.39.168 ha-064080-m03 localhost minikube]
	I0617 11:02:24.765321  130544 provision.go:177] copyRemoteCerts
	I0617 11:02:24.765392  130544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 11:02:24.765426  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHHostname
	I0617 11:02:24.768433  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:24.768875  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:02:24.768901  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:24.769113  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHPort
	I0617 11:02:24.769297  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:02:24.769463  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHUsername
	I0617 11:02:24.769577  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03/id_rsa Username:docker}
	I0617 11:02:24.849664  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0617 11:02:24.849742  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 11:02:24.874547  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0617 11:02:24.874638  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0617 11:02:24.899270  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0617 11:02:24.899357  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0617 11:02:24.924418  130544 provision.go:87] duration metric: took 617.379218ms to configureAuth
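	The server certificate generated during configureAuth above carries both IP and DNS SANs for the new node. The Go sketch below issues a self-signed certificate with that SAN list using crypto/x509; the real flow signs with the shared minikube CA and handles keys differently, so treat this purely as an illustration.

	// Sketch: issuing a server cert with the SANs logged above
	// (127.0.0.1, 192.168.39.168, ha-064080-m03, localhost, minikube).
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-064080-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-064080-m03", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.168")},
		}
		// Self-signed here for brevity; the log shows signing against the minikube CA.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
	}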
	I0617 11:02:24.924452  130544 buildroot.go:189] setting minikube options for container-runtime
	I0617 11:02:24.924770  130544 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:02:24.924879  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHHostname
	I0617 11:02:24.927703  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:24.928104  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:02:24.928137  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:24.928224  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHPort
	I0617 11:02:24.928474  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:02:24.928634  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:02:24.928833  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHUsername
	I0617 11:02:24.929030  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:02:24.929224  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0617 11:02:24.929245  130544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 11:02:25.200352  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 11:02:25.200386  130544 main.go:141] libmachine: Checking connection to Docker...
	I0617 11:02:25.200395  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetURL
	I0617 11:02:25.201530  130544 main.go:141] libmachine: (ha-064080-m03) DBG | Using libvirt version 6000000
	I0617 11:02:25.203830  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:25.204218  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:02:25.204249  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:25.204438  130544 main.go:141] libmachine: Docker is up and running!
	I0617 11:02:25.204458  130544 main.go:141] libmachine: Reticulating splines...
	I0617 11:02:25.204467  130544 client.go:171] duration metric: took 25.723991787s to LocalClient.Create
	I0617 11:02:25.204499  130544 start.go:167] duration metric: took 25.724065148s to libmachine.API.Create "ha-064080"
	I0617 11:02:25.204513  130544 start.go:293] postStartSetup for "ha-064080-m03" (driver="kvm2")
	I0617 11:02:25.204544  130544 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 11:02:25.204569  130544 main.go:141] libmachine: (ha-064080-m03) Calling .DriverName
	I0617 11:02:25.204850  130544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 11:02:25.204877  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHHostname
	I0617 11:02:25.207140  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:25.207501  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:02:25.207528  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:25.207670  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHPort
	I0617 11:02:25.207859  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:02:25.208006  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHUsername
	I0617 11:02:25.208126  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03/id_rsa Username:docker}
	I0617 11:02:25.289996  130544 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 11:02:25.294386  130544 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 11:02:25.294413  130544 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 11:02:25.294476  130544 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 11:02:25.294542  130544 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 11:02:25.294552  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> /etc/ssl/certs/1201742.pem
	I0617 11:02:25.294632  130544 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 11:02:25.303876  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 11:02:25.328687  130544 start.go:296] duration metric: took 124.142586ms for postStartSetup
	I0617 11:02:25.328741  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetConfigRaw
	I0617 11:02:25.329349  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetIP
	I0617 11:02:25.333130  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:25.333562  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:02:25.333593  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:25.333849  130544 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/config.json ...
	I0617 11:02:25.334050  130544 start.go:128] duration metric: took 25.872513014s to createHost
	I0617 11:02:25.334087  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHHostname
	I0617 11:02:25.336268  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:25.336692  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:02:25.336720  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:25.336884  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHPort
	I0617 11:02:25.337070  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:02:25.337236  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:02:25.337374  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHUsername
	I0617 11:02:25.337535  130544 main.go:141] libmachine: Using SSH client type: native
	I0617 11:02:25.337715  130544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0617 11:02:25.337726  130544 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0617 11:02:25.440100  130544 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718622145.416396329
	
	I0617 11:02:25.440123  130544 fix.go:216] guest clock: 1718622145.416396329
	I0617 11:02:25.440130  130544 fix.go:229] Guest: 2024-06-17 11:02:25.416396329 +0000 UTC Remote: 2024-06-17 11:02:25.334063285 +0000 UTC m=+152.840982290 (delta=82.333044ms)
	I0617 11:02:25.440149  130544 fix.go:200] guest clock delta is within tolerance: 82.333044ms
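	The guest-clock check above compares the VM's reported time with the host's and accepts a small skew. A tiny sketch of that comparison follows; the 2-second tolerance below is an assumed value chosen for illustration, not necessarily the one minikube applies.

	// Sketch: guest-vs-host clock delta check, hypothetical tolerance.
	package main

	import (
		"fmt"
		"time"
	)

	func clockDelta(guest, host time.Time) (time.Duration, bool) {
		const tolerance = 2 * time.Second // assumed value for the sketch
		d := host.Sub(guest)
		if d < 0 {
			d = -d
		}
		return d, d <= tolerance
	}

	func main() {
		guest := time.Unix(1718622145, 416396329)
		host := guest.Add(82333044 * time.Nanosecond) // the 82.333044ms delta from the log
		d, ok := clockDelta(guest, host)
		fmt.Printf("delta=%v within tolerance: %v\n", d, ok)
	}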
	I0617 11:02:25.440157  130544 start.go:83] releasing machines lock for "ha-064080-m03", held for 25.978732098s
	I0617 11:02:25.440178  130544 main.go:141] libmachine: (ha-064080-m03) Calling .DriverName
	I0617 11:02:25.440409  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetIP
	I0617 11:02:25.442842  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:25.443279  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:02:25.443309  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:25.445756  130544 out.go:177] * Found network options:
	I0617 11:02:25.447161  130544 out.go:177]   - NO_PROXY=192.168.39.134,192.168.39.104
	W0617 11:02:25.448497  130544 proxy.go:119] fail to check proxy env: Error ip not in block
	W0617 11:02:25.448529  130544 proxy.go:119] fail to check proxy env: Error ip not in block
	I0617 11:02:25.448549  130544 main.go:141] libmachine: (ha-064080-m03) Calling .DriverName
	I0617 11:02:25.449107  130544 main.go:141] libmachine: (ha-064080-m03) Calling .DriverName
	I0617 11:02:25.449284  130544 main.go:141] libmachine: (ha-064080-m03) Calling .DriverName
	I0617 11:02:25.449371  130544 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 11:02:25.449399  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHHostname
	W0617 11:02:25.449510  130544 proxy.go:119] fail to check proxy env: Error ip not in block
	W0617 11:02:25.449537  130544 proxy.go:119] fail to check proxy env: Error ip not in block
	I0617 11:02:25.449593  130544 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 11:02:25.449615  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHHostname
	I0617 11:02:25.452286  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:25.452380  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:25.452664  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:02:25.452717  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:25.452748  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:02:25.452764  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:25.452802  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHPort
	I0617 11:02:25.453034  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:02:25.453058  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHPort
	I0617 11:02:25.453238  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHUsername
	I0617 11:02:25.453318  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:02:25.453411  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03/id_rsa Username:docker}
	I0617 11:02:25.453494  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHUsername
	I0617 11:02:25.453676  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03/id_rsa Username:docker}
	I0617 11:02:25.689292  130544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 11:02:25.695882  130544 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 11:02:25.695961  130544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 11:02:25.712650  130544 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 11:02:25.712675  130544 start.go:494] detecting cgroup driver to use...
	I0617 11:02:25.712739  130544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 11:02:25.730961  130544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 11:02:25.746533  130544 docker.go:217] disabling cri-docker service (if available) ...
	I0617 11:02:25.746583  130544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 11:02:25.760480  130544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 11:02:25.774935  130544 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 11:02:25.906162  130544 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 11:02:26.058886  130544 docker.go:233] disabling docker service ...
	I0617 11:02:26.058962  130544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 11:02:26.073999  130544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 11:02:26.086932  130544 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 11:02:26.228781  130544 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 11:02:26.348538  130544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 11:02:26.364179  130544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 11:02:26.383382  130544 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0617 11:02:26.383443  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:02:26.394142  130544 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 11:02:26.394197  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:02:26.405621  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:02:26.416482  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:02:26.427107  130544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 11:02:26.437920  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:02:26.448561  130544 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:02:26.466142  130544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:02:26.476726  130544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 11:02:26.486167  130544 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 11:02:26.486215  130544 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 11:02:26.500347  130544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 11:02:26.510948  130544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:02:26.633016  130544 ssh_runner.go:195] Run: sudo systemctl restart crio
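	The sed invocations above rewrite the CRI-O drop-in so it uses the registry.k8s.io/pause:3.9 pause image, the cgroupfs cgroup manager, and conmon_cgroup = "pod", then restart crio. The Go sketch below mirrors those substitutions on a toy config string and is illustrative only; the file contents are not the real 02-crio.conf.

	// Sketch: the effect of the sed edits above, expressed as string rewrites.
	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := `pause_image = "registry.k8s.io/pause:3.8"
	cgroup_manager = "systemd"
	conmon_cgroup = "system.slice"`

		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		// drop any existing conmon_cgroup line, then pin it to "pod"
		conf = regexp.MustCompile(`(?m)^\s*conmon_cgroup = .*\n?`).ReplaceAllString(conf, "")
		conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
			ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

		fmt.Println(conf)
	}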
	I0617 11:02:26.786888  130544 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 11:02:26.786968  130544 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 11:02:26.791686  130544 start.go:562] Will wait 60s for crictl version
	I0617 11:02:26.791748  130544 ssh_runner.go:195] Run: which crictl
	I0617 11:02:26.795634  130544 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 11:02:26.837840  130544 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 11:02:26.837922  130544 ssh_runner.go:195] Run: crio --version
	I0617 11:02:26.869330  130544 ssh_runner.go:195] Run: crio --version
	I0617 11:02:26.902388  130544 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0617 11:02:26.903806  130544 out.go:177]   - env NO_PROXY=192.168.39.134
	I0617 11:02:26.905120  130544 out.go:177]   - env NO_PROXY=192.168.39.134,192.168.39.104
	I0617 11:02:26.906328  130544 main.go:141] libmachine: (ha-064080-m03) Calling .GetIP
	I0617 11:02:26.908830  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:26.909161  130544 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:02:26.909192  130544 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:02:26.909393  130544 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0617 11:02:26.913602  130544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 11:02:26.928465  130544 mustload.go:65] Loading cluster: ha-064080
	I0617 11:02:26.928699  130544 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:02:26.929046  130544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:02:26.929094  130544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:02:26.944875  130544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41773
	I0617 11:02:26.945277  130544 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:02:26.945774  130544 main.go:141] libmachine: Using API Version  1
	I0617 11:02:26.945806  130544 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:02:26.946180  130544 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:02:26.946406  130544 main.go:141] libmachine: (ha-064080) Calling .GetState
	I0617 11:02:26.947952  130544 host.go:66] Checking if "ha-064080" exists ...
	I0617 11:02:26.948308  130544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:02:26.948355  130544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:02:26.963836  130544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32913
	I0617 11:02:26.964205  130544 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:02:26.964637  130544 main.go:141] libmachine: Using API Version  1
	I0617 11:02:26.964655  130544 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:02:26.964999  130544 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:02:26.965184  130544 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:02:26.965336  130544 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080 for IP: 192.168.39.168
	I0617 11:02:26.965349  130544 certs.go:194] generating shared ca certs ...
	I0617 11:02:26.965367  130544 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:02:26.965509  130544 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 11:02:26.965569  130544 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 11:02:26.965583  130544 certs.go:256] generating profile certs ...
	I0617 11:02:26.965682  130544 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/client.key
	I0617 11:02:26.965713  130544 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key.5a42fcf3
	I0617 11:02:26.965734  130544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt.5a42fcf3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.134 192.168.39.104 192.168.39.168 192.168.39.254]
	I0617 11:02:27.346654  130544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt.5a42fcf3 ...
	I0617 11:02:27.346687  130544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt.5a42fcf3: {Name:mkd4c6893142164db1329d97d9dea3d2cfee3f2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:02:27.346863  130544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key.5a42fcf3 ...
	I0617 11:02:27.346877  130544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key.5a42fcf3: {Name:mk595a3aab8d45ce8720d08cb91288e4dc42db0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:02:27.346949  130544 certs.go:381] copying /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt.5a42fcf3 -> /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt
	I0617 11:02:27.347091  130544 certs.go:385] copying /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key.5a42fcf3 -> /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key
	I0617 11:02:27.347224  130544 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.key
	I0617 11:02:27.347242  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0617 11:02:27.347255  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0617 11:02:27.347268  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0617 11:02:27.347280  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0617 11:02:27.347291  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0617 11:02:27.347303  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0617 11:02:27.347315  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0617 11:02:27.347327  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0617 11:02:27.347371  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 11:02:27.347397  130544 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 11:02:27.347406  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 11:02:27.347427  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 11:02:27.347448  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 11:02:27.347486  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 11:02:27.347523  130544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 11:02:27.347547  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem -> /usr/share/ca-certificates/120174.pem
	I0617 11:02:27.347561  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> /usr/share/ca-certificates/1201742.pem
	I0617 11:02:27.347574  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:02:27.347605  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:02:27.350599  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:02:27.351006  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:02:27.351029  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:02:27.351232  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:02:27.351485  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:02:27.351658  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:02:27.351837  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:02:27.423711  130544 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0617 11:02:27.429061  130544 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0617 11:02:27.441056  130544 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0617 11:02:27.445587  130544 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0617 11:02:27.457976  130544 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0617 11:02:27.462152  130544 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0617 11:02:27.473381  130544 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0617 11:02:27.477655  130544 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0617 11:02:27.488205  130544 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0617 11:02:27.492291  130544 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0617 11:02:27.503178  130544 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0617 11:02:27.507954  130544 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0617 11:02:27.519116  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 11:02:27.545769  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 11:02:27.570587  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 11:02:27.593992  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 11:02:27.620181  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0617 11:02:27.644500  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0617 11:02:27.670181  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 11:02:27.693656  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0617 11:02:27.718743  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 11:02:27.743939  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 11:02:27.769241  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 11:02:27.793600  130544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0617 11:02:27.809999  130544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0617 11:02:27.826764  130544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0617 11:02:27.843367  130544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0617 11:02:27.861074  130544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0617 11:02:27.877824  130544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0617 11:02:27.894136  130544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0617 11:02:27.910223  130544 ssh_runner.go:195] Run: openssl version
	I0617 11:02:27.916197  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 11:02:27.926817  130544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:02:27.931271  130544 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:02:27.931334  130544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:02:27.937173  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 11:02:27.948023  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 11:02:27.958752  130544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 11:02:27.963195  130544 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 11:02:27.963240  130544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 11:02:27.969255  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
	I0617 11:02:27.981676  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 11:02:27.993230  130544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 11:02:27.998102  130544 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 11:02:27.998141  130544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 11:02:28.004192  130544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 11:02:28.015790  130544 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 11:02:28.020007  130544 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0617 11:02:28.020072  130544 kubeadm.go:928] updating node {m03 192.168.39.168 8443 v1.30.1 crio true true} ...
	I0617 11:02:28.020165  130544 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-064080-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.168
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-064080 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 11:02:28.020193  130544 kube-vip.go:115] generating kube-vip config ...
	I0617 11:02:28.020225  130544 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0617 11:02:28.036731  130544 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0617 11:02:28.036788  130544 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
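Note (editor's illustration): the manifest above is the kube-vip static pod that each control-plane node runs to hold the HA virtual IP (192.168.39.254) and load-balance port 8443. As a rough sketch of how such a manifest can be rendered from a Go text/template, see below; this is not minikube's actual generator, and the VIP, Interface, and Port fields are hypothetical stand-ins for the values seen in the log.

	package main

	import (
		"os"
		"text/template"
	)

	// A trimmed-down template containing only the fields that vary per cluster.
	const kubeVipManifest = `apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - name: kube-vip
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    args: ["manager"]
	    env:
	    - name: address
	      value: "{{ .VIP }}"
	    - name: vip_interface
	      value: "{{ .Interface }}"
	    - name: port
	      value: "{{ .Port }}"
	  hostNetwork: true
	`

	func main() {
		t := template.Must(template.New("kube-vip").Parse(kubeVipManifest))
		data := struct {
			VIP, Interface, Port string
		}{VIP: "192.168.39.254", Interface: "eth0", Port: "8443"}
		if err := t.Execute(os.Stdout, data); err != nil {
			os.Exit(1)
		}
	}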
	I0617 11:02:28.036854  130544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0617 11:02:28.046754  130544 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0617 11:02:28.046811  130544 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0617 11:02:28.056894  130544 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0617 11:02:28.056915  130544 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0617 11:02:28.056924  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0617 11:02:28.056927  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0617 11:02:28.056938  130544 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0617 11:02:28.056993  130544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:02:28.057015  130544 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0617 11:02:28.057015  130544 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0617 11:02:28.074561  130544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0617 11:02:28.074594  130544 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0617 11:02:28.074617  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0617 11:02:28.074643  130544 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0617 11:02:28.074678  130544 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0617 11:02:28.074675  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0617 11:02:28.097581  130544 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0617 11:02:28.097617  130544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0617 11:02:28.975329  130544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0617 11:02:28.984902  130544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0617 11:02:29.002064  130544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 11:02:29.020433  130544 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0617 11:02:29.038500  130544 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0617 11:02:29.042765  130544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 11:02:29.056272  130544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:02:29.170338  130544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 11:02:29.187243  130544 host.go:66] Checking if "ha-064080" exists ...
	I0617 11:02:29.187679  130544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:02:29.187726  130544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:02:29.203199  130544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39109
	I0617 11:02:29.203699  130544 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:02:29.204218  130544 main.go:141] libmachine: Using API Version  1
	I0617 11:02:29.204240  130544 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:02:29.204546  130544 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:02:29.204729  130544 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:02:29.204905  130544 start.go:316] joinCluster: &{Name:ha-064080 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cluster
Name:ha-064080 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:02:29.205076  130544 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0617 11:02:29.205101  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:02:29.208123  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:02:29.208613  130544 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:02:29.208647  130544 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:02:29.208827  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:02:29.209010  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:02:29.209216  130544 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:02:29.209368  130544 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:02:29.376289  130544 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 11:02:29.376346  130544 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vqckf0.7wgygn8yyryvkydn --discovery-token-ca-cert-hash sha256:a750c130b3df91ed6d57229f5a5d5a2ee0acd56a757f499599f368bc07dbf207 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-064080-m03 --control-plane --apiserver-advertise-address=192.168.39.168 --apiserver-bind-port=8443"
	I0617 11:02:53.758250  130544 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vqckf0.7wgygn8yyryvkydn --discovery-token-ca-cert-hash sha256:a750c130b3df91ed6d57229f5a5d5a2ee0acd56a757f499599f368bc07dbf207 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-064080-m03 --control-plane --apiserver-advertise-address=192.168.39.168 --apiserver-bind-port=8443": (24.381868631s)
	I0617 11:02:53.758292  130544 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0617 11:02:54.363546  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-064080-m03 minikube.k8s.io/updated_at=2024_06_17T11_02_54_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6 minikube.k8s.io/name=ha-064080 minikube.k8s.io/primary=false
	I0617 11:02:54.502092  130544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-064080-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0617 11:02:54.621243  130544 start.go:318] duration metric: took 25.416333651s to joinCluster
	I0617 11:02:54.621344  130544 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 11:02:54.623072  130544 out.go:177] * Verifying Kubernetes components...
	I0617 11:02:54.621808  130544 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:02:54.624356  130544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:02:54.928732  130544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 11:02:54.976589  130544 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 11:02:54.976821  130544 kapi.go:59] client config for ha-064080: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/client.crt", KeyFile:"/home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/client.key", CAFile:"/home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfaf80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0617 11:02:54.976882  130544 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.134:8443
	I0617 11:02:54.977098  130544 node_ready.go:35] waiting up to 6m0s for node "ha-064080-m03" to be "Ready" ...
	I0617 11:02:54.977171  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:02:54.977177  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:54.977184  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:54.977190  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:54.980888  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:02:55.477835  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:02:55.477866  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:55.477878  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:55.477883  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:55.481461  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:02:55.977724  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:02:55.977748  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:55.977760  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:55.977764  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:55.983343  130544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0617 11:02:56.477632  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:02:56.477658  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:56.477668  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:56.477671  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:56.483165  130544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0617 11:02:56.977402  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:02:56.977423  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:56.977435  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:56.977439  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:56.981146  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:02:56.981717  130544 node_ready.go:53] node "ha-064080-m03" has status "Ready":"False"
	I0617 11:02:57.478133  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:02:57.478160  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:57.478169  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:57.478174  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:57.481394  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:02:57.977332  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:02:57.977357  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:57.977368  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:57.977373  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:57.980538  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:02:57.981163  130544 node_ready.go:49] node "ha-064080-m03" has status "Ready":"True"
	I0617 11:02:57.981181  130544 node_ready.go:38] duration metric: took 3.004068832s for node "ha-064080-m03" to be "Ready" ...
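Note (editor's illustration): the node_ready wait above polls GET /api/v1/nodes/<name> roughly every half second until the node reports Ready. A comparable loop written against k8s.io/client-go might look like the sketch below; this is an assumption-laden illustration, not minikube's round-tripper based helper, and the kubeconfig path, node name, and timeout are placeholders.

	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the node's NodeReady condition is True.
	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		for {
			n, err := cs.CoreV1().Nodes().Get(ctx, "ha-064080-m03", metav1.GetOptions{})
			if err == nil && nodeReady(n) {
				fmt.Println("node is Ready")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Fprintln(os.Stderr, "timed out waiting for node")
				os.Exit(1)
			case <-time.After(500 * time.Millisecond):
			}
		}
	}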
	I0617 11:02:57.981189  130544 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 11:02:57.981251  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods
	I0617 11:02:57.981260  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:57.981268  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:57.981273  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:57.988008  130544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0617 11:02:57.994247  130544 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xbhnm" in "kube-system" namespace to be "Ready" ...
	I0617 11:02:57.994341  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xbhnm
	I0617 11:02:57.994349  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:57.994357  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:57.994361  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:57.997345  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:02:57.997924  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:02:57.997939  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:57.997946  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:57.997950  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:58.000731  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:02:58.001299  130544 pod_ready.go:92] pod "coredns-7db6d8ff4d-xbhnm" in "kube-system" namespace has status "Ready":"True"
	I0617 11:02:58.001317  130544 pod_ready.go:81] duration metric: took 7.043245ms for pod "coredns-7db6d8ff4d-xbhnm" in "kube-system" namespace to be "Ready" ...
	I0617 11:02:58.001326  130544 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zv99k" in "kube-system" namespace to be "Ready" ...
	I0617 11:02:58.001380  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zv99k
	I0617 11:02:58.001387  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:58.001394  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:58.001399  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:58.004801  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:02:58.005785  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:02:58.005803  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:58.005810  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:58.005815  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:58.008950  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:02:58.009623  130544 pod_ready.go:92] pod "coredns-7db6d8ff4d-zv99k" in "kube-system" namespace has status "Ready":"True"
	I0617 11:02:58.009639  130544 pod_ready.go:81] duration metric: took 8.306009ms for pod "coredns-7db6d8ff4d-zv99k" in "kube-system" namespace to be "Ready" ...
	I0617 11:02:58.009648  130544 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-064080" in "kube-system" namespace to be "Ready" ...
	I0617 11:02:58.009709  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080
	I0617 11:02:58.009716  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:58.009722  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:58.009738  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:58.018113  130544 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0617 11:02:58.018873  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:02:58.018891  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:58.018899  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:58.018906  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:58.021503  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:02:58.022150  130544 pod_ready.go:92] pod "etcd-ha-064080" in "kube-system" namespace has status "Ready":"True"
	I0617 11:02:58.022172  130544 pod_ready.go:81] duration metric: took 12.51598ms for pod "etcd-ha-064080" in "kube-system" namespace to be "Ready" ...
	I0617 11:02:58.022181  130544 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-064080-m02" in "kube-system" namespace to be "Ready" ...
	I0617 11:02:58.022250  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m02
	I0617 11:02:58.022259  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:58.022265  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:58.022270  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:58.025096  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:02:58.025830  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:02:58.025844  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:58.025851  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:58.025855  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:58.028549  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:02:58.029269  130544 pod_ready.go:92] pod "etcd-ha-064080-m02" in "kube-system" namespace has status "Ready":"True"
	I0617 11:02:58.029286  130544 pod_ready.go:81] duration metric: took 7.099151ms for pod "etcd-ha-064080-m02" in "kube-system" namespace to be "Ready" ...
	I0617 11:02:58.029295  130544 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-064080-m03" in "kube-system" namespace to be "Ready" ...
	I0617 11:02:58.177735  130544 request.go:629] Waited for 148.339851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:02:58.177823  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:02:58.177845  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:58.177856  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:58.177862  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:58.181053  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:02:58.378135  130544 request.go:629] Waited for 196.2227ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:02:58.378216  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:02:58.378230  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:58.378243  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:58.378253  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:58.381451  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:02:58.577579  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:02:58.577605  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:58.577615  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:58.577618  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:58.581575  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:02:58.777922  130544 request.go:629] Waited for 195.390769ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:02:58.778019  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:02:58.778034  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:58.778046  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:58.778052  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:58.781491  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:02:59.030332  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:02:59.030362  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:59.030370  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:59.030376  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:59.034505  130544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0617 11:02:59.178008  130544 request.go:629] Waited for 142.332037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:02:59.178089  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:02:59.178094  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:59.178104  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:59.178110  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:59.181625  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:02:59.530426  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:02:59.530449  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:59.530457  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:59.530462  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:59.534300  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:02:59.578306  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:02:59.578330  130544 round_trippers.go:469] Request Headers:
	I0617 11:02:59.578339  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:02:59.578343  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:02:59.581973  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:00.029789  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:00.029813  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:00.029822  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:00.029830  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:00.034036  130544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0617 11:03:00.034811  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:00.034829  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:00.034839  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:00.034843  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:00.038006  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:00.038714  130544 pod_ready.go:102] pod "etcd-ha-064080-m03" in "kube-system" namespace has status "Ready":"False"
	I0617 11:03:00.529993  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:00.530018  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:00.530026  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:00.530031  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:00.533373  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:00.534207  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:00.534223  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:00.534230  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:00.534233  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:00.537139  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:01.030160  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:01.030191  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:01.030202  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:01.030207  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:01.033797  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:01.034766  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:01.034783  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:01.034790  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:01.034793  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:01.037769  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:01.529757  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:01.529783  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:01.529794  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:01.529800  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:01.533251  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:01.533916  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:01.533936  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:01.533946  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:01.533951  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:01.536991  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:02.030427  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:02.030452  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:02.030460  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:02.030464  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:02.034591  130544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0617 11:03:02.035319  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:02.035333  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:02.035340  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:02.035345  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:02.038729  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:02.039365  130544 pod_ready.go:102] pod "etcd-ha-064080-m03" in "kube-system" namespace has status "Ready":"False"
	I0617 11:03:02.529915  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:02.531841  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:02.531860  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:02.531868  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:02.535304  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:02.536246  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:02.536262  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:02.536269  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:02.536273  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:02.539028  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:03.030120  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:03.030142  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:03.030153  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:03.030160  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:03.033605  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:03.034300  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:03.034317  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:03.034324  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:03.034328  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:03.036991  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:03.529561  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:03.529583  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:03.529592  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:03.529597  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:03.532466  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:03.533366  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:03.533379  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:03.533385  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:03.533388  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:03.536103  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:04.030080  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:04.030103  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:04.030111  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:04.030115  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:04.033835  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:04.034519  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:04.034537  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:04.034544  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:04.034549  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:04.037538  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:04.530324  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:04.530350  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:04.530361  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:04.530367  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:04.534457  130544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0617 11:03:04.535199  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:04.535215  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:04.535223  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:04.535228  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:04.538147  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:04.538657  130544 pod_ready.go:102] pod "etcd-ha-064080-m03" in "kube-system" namespace has status "Ready":"False"
	I0617 11:03:05.030260  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:05.030286  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:05.030296  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:05.030300  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:05.036759  130544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0617 11:03:05.037325  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:05.037339  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:05.037347  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:05.037353  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:05.040015  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:05.529797  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:05.529820  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:05.529828  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:05.529832  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:05.533167  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:05.533782  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:05.533801  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:05.533811  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:05.533816  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:05.536491  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:06.029485  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:06.029511  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:06.029519  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:06.029524  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:06.032746  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:06.033527  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:06.033543  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:06.033550  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:06.033553  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:06.036533  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:06.530408  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:06.530433  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:06.530443  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:06.530450  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:06.534403  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:06.535100  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:06.535117  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:06.535125  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:06.535128  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:06.538264  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:06.538801  130544 pod_ready.go:102] pod "etcd-ha-064080-m03" in "kube-system" namespace has status "Ready":"False"
	I0617 11:03:07.029776  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:07.029806  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:07.029815  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:07.029819  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:07.033711  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:07.034504  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:07.034522  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:07.034529  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:07.034534  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:07.037447  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:07.530264  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:07.530979  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:07.530994  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:07.531001  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:07.534515  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:07.535314  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:07.535331  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:07.535341  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:07.535347  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:07.538490  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:08.029495  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:08.029519  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:08.029527  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:08.029532  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:08.032595  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:08.033648  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:08.033663  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:08.033670  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:08.033674  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:08.036222  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:08.530270  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:08.530300  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:08.530308  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:08.530312  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:08.533945  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:08.534577  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:08.534595  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:08.534602  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:08.534607  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:08.537359  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:09.030234  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:09.030261  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:09.030272  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:09.030278  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:09.033609  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:09.034281  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:09.034295  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:09.034302  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:09.034306  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:09.038850  130544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0617 11:03:09.039449  130544 pod_ready.go:102] pod "etcd-ha-064080-m03" in "kube-system" namespace has status "Ready":"False"
	I0617 11:03:09.529661  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:09.529685  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:09.529696  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:09.529702  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:09.533224  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:09.534098  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:09.534118  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:09.534130  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:09.534138  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:09.536844  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:10.029849  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:10.029874  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:10.029882  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:10.029885  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:10.034785  130544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0617 11:03:10.035846  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:10.035866  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:10.035877  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:10.035884  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:10.041125  130544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0617 11:03:10.529908  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:10.529934  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:10.529942  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:10.529948  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:10.533432  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:10.534022  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:10.534038  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:10.534045  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:10.534049  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:10.537102  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:11.029476  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/etcd-ha-064080-m03
	I0617 11:03:11.029499  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:11.029508  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:11.029511  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:11.032960  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:11.033850  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:11.033866  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:11.033873  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:11.033878  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:11.037003  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:11.037525  130544 pod_ready.go:92] pod "etcd-ha-064080-m03" in "kube-system" namespace has status "Ready":"True"
	I0617 11:03:11.037543  130544 pod_ready.go:81] duration metric: took 13.008242382s for pod "etcd-ha-064080-m03" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:11.037560  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-064080" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:11.037610  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-064080
	I0617 11:03:11.037618  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:11.037625  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:11.037630  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:11.040168  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:11.040649  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:03:11.040664  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:11.040670  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:11.040674  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:11.042899  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:11.043471  130544 pod_ready.go:92] pod "kube-apiserver-ha-064080" in "kube-system" namespace has status "Ready":"True"
	I0617 11:03:11.043493  130544 pod_ready.go:81] duration metric: took 5.925806ms for pod "kube-apiserver-ha-064080" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:11.043509  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-064080-m02" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:11.043582  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-064080-m02
	I0617 11:03:11.043598  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:11.043605  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:11.043609  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:11.046252  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:11.046790  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:03:11.046810  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:11.046820  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:11.046825  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:11.049907  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:11.050450  130544 pod_ready.go:92] pod "kube-apiserver-ha-064080-m02" in "kube-system" namespace has status "Ready":"True"
	I0617 11:03:11.050469  130544 pod_ready.go:81] duration metric: took 6.946564ms for pod "kube-apiserver-ha-064080-m02" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:11.050481  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-064080-m03" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:11.050550  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-064080-m03
	I0617 11:03:11.050561  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:11.050570  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:11.050587  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:11.053362  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:11.053882  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:11.053896  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:11.053903  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:11.053906  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:11.055951  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:11.056469  130544 pod_ready.go:92] pod "kube-apiserver-ha-064080-m03" in "kube-system" namespace has status "Ready":"True"
	I0617 11:03:11.056488  130544 pod_ready.go:81] duration metric: took 5.999556ms for pod "kube-apiserver-ha-064080-m03" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:11.056499  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-064080" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:11.056560  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-064080
	I0617 11:03:11.056570  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:11.056576  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:11.056579  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:11.058807  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:11.059285  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:03:11.059300  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:11.059310  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:11.059317  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:11.062249  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:11.062691  130544 pod_ready.go:92] pod "kube-controller-manager-ha-064080" in "kube-system" namespace has status "Ready":"True"
	I0617 11:03:11.062708  130544 pod_ready.go:81] duration metric: took 6.198978ms for pod "kube-controller-manager-ha-064080" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:11.062716  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-064080-m02" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:11.230137  130544 request.go:629] Waited for 167.33334ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-064080-m02
	I0617 11:03:11.230243  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-064080-m02
	I0617 11:03:11.230252  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:11.230259  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:11.230264  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:11.233702  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:11.429819  130544 request.go:629] Waited for 195.374298ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:03:11.429900  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:03:11.429909  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:11.429922  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:11.429932  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:11.433247  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:11.433994  130544 pod_ready.go:92] pod "kube-controller-manager-ha-064080-m02" in "kube-system" namespace has status "Ready":"True"
	I0617 11:03:11.434012  130544 pod_ready.go:81] duration metric: took 371.280201ms for pod "kube-controller-manager-ha-064080-m02" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:11.434027  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-064080-m03" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:11.630085  130544 request.go:629] Waited for 195.990584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-064080-m03
	I0617 11:03:11.630165  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-064080-m03
	I0617 11:03:11.630177  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:11.630188  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:11.630192  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:11.633910  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:11.830078  130544 request.go:629] Waited for 195.336696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:11.830245  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:11.830265  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:11.830274  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:11.830280  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:11.833253  130544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0617 11:03:11.833727  130544 pod_ready.go:92] pod "kube-controller-manager-ha-064080-m03" in "kube-system" namespace has status "Ready":"True"
	I0617 11:03:11.833745  130544 pod_ready.go:81] duration metric: took 399.711192ms for pod "kube-controller-manager-ha-064080-m03" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:11.833760  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dd48x" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:12.029735  130544 request.go:629] Waited for 195.885682ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dd48x
	I0617 11:03:12.029820  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dd48x
	I0617 11:03:12.029826  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:12.029833  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:12.029838  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:12.033462  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:12.229471  130544 request.go:629] Waited for 195.320421ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:03:12.229592  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:03:12.229607  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:12.229622  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:12.229627  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:12.233005  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:12.233612  130544 pod_ready.go:92] pod "kube-proxy-dd48x" in "kube-system" namespace has status "Ready":"True"
	I0617 11:03:12.233633  130544 pod_ready.go:81] duration metric: took 399.866858ms for pod "kube-proxy-dd48x" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:12.233642  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gsph4" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:12.429606  130544 request.go:629] Waited for 195.875153ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gsph4
	I0617 11:03:12.429698  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gsph4
	I0617 11:03:12.429720  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:12.429732  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:12.429744  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:12.433258  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:12.630263  130544 request.go:629] Waited for 196.294759ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:12.630379  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:12.630392  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:12.630402  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:12.630411  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:12.633843  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:12.634564  130544 pod_ready.go:92] pod "kube-proxy-gsph4" in "kube-system" namespace has status "Ready":"True"
	I0617 11:03:12.634584  130544 pod_ready.go:81] duration metric: took 400.935712ms for pod "kube-proxy-gsph4" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:12.634594  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l55dg" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:12.829973  130544 request.go:629] Waited for 195.299876ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l55dg
	I0617 11:03:12.830058  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l55dg
	I0617 11:03:12.830069  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:12.830079  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:12.830086  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:12.835375  130544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0617 11:03:13.030096  130544 request.go:629] Waited for 193.378159ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:03:13.030154  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:03:13.030159  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:13.030172  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:13.030180  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:13.033911  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:13.034559  130544 pod_ready.go:92] pod "kube-proxy-l55dg" in "kube-system" namespace has status "Ready":"True"
	I0617 11:03:13.034580  130544 pod_ready.go:81] duration metric: took 399.971993ms for pod "kube-proxy-l55dg" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:13.034594  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-064080" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:13.229748  130544 request.go:629] Waited for 195.082264ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-064080
	I0617 11:03:13.229832  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-064080
	I0617 11:03:13.229841  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:13.229848  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:13.229856  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:13.233062  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:13.430214  130544 request.go:629] Waited for 196.300524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:03:13.430308  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080
	I0617 11:03:13.430320  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:13.430332  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:13.430342  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:13.434438  130544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0617 11:03:13.435749  130544 pod_ready.go:92] pod "kube-scheduler-ha-064080" in "kube-system" namespace has status "Ready":"True"
	I0617 11:03:13.435780  130544 pod_ready.go:81] duration metric: took 401.178173ms for pod "kube-scheduler-ha-064080" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:13.435792  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-064080-m02" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:13.629874  130544 request.go:629] Waited for 193.97052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-064080-m02
	I0617 11:03:13.629941  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-064080-m02
	I0617 11:03:13.629946  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:13.629954  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:13.629959  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:13.633875  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:13.830050  130544 request.go:629] Waited for 195.38029ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:03:13.830130  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m02
	I0617 11:03:13.830136  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:13.830143  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:13.830149  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:13.833452  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:13.834027  130544 pod_ready.go:92] pod "kube-scheduler-ha-064080-m02" in "kube-system" namespace has status "Ready":"True"
	I0617 11:03:13.834046  130544 pod_ready.go:81] duration metric: took 398.247321ms for pod "kube-scheduler-ha-064080-m02" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:13.834055  130544 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-064080-m03" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:14.030151  130544 request.go:629] Waited for 196.001537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-064080-m03
	I0617 11:03:14.030214  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-064080-m03
	I0617 11:03:14.030220  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:14.030227  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:14.030231  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:14.033564  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:14.229710  130544 request.go:629] Waited for 195.337834ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:14.229776  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes/ha-064080-m03
	I0617 11:03:14.229783  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:14.229792  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:14.229799  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:14.232943  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:14.233953  130544 pod_ready.go:92] pod "kube-scheduler-ha-064080-m03" in "kube-system" namespace has status "Ready":"True"
	I0617 11:03:14.233977  130544 pod_ready.go:81] duration metric: took 399.914748ms for pod "kube-scheduler-ha-064080-m03" in "kube-system" namespace to be "Ready" ...
	I0617 11:03:14.233992  130544 pod_ready.go:38] duration metric: took 16.252791367s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 11:03:14.234013  130544 api_server.go:52] waiting for apiserver process to appear ...
	I0617 11:03:14.234081  130544 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 11:03:14.249706  130544 api_server.go:72] duration metric: took 19.628325256s to wait for apiserver process to appear ...
	I0617 11:03:14.249730  130544 api_server.go:88] waiting for apiserver healthz status ...
	I0617 11:03:14.249748  130544 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0617 11:03:14.254222  130544 api_server.go:279] https://192.168.39.134:8443/healthz returned 200:
	ok
	I0617 11:03:14.254277  130544 round_trippers.go:463] GET https://192.168.39.134:8443/version
	I0617 11:03:14.254285  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:14.254292  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:14.254295  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:14.255440  130544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0617 11:03:14.255530  130544 api_server.go:141] control plane version: v1.30.1
	I0617 11:03:14.255547  130544 api_server.go:131] duration metric: took 5.810118ms to wait for apiserver health ...
	I0617 11:03:14.255553  130544 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 11:03:14.429974  130544 request.go:629] Waited for 174.330557ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods
	I0617 11:03:14.430051  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods
	I0617 11:03:14.430058  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:14.430070  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:14.430076  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:14.438031  130544 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0617 11:03:14.448492  130544 system_pods.go:59] 24 kube-system pods found
	I0617 11:03:14.448519  130544 system_pods.go:61] "coredns-7db6d8ff4d-xbhnm" [be37a6ec-2a49-4a56-b8a3-0da865edb05d] Running
	I0617 11:03:14.448524  130544 system_pods.go:61] "coredns-7db6d8ff4d-zv99k" [c2453fd4-894d-4212-bc48-1803e28ddba8] Running
	I0617 11:03:14.448528  130544 system_pods.go:61] "etcd-ha-064080" [f7a1e80e-8ebc-496b-8919-ebf99a8dd4b4] Running
	I0617 11:03:14.448531  130544 system_pods.go:61] "etcd-ha-064080-m02" [7de6c88f-a0b9-4fa3-b4aa-e964191aa4e5] Running
	I0617 11:03:14.448535  130544 system_pods.go:61] "etcd-ha-064080-m03" [228b9fe2-a269-42b7-8c5e-09fdd0ff9b3a] Running
	I0617 11:03:14.448539  130544 system_pods.go:61] "kindnet-48mb7" [67422049-6637-4ca3-8bd1-2b47a265829d] Running
	I0617 11:03:14.448542  130544 system_pods.go:61] "kindnet-5mg7w" [0d4c6fae-77e8-4e1a-b96f-166696984275] Running
	I0617 11:03:14.448545  130544 system_pods.go:61] "kindnet-7cqp4" [f4671f39-ca07-4520-bc35-dce8e53318de] Running
	I0617 11:03:14.448548  130544 system_pods.go:61] "kube-apiserver-ha-064080" [fd326be1-2b78-41e8-9b57-138ffdadac71] Running
	I0617 11:03:14.448552  130544 system_pods.go:61] "kube-apiserver-ha-064080-m02" [74164e88-591d-490e-b4f9-1d8ea635cd2d] Running
	I0617 11:03:14.448555  130544 system_pods.go:61] "kube-apiserver-ha-064080-m03" [8d441ecd-ed28-42b3-a5fc-38b9f8acd9fe] Running
	I0617 11:03:14.448558  130544 system_pods.go:61] "kube-controller-manager-ha-064080" [142a6154-fcbf-4d5d-a222-21d1b46720cb] Running
	I0617 11:03:14.448561  130544 system_pods.go:61] "kube-controller-manager-ha-064080-m02" [f096dd77-2f79-479e-bd06-b02c942200c6] Running
	I0617 11:03:14.448564  130544 system_pods.go:61] "kube-controller-manager-ha-064080-m03" [e3289fce-4b45-4c3d-b826-628d6951e78c] Running
	I0617 11:03:14.448567  130544 system_pods.go:61] "kube-proxy-dd48x" [e1bd1d47-a8a5-47a5-820c-dd86f7ea7765] Running
	I0617 11:03:14.448570  130544 system_pods.go:61] "kube-proxy-gsph4" [541b12cf-3e15-45e1-8c97-0c28e8b17e2a] Running
	I0617 11:03:14.448573  130544 system_pods.go:61] "kube-proxy-l55dg" [1d827d6c-0432-4162-924c-d43b66b08c26] Running
	I0617 11:03:14.448576  130544 system_pods.go:61] "kube-scheduler-ha-064080" [f9e62714-7ec7-47a9-ab16-6afada18c6d8] Running
	I0617 11:03:14.448580  130544 system_pods.go:61] "kube-scheduler-ha-064080-m02" [ec804903-8a64-4a3d-8843-9d2ec21d7158] Running
	I0617 11:03:14.448583  130544 system_pods.go:61] "kube-scheduler-ha-064080-m03" [e33dbdc2-c3b4-489d-8fe0-e458da065d42] Running
	I0617 11:03:14.448586  130544 system_pods.go:61] "kube-vip-ha-064080" [6b9259b1-ee46-4493-ba10-dcb32da03f57] Running
	I0617 11:03:14.448589  130544 system_pods.go:61] "kube-vip-ha-064080-m02" [8a4ad095-97bf-4a1f-8579-9e6a564f24ed] Running
	I0617 11:03:14.448592  130544 system_pods.go:61] "kube-vip-ha-064080-m03" [a6754167-2759-44c2-bdb6-2fe9d8b601fd] Running
	I0617 11:03:14.448595  130544 system_pods.go:61] "storage-provisioner" [5646fca8-9ebc-47c1-b5ff-c87b0ed800d8] Running
	I0617 11:03:14.448601  130544 system_pods.go:74] duration metric: took 193.042133ms to wait for pod list to return data ...
	I0617 11:03:14.448610  130544 default_sa.go:34] waiting for default service account to be created ...
	I0617 11:03:14.629501  130544 request.go:629] Waited for 180.813341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/default/serviceaccounts
	I0617 11:03:14.629566  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/default/serviceaccounts
	I0617 11:03:14.629571  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:14.629578  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:14.629583  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:14.632904  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:14.633034  130544 default_sa.go:45] found service account: "default"
	I0617 11:03:14.633051  130544 default_sa.go:55] duration metric: took 184.434282ms for default service account to be created ...
	I0617 11:03:14.633062  130544 system_pods.go:116] waiting for k8s-apps to be running ...
	I0617 11:03:14.830413  130544 request.go:629] Waited for 197.271917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods
	I0617 11:03:14.830474  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/namespaces/kube-system/pods
	I0617 11:03:14.830480  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:14.830488  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:14.830492  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:14.837111  130544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0617 11:03:14.844000  130544 system_pods.go:86] 24 kube-system pods found
	I0617 11:03:14.844025  130544 system_pods.go:89] "coredns-7db6d8ff4d-xbhnm" [be37a6ec-2a49-4a56-b8a3-0da865edb05d] Running
	I0617 11:03:14.844030  130544 system_pods.go:89] "coredns-7db6d8ff4d-zv99k" [c2453fd4-894d-4212-bc48-1803e28ddba8] Running
	I0617 11:03:14.844034  130544 system_pods.go:89] "etcd-ha-064080" [f7a1e80e-8ebc-496b-8919-ebf99a8dd4b4] Running
	I0617 11:03:14.844038  130544 system_pods.go:89] "etcd-ha-064080-m02" [7de6c88f-a0b9-4fa3-b4aa-e964191aa4e5] Running
	I0617 11:03:14.844042  130544 system_pods.go:89] "etcd-ha-064080-m03" [228b9fe2-a269-42b7-8c5e-09fdd0ff9b3a] Running
	I0617 11:03:14.844047  130544 system_pods.go:89] "kindnet-48mb7" [67422049-6637-4ca3-8bd1-2b47a265829d] Running
	I0617 11:03:14.844051  130544 system_pods.go:89] "kindnet-5mg7w" [0d4c6fae-77e8-4e1a-b96f-166696984275] Running
	I0617 11:03:14.844055  130544 system_pods.go:89] "kindnet-7cqp4" [f4671f39-ca07-4520-bc35-dce8e53318de] Running
	I0617 11:03:14.844059  130544 system_pods.go:89] "kube-apiserver-ha-064080" [fd326be1-2b78-41e8-9b57-138ffdadac71] Running
	I0617 11:03:14.844063  130544 system_pods.go:89] "kube-apiserver-ha-064080-m02" [74164e88-591d-490e-b4f9-1d8ea635cd2d] Running
	I0617 11:03:14.844067  130544 system_pods.go:89] "kube-apiserver-ha-064080-m03" [8d441ecd-ed28-42b3-a5fc-38b9f8acd9fe] Running
	I0617 11:03:14.844073  130544 system_pods.go:89] "kube-controller-manager-ha-064080" [142a6154-fcbf-4d5d-a222-21d1b46720cb] Running
	I0617 11:03:14.844081  130544 system_pods.go:89] "kube-controller-manager-ha-064080-m02" [f096dd77-2f79-479e-bd06-b02c942200c6] Running
	I0617 11:03:14.844086  130544 system_pods.go:89] "kube-controller-manager-ha-064080-m03" [e3289fce-4b45-4c3d-b826-628d6951e78c] Running
	I0617 11:03:14.844090  130544 system_pods.go:89] "kube-proxy-dd48x" [e1bd1d47-a8a5-47a5-820c-dd86f7ea7765] Running
	I0617 11:03:14.844094  130544 system_pods.go:89] "kube-proxy-gsph4" [541b12cf-3e15-45e1-8c97-0c28e8b17e2a] Running
	I0617 11:03:14.844102  130544 system_pods.go:89] "kube-proxy-l55dg" [1d827d6c-0432-4162-924c-d43b66b08c26] Running
	I0617 11:03:14.844106  130544 system_pods.go:89] "kube-scheduler-ha-064080" [f9e62714-7ec7-47a9-ab16-6afada18c6d8] Running
	I0617 11:03:14.844112  130544 system_pods.go:89] "kube-scheduler-ha-064080-m02" [ec804903-8a64-4a3d-8843-9d2ec21d7158] Running
	I0617 11:03:14.844116  130544 system_pods.go:89] "kube-scheduler-ha-064080-m03" [e33dbdc2-c3b4-489d-8fe0-e458da065d42] Running
	I0617 11:03:14.844122  130544 system_pods.go:89] "kube-vip-ha-064080" [6b9259b1-ee46-4493-ba10-dcb32da03f57] Running
	I0617 11:03:14.844125  130544 system_pods.go:89] "kube-vip-ha-064080-m02" [8a4ad095-97bf-4a1f-8579-9e6a564f24ed] Running
	I0617 11:03:14.844130  130544 system_pods.go:89] "kube-vip-ha-064080-m03" [a6754167-2759-44c2-bdb6-2fe9d8b601fd] Running
	I0617 11:03:14.844134  130544 system_pods.go:89] "storage-provisioner" [5646fca8-9ebc-47c1-b5ff-c87b0ed800d8] Running
	I0617 11:03:14.844143  130544 system_pods.go:126] duration metric: took 211.071081ms to wait for k8s-apps to be running ...
	I0617 11:03:14.844150  130544 system_svc.go:44] waiting for kubelet service to be running ....
	I0617 11:03:14.844195  130544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:03:14.860938  130544 system_svc.go:56] duration metric: took 16.775634ms WaitForService to wait for kubelet
	I0617 11:03:14.860973  130544 kubeadm.go:576] duration metric: took 20.239595677s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 11:03:14.860999  130544 node_conditions.go:102] verifying NodePressure condition ...
	I0617 11:03:15.030462  130544 request.go:629] Waited for 169.336616ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.134:8443/api/v1/nodes
	I0617 11:03:15.030529  130544 round_trippers.go:463] GET https://192.168.39.134:8443/api/v1/nodes
	I0617 11:03:15.030541  130544 round_trippers.go:469] Request Headers:
	I0617 11:03:15.030552  130544 round_trippers.go:473]     Accept: application/json, */*
	I0617 11:03:15.030563  130544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0617 11:03:15.033962  130544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0617 11:03:15.035161  130544 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 11:03:15.035183  130544 node_conditions.go:123] node cpu capacity is 2
	I0617 11:03:15.035200  130544 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 11:03:15.035206  130544 node_conditions.go:123] node cpu capacity is 2
	I0617 11:03:15.035212  130544 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 11:03:15.035221  130544 node_conditions.go:123] node cpu capacity is 2
	I0617 11:03:15.035227  130544 node_conditions.go:105] duration metric: took 174.222144ms to run NodePressure ...
	I0617 11:03:15.035245  130544 start.go:240] waiting for startup goroutines ...
	I0617 11:03:15.035270  130544 start.go:254] writing updated cluster config ...
	I0617 11:03:15.035660  130544 ssh_runner.go:195] Run: rm -f paused
	I0617 11:03:15.086530  130544 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0617 11:03:15.088850  130544 out.go:177] * Done! kubectl is now configured to use "ha-064080" cluster and "default" namespace by default
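	The wait loops recorded above poll each control-plane pod's Ready condition, probe the apiserver /healthz endpoint, and read node capacity fields through the Kubernetes API. A minimal sketch of equivalent manual checks, assuming kubectl is on the PATH and the "ha-064080" context from this run is still present in the kubeconfig (pod and node names are taken from the log above, not guaranteed to exist elsewhere):
	
	# Ready condition of one control-plane pod (what the pod_ready loop above checks)
	kubectl --context ha-064080 -n kube-system get pod etcd-ha-064080-m03 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	
	# apiserver health, equivalent to the /healthz probe in the log
	kubectl --context ha-064080 get --raw /healthz
	
	# node capacity fields consulted for the NodePressure check
	kubectl --context ha-064080 get nodes \
	  -o jsonpath='{range .items[*]}{.metadata.name}{" cpu="}{.status.capacity.cpu}{" ephemeral-storage="}{.status.capacity.ephemeral-storage}{"\n"}{end}'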
	
	
	==> CRI-O <==
	Jun 17 11:07:44 ha-064080 crio[680]: time="2024-06-17 11:07:44.465739765Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718622464465717919,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e1f29a56-68ca-47c5-97a1-ee73e98c6de9 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:07:44 ha-064080 crio[680]: time="2024-06-17 11:07:44.466442809Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b47ab874-d666-4902-b861-39dc7c3eb5ab name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:07:44 ha-064080 crio[680]: time="2024-06-17 11:07:44.466498589Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b47ab874-d666-4902-b861-39dc7c3eb5ab name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:07:44 ha-064080 crio[680]: time="2024-06-17 11:07:44.466755201Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1a562b9195d78591133b90abc121faa5dbf34feac5066f4f821669a5b8c27e85,PodSandboxId:32924073f320b5367b28757d06fe232b7af64ccf6539c044b32541c03c8b9cc7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718622197449697447,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-89r9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1a8712a-2ef7-4400-98c9-5cee97c0d721,},Annotations:map[string]string{io.kubernetes.container.hash: 85c5faa6,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3628888540ea5d9ce507b92a3b2e929cf72c29f17271ad882b6d18ce4cf6328,PodSandboxId:20be829b9ffef66a57eb936abd30f0a0daa6277806fc399919edde5c9193aa94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718622049377736889,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xbhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be37a6ec-2a49-4a56-b8a3-0da865edb05d,},Annotations:map[string]string{io.kubernetes.container.hash: caa2bf79,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10061c1b3dd4f2865f83bf729b221fef3435324d6cef9ceb1a6631e0ccefa31c,PodSandboxId:54a9c95a1ef70b178265a9c78e9dbcddfb9f8cb7ddc312e0e324a4f449b6ebc9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718622049372976570,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zv99k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
c2453fd4-894d-4212-bc48-1803e28ddba8,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9e113a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb9fa67df5a3f15517f0cc5493139c9ec692bbadbef748f1315698a8ae05601f,PodSandboxId:f9df57723b165a731e239a6ef5aa2bc8caad54a36061dfb7afcd1021c1962f8b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1718622049320124723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5646fca8-9ebc-47c1-b5ff-c87b0ed800d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75be2958,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be33376c9348ffc6f1e2f31be21508d4aa16ebb1729b2780dabed95ba3ec9bbc,PodSandboxId:f67453c7d28830b38751fef3fd549d9fc1c2196b59ab402fdb76c2baae9174af,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1718622047566527858,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-48mb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67422049-6637-4ca3-8bd1-2b47a265829d,},Annotations:map[string]string{io.kubernetes.container.hash: 6d02cd67,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8852bc2fd7b618e61e270006b27e8557aaf8230a9278a60245e25a23732a83eb,PodSandboxId:78661140f722ccccbbef01859ed0a403a118690cd55dd92f4d2cf08d1c03af3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171862204
5688267141,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dd48x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1bd1d47-a8a5-47a5-820c-dd86f7ea7765,},Annotations:map[string]string{io.kubernetes.container.hash: 8b6be506,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24495c319c5c94afe6d0b59a3e9bc367b4539472e5846002db4fc1b802fac288,PodSandboxId:502e1e8fec2b89c90310f59069521c2fdde5e165e725bec6e1cbab4ef89951dd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17186220280
13380615,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ffa31c75020c2c61ed38418bc6b9660,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddf5516bbfc1d7ca0c4a0ebc2026888f4c7754891f8a6cfa30b49ea80c4c6a1b,PodSandboxId:5f8d58d694025bb9c7d62e4497344e57a4f85fbaaacc72882f259fd69bf8b688,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718622025755316625,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a91621493b7895ffb468d74d39c887,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be01152b9ab18f70b88322e4262f33d332dd8aa951d6262c8ac130261de6479d,PodSandboxId:4b79ce1b27f110ccadaad87cef79c43a9db99fbaa28089b3617bf2d74bb5b811,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718622025707947555,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21807c08d0f93f57866ad62dca0e176d,},Annotations:map[string]string{io.kubernetes.container.hash: 8e9320c4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecbb08a618aa76655e33c89e573535ed17f386cc522fcc35722eeb4ad859a1ad,PodSandboxId:7293d250b3e0dd840434d7afd153d17ac7842ec4f356edd9bac3f40f6603de1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718622025699829962,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ca5c8841cd25b2122df7e1cad8d883e,},Annotations:map[string]string{io.kubernetes.container.hash: a022c9c1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60cc5a9cf66217b34591b28809211824808cb7da50dd0c7971be5bd514e3b328,PodSandboxId:cb4974ce47c357bdbcfd6dd322289bd64cf2cbb3c4a7ad3e2ee523444ebfc04e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718622025592353826,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99603afdeee0e2b8645e4cb7c5a1ed41,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b47ab874-d666-4902-b861-39dc7c3eb5ab name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:07:44 ha-064080 crio[680]: time="2024-06-17 11:07:44.513033682Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2312e005-61cc-481b-8521-00529d6b90d8 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:07:44 ha-064080 crio[680]: time="2024-06-17 11:07:44.513130173Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2312e005-61cc-481b-8521-00529d6b90d8 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:07:44 ha-064080 crio[680]: time="2024-06-17 11:07:44.514415939Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ad1a4f6c-ba52-4467-be67-f6f8cef9b797 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:07:44 ha-064080 crio[680]: time="2024-06-17 11:07:44.514969778Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718622464514948241,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ad1a4f6c-ba52-4467-be67-f6f8cef9b797 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:07:44 ha-064080 crio[680]: time="2024-06-17 11:07:44.515526211Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1c158e15-b588-4ff5-b26f-dc5e0c6c9747 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:07:44 ha-064080 crio[680]: time="2024-06-17 11:07:44.515581655Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1c158e15-b588-4ff5-b26f-dc5e0c6c9747 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:07:44 ha-064080 crio[680]: time="2024-06-17 11:07:44.515932743Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1a562b9195d78591133b90abc121faa5dbf34feac5066f4f821669a5b8c27e85,PodSandboxId:32924073f320b5367b28757d06fe232b7af64ccf6539c044b32541c03c8b9cc7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718622197449697447,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-89r9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1a8712a-2ef7-4400-98c9-5cee97c0d721,},Annotations:map[string]string{io.kubernetes.container.hash: 85c5faa6,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3628888540ea5d9ce507b92a3b2e929cf72c29f17271ad882b6d18ce4cf6328,PodSandboxId:20be829b9ffef66a57eb936abd30f0a0daa6277806fc399919edde5c9193aa94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718622049377736889,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xbhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be37a6ec-2a49-4a56-b8a3-0da865edb05d,},Annotations:map[string]string{io.kubernetes.container.hash: caa2bf79,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10061c1b3dd4f2865f83bf729b221fef3435324d6cef9ceb1a6631e0ccefa31c,PodSandboxId:54a9c95a1ef70b178265a9c78e9dbcddfb9f8cb7ddc312e0e324a4f449b6ebc9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718622049372976570,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zv99k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
c2453fd4-894d-4212-bc48-1803e28ddba8,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9e113a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb9fa67df5a3f15517f0cc5493139c9ec692bbadbef748f1315698a8ae05601f,PodSandboxId:f9df57723b165a731e239a6ef5aa2bc8caad54a36061dfb7afcd1021c1962f8b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1718622049320124723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5646fca8-9ebc-47c1-b5ff-c87b0ed800d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75be2958,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be33376c9348ffc6f1e2f31be21508d4aa16ebb1729b2780dabed95ba3ec9bbc,PodSandboxId:f67453c7d28830b38751fef3fd549d9fc1c2196b59ab402fdb76c2baae9174af,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1718622047566527858,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-48mb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67422049-6637-4ca3-8bd1-2b47a265829d,},Annotations:map[string]string{io.kubernetes.container.hash: 6d02cd67,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8852bc2fd7b618e61e270006b27e8557aaf8230a9278a60245e25a23732a83eb,PodSandboxId:78661140f722ccccbbef01859ed0a403a118690cd55dd92f4d2cf08d1c03af3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171862204
5688267141,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dd48x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1bd1d47-a8a5-47a5-820c-dd86f7ea7765,},Annotations:map[string]string{io.kubernetes.container.hash: 8b6be506,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24495c319c5c94afe6d0b59a3e9bc367b4539472e5846002db4fc1b802fac288,PodSandboxId:502e1e8fec2b89c90310f59069521c2fdde5e165e725bec6e1cbab4ef89951dd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17186220280
13380615,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ffa31c75020c2c61ed38418bc6b9660,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddf5516bbfc1d7ca0c4a0ebc2026888f4c7754891f8a6cfa30b49ea80c4c6a1b,PodSandboxId:5f8d58d694025bb9c7d62e4497344e57a4f85fbaaacc72882f259fd69bf8b688,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718622025755316625,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a91621493b7895ffb468d74d39c887,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be01152b9ab18f70b88322e4262f33d332dd8aa951d6262c8ac130261de6479d,PodSandboxId:4b79ce1b27f110ccadaad87cef79c43a9db99fbaa28089b3617bf2d74bb5b811,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718622025707947555,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21807c08d0f93f57866ad62dca0e176d,},Annotations:map[string]string{io.kubernetes.container.hash: 8e9320c4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecbb08a618aa76655e33c89e573535ed17f386cc522fcc35722eeb4ad859a1ad,PodSandboxId:7293d250b3e0dd840434d7afd153d17ac7842ec4f356edd9bac3f40f6603de1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718622025699829962,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ca5c8841cd25b2122df7e1cad8d883e,},Annotations:map[string]string{io.kubernetes.container.hash: a022c9c1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60cc5a9cf66217b34591b28809211824808cb7da50dd0c7971be5bd514e3b328,PodSandboxId:cb4974ce47c357bdbcfd6dd322289bd64cf2cbb3c4a7ad3e2ee523444ebfc04e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718622025592353826,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99603afdeee0e2b8645e4cb7c5a1ed41,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1c158e15-b588-4ff5-b26f-dc5e0c6c9747 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:07:44 ha-064080 crio[680]: time="2024-06-17 11:07:44.568620439Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=037cfb39-625a-446c-8716-af26ca82b694 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:07:44 ha-064080 crio[680]: time="2024-06-17 11:07:44.568815567Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=037cfb39-625a-446c-8716-af26ca82b694 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:07:44 ha-064080 crio[680]: time="2024-06-17 11:07:44.579086957Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d0e2461e-dc6c-4760-a7dc-6efab0f157b4 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:07:44 ha-064080 crio[680]: time="2024-06-17 11:07:44.579929298Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718622464579826042,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d0e2461e-dc6c-4760-a7dc-6efab0f157b4 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:07:44 ha-064080 crio[680]: time="2024-06-17 11:07:44.580609800Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7a7f0c61-34f5-4a30-822e-a3c3a5a2b6a6 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:07:44 ha-064080 crio[680]: time="2024-06-17 11:07:44.580710106Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7a7f0c61-34f5-4a30-822e-a3c3a5a2b6a6 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:07:44 ha-064080 crio[680]: time="2024-06-17 11:07:44.581131584Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1a562b9195d78591133b90abc121faa5dbf34feac5066f4f821669a5b8c27e85,PodSandboxId:32924073f320b5367b28757d06fe232b7af64ccf6539c044b32541c03c8b9cc7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718622197449697447,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-89r9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1a8712a-2ef7-4400-98c9-5cee97c0d721,},Annotations:map[string]string{io.kubernetes.container.hash: 85c5faa6,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3628888540ea5d9ce507b92a3b2e929cf72c29f17271ad882b6d18ce4cf6328,PodSandboxId:20be829b9ffef66a57eb936abd30f0a0daa6277806fc399919edde5c9193aa94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718622049377736889,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xbhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be37a6ec-2a49-4a56-b8a3-0da865edb05d,},Annotations:map[string]string{io.kubernetes.container.hash: caa2bf79,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10061c1b3dd4f2865f83bf729b221fef3435324d6cef9ceb1a6631e0ccefa31c,PodSandboxId:54a9c95a1ef70b178265a9c78e9dbcddfb9f8cb7ddc312e0e324a4f449b6ebc9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718622049372976570,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zv99k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
c2453fd4-894d-4212-bc48-1803e28ddba8,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9e113a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb9fa67df5a3f15517f0cc5493139c9ec692bbadbef748f1315698a8ae05601f,PodSandboxId:f9df57723b165a731e239a6ef5aa2bc8caad54a36061dfb7afcd1021c1962f8b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1718622049320124723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5646fca8-9ebc-47c1-b5ff-c87b0ed800d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75be2958,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be33376c9348ffc6f1e2f31be21508d4aa16ebb1729b2780dabed95ba3ec9bbc,PodSandboxId:f67453c7d28830b38751fef3fd549d9fc1c2196b59ab402fdb76c2baae9174af,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1718622047566527858,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-48mb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67422049-6637-4ca3-8bd1-2b47a265829d,},Annotations:map[string]string{io.kubernetes.container.hash: 6d02cd67,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8852bc2fd7b618e61e270006b27e8557aaf8230a9278a60245e25a23732a83eb,PodSandboxId:78661140f722ccccbbef01859ed0a403a118690cd55dd92f4d2cf08d1c03af3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171862204
5688267141,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dd48x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1bd1d47-a8a5-47a5-820c-dd86f7ea7765,},Annotations:map[string]string{io.kubernetes.container.hash: 8b6be506,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24495c319c5c94afe6d0b59a3e9bc367b4539472e5846002db4fc1b802fac288,PodSandboxId:502e1e8fec2b89c90310f59069521c2fdde5e165e725bec6e1cbab4ef89951dd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17186220280
13380615,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ffa31c75020c2c61ed38418bc6b9660,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddf5516bbfc1d7ca0c4a0ebc2026888f4c7754891f8a6cfa30b49ea80c4c6a1b,PodSandboxId:5f8d58d694025bb9c7d62e4497344e57a4f85fbaaacc72882f259fd69bf8b688,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718622025755316625,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a91621493b7895ffb468d74d39c887,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be01152b9ab18f70b88322e4262f33d332dd8aa951d6262c8ac130261de6479d,PodSandboxId:4b79ce1b27f110ccadaad87cef79c43a9db99fbaa28089b3617bf2d74bb5b811,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718622025707947555,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21807c08d0f93f57866ad62dca0e176d,},Annotations:map[string]string{io.kubernetes.container.hash: 8e9320c4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecbb08a618aa76655e33c89e573535ed17f386cc522fcc35722eeb4ad859a1ad,PodSandboxId:7293d250b3e0dd840434d7afd153d17ac7842ec4f356edd9bac3f40f6603de1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718622025699829962,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ca5c8841cd25b2122df7e1cad8d883e,},Annotations:map[string]string{io.kubernetes.container.hash: a022c9c1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60cc5a9cf66217b34591b28809211824808cb7da50dd0c7971be5bd514e3b328,PodSandboxId:cb4974ce47c357bdbcfd6dd322289bd64cf2cbb3c4a7ad3e2ee523444ebfc04e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718622025592353826,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99603afdeee0e2b8645e4cb7c5a1ed41,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7a7f0c61-34f5-4a30-822e-a3c3a5a2b6a6 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:07:44 ha-064080 crio[680]: time="2024-06-17 11:07:44.624796510Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a16edbb6-7615-44df-a71f-3abcc48b5ede name=/runtime.v1.RuntimeService/Version
	Jun 17 11:07:44 ha-064080 crio[680]: time="2024-06-17 11:07:44.624936071Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a16edbb6-7615-44df-a71f-3abcc48b5ede name=/runtime.v1.RuntimeService/Version
	Jun 17 11:07:44 ha-064080 crio[680]: time="2024-06-17 11:07:44.626011150Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9bfacc16-2e69-44fb-bb2b-79b6569c7b04 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:07:44 ha-064080 crio[680]: time="2024-06-17 11:07:44.626444175Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718622464626423112,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9bfacc16-2e69-44fb-bb2b-79b6569c7b04 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:07:44 ha-064080 crio[680]: time="2024-06-17 11:07:44.627074127Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=068c2f7d-373a-42e5-96fc-c33ebd97f238 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:07:44 ha-064080 crio[680]: time="2024-06-17 11:07:44.627145218Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=068c2f7d-373a-42e5-96fc-c33ebd97f238 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:07:44 ha-064080 crio[680]: time="2024-06-17 11:07:44.627368399Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1a562b9195d78591133b90abc121faa5dbf34feac5066f4f821669a5b8c27e85,PodSandboxId:32924073f320b5367b28757d06fe232b7af64ccf6539c044b32541c03c8b9cc7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718622197449697447,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-89r9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1a8712a-2ef7-4400-98c9-5cee97c0d721,},Annotations:map[string]string{io.kubernetes.container.hash: 85c5faa6,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3628888540ea5d9ce507b92a3b2e929cf72c29f17271ad882b6d18ce4cf6328,PodSandboxId:20be829b9ffef66a57eb936abd30f0a0daa6277806fc399919edde5c9193aa94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718622049377736889,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xbhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be37a6ec-2a49-4a56-b8a3-0da865edb05d,},Annotations:map[string]string{io.kubernetes.container.hash: caa2bf79,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10061c1b3dd4f2865f83bf729b221fef3435324d6cef9ceb1a6631e0ccefa31c,PodSandboxId:54a9c95a1ef70b178265a9c78e9dbcddfb9f8cb7ddc312e0e324a4f449b6ebc9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718622049372976570,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zv99k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
c2453fd4-894d-4212-bc48-1803e28ddba8,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9e113a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb9fa67df5a3f15517f0cc5493139c9ec692bbadbef748f1315698a8ae05601f,PodSandboxId:f9df57723b165a731e239a6ef5aa2bc8caad54a36061dfb7afcd1021c1962f8b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1718622049320124723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5646fca8-9ebc-47c1-b5ff-c87b0ed800d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75be2958,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be33376c9348ffc6f1e2f31be21508d4aa16ebb1729b2780dabed95ba3ec9bbc,PodSandboxId:f67453c7d28830b38751fef3fd549d9fc1c2196b59ab402fdb76c2baae9174af,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1718622047566527858,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-48mb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67422049-6637-4ca3-8bd1-2b47a265829d,},Annotations:map[string]string{io.kubernetes.container.hash: 6d02cd67,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8852bc2fd7b618e61e270006b27e8557aaf8230a9278a60245e25a23732a83eb,PodSandboxId:78661140f722ccccbbef01859ed0a403a118690cd55dd92f4d2cf08d1c03af3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171862204
5688267141,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dd48x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1bd1d47-a8a5-47a5-820c-dd86f7ea7765,},Annotations:map[string]string{io.kubernetes.container.hash: 8b6be506,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24495c319c5c94afe6d0b59a3e9bc367b4539472e5846002db4fc1b802fac288,PodSandboxId:502e1e8fec2b89c90310f59069521c2fdde5e165e725bec6e1cbab4ef89951dd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17186220280
13380615,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ffa31c75020c2c61ed38418bc6b9660,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddf5516bbfc1d7ca0c4a0ebc2026888f4c7754891f8a6cfa30b49ea80c4c6a1b,PodSandboxId:5f8d58d694025bb9c7d62e4497344e57a4f85fbaaacc72882f259fd69bf8b688,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718622025755316625,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a91621493b7895ffb468d74d39c887,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be01152b9ab18f70b88322e4262f33d332dd8aa951d6262c8ac130261de6479d,PodSandboxId:4b79ce1b27f110ccadaad87cef79c43a9db99fbaa28089b3617bf2d74bb5b811,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718622025707947555,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21807c08d0f93f57866ad62dca0e176d,},Annotations:map[string]string{io.kubernetes.container.hash: 8e9320c4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecbb08a618aa76655e33c89e573535ed17f386cc522fcc35722eeb4ad859a1ad,PodSandboxId:7293d250b3e0dd840434d7afd153d17ac7842ec4f356edd9bac3f40f6603de1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718622025699829962,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ca5c8841cd25b2122df7e1cad8d883e,},Annotations:map[string]string{io.kubernetes.container.hash: a022c9c1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60cc5a9cf66217b34591b28809211824808cb7da50dd0c7971be5bd514e3b328,PodSandboxId:cb4974ce47c357bdbcfd6dd322289bd64cf2cbb3c4a7ad3e2ee523444ebfc04e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718622025592353826,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99603afdeee0e2b8645e4cb7c5a1ed41,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=068c2f7d-373a-42e5-96fc-c33ebd97f238 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1a562b9195d78       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   32924073f320b       busybox-fc5497c4f-89r9v
	c3628888540ea       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   20be829b9ffef       coredns-7db6d8ff4d-xbhnm
	10061c1b3dd4f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   54a9c95a1ef70       coredns-7db6d8ff4d-zv99k
	bb9fa67df5a3f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   f9df57723b165       storage-provisioner
	be33376c9348f       docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266    6 minutes ago       Running             kindnet-cni               0                   f67453c7d2883       kindnet-48mb7
	8852bc2fd7b61       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      6 minutes ago       Running             kube-proxy                0                   78661140f722c       kube-proxy-dd48x
	24495c319c5c9       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   502e1e8fec2b8       kube-vip-ha-064080
	ddf5516bbfc1d       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      7 minutes ago       Running             kube-controller-manager   0                   5f8d58d694025       kube-controller-manager-ha-064080
	be01152b9ab18       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      7 minutes ago       Running             kube-apiserver            0                   4b79ce1b27f11       kube-apiserver-ha-064080
	ecbb08a618aa7       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   7293d250b3e0d       etcd-ha-064080
	60cc5a9cf6621       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      7 minutes ago       Running             kube-scheduler            0                   cb4974ce47c35       kube-scheduler-ha-064080
	
	
	==> coredns [10061c1b3dd4f2865f83bf729b221fef3435324d6cef9ceb1a6631e0ccefa31c] <==
	[INFO] 10.244.1.2:54092 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000245278s
	[INFO] 10.244.1.2:44037 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000591531s
	[INFO] 10.244.1.2:60098 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001588047s
	[INFO] 10.244.1.2:43747 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000095343s
	[INFO] 10.244.1.2:43363 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000301914s
	[INFO] 10.244.1.2:47475 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117378s
	[INFO] 10.244.2.2:50417 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002227444s
	[INFO] 10.244.2.2:60625 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001284466s
	[INFO] 10.244.2.2:49631 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000063512s
	[INFO] 10.244.2.2:60462 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075059s
	[INFO] 10.244.2.2:55188 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061001s
	[INFO] 10.244.0.4:44285 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114934s
	[INFO] 10.244.0.4:41654 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082437s
	[INFO] 10.244.1.2:41564 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167707s
	[INFO] 10.244.1.2:48527 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000199996s
	[INFO] 10.244.1.2:54645 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000101253s
	[INFO] 10.244.1.2:46137 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000161774s
	[INFO] 10.244.2.2:47749 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123256s
	[INFO] 10.244.2.2:44797 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000155611s
	[INFO] 10.244.0.4:57514 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00013406s
	[INFO] 10.244.1.2:57226 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001349s
	[INFO] 10.244.1.2:38456 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000150623s
	[INFO] 10.244.1.2:34565 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000206574s
	[INFO] 10.244.2.2:55350 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000181312s
	[INFO] 10.244.2.2:54665 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000284418s
	
	
	==> coredns [c3628888540ea5d9ce507b92a3b2e929cf72c29f17271ad882b6d18ce4cf6328] <==
	[INFO] 10.244.1.2:57521 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000455867s
	[INFO] 10.244.1.2:34642 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001672898s
	[INFO] 10.244.1.2:55414 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000439082s
	[INFO] 10.244.1.2:35407 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001712046s
	[INFO] 10.244.2.2:35032 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000113662s
	[INFO] 10.244.2.2:41388 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000113624s
	[INFO] 10.244.0.4:54403 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009057413s
	[INFO] 10.244.0.4:55736 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00029139s
	[INFO] 10.244.0.4:56993 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000168668s
	[INFO] 10.244.0.4:54854 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168204s
	[INFO] 10.244.1.2:39920 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000461115s
	[INFO] 10.244.1.2:59121 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005103552s
	[INFO] 10.244.2.2:33690 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000260726s
	[INFO] 10.244.2.2:40819 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103621s
	[INFO] 10.244.2.2:47624 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000173244s
	[INFO] 10.244.0.4:45570 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101008s
	[INFO] 10.244.0.4:38238 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096216s
	[INFO] 10.244.2.2:47491 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144426s
	[INFO] 10.244.2.2:57595 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010924s
	[INFO] 10.244.0.4:37645 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011472s
	[INFO] 10.244.0.4:40937 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000173334s
	[INFO] 10.244.0.4:38240 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00010406s
	[INFO] 10.244.1.2:51662 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000104731s
	[INFO] 10.244.2.2:33365 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000139748s
	[INFO] 10.244.2.2:44022 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000178435s
	
	
	==> describe nodes <==
	Name:               ha-064080
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-064080
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6
	                    minikube.k8s.io/name=ha-064080
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_17T11_00_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jun 2024 11:00:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-064080
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jun 2024 11:07:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jun 2024 11:03:35 +0000   Mon, 17 Jun 2024 11:00:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jun 2024 11:03:35 +0000   Mon, 17 Jun 2024 11:00:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jun 2024 11:03:35 +0000   Mon, 17 Jun 2024 11:00:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jun 2024 11:03:35 +0000   Mon, 17 Jun 2024 11:00:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.134
	  Hostname:    ha-064080
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f526834e1094a1798c2f7e5de014d6a
	  System UUID:                6f526834-e109-4a17-98c2-f7e5de014d6a
	  Boot ID:                    7c18f343-1055-464d-948c-cec47020ebb1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-89r9v              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 coredns-7db6d8ff4d-xbhnm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m59s
	  kube-system                 coredns-7db6d8ff4d-zv99k             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m59s
	  kube-system                 etcd-ha-064080                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m12s
	  kube-system                 kindnet-48mb7                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m59s
	  kube-system                 kube-apiserver-ha-064080             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m12s
	  kube-system                 kube-controller-manager-ha-064080    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m12s
	  kube-system                 kube-proxy-dd48x                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m59s
	  kube-system                 kube-scheduler-ha-064080             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m12s
	  kube-system                 kube-vip-ha-064080                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m12s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m59s  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m20s  kubelet          Node ha-064080 status is now: NodeHasSufficientMemory
	  Normal  Starting                 7m12s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m12s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m12s  kubelet          Node ha-064080 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m12s  kubelet          Node ha-064080 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m12s  kubelet          Node ha-064080 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m     node-controller  Node ha-064080 event: Registered Node ha-064080 in Controller
	  Normal  NodeReady                6m56s  kubelet          Node ha-064080 status is now: NodeReady
	  Normal  RegisteredNode           5m44s  node-controller  Node ha-064080 event: Registered Node ha-064080 in Controller
	  Normal  RegisteredNode           4m35s  node-controller  Node ha-064080 event: Registered Node ha-064080 in Controller
	
	
	Name:               ha-064080-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-064080-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6
	                    minikube.k8s.io/name=ha-064080
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_17T11_01_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jun 2024 11:01:41 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-064080-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jun 2024 11:04:14 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 17 Jun 2024 11:03:43 +0000   Mon, 17 Jun 2024 11:04:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 17 Jun 2024 11:03:43 +0000   Mon, 17 Jun 2024 11:04:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 17 Jun 2024 11:03:43 +0000   Mon, 17 Jun 2024 11:04:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 17 Jun 2024 11:03:43 +0000   Mon, 17 Jun 2024 11:04:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.104
	  Hostname:    ha-064080-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d22246006bf04dab820bccd210120c30
	  System UUID:                d2224600-6bf0-4dab-820b-ccd210120c30
	  Boot ID:                    096ef5df-247b-409d-8b96-8b6e8fade952
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-gf9j7                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 etcd-ha-064080-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m
	  kube-system                 kindnet-7cqp4                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m3s
	  kube-system                 kube-apiserver-ha-064080-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 kube-controller-manager-ha-064080-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 kube-proxy-l55dg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-scheduler-ha-064080-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 kube-vip-ha-064080-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m59s                kube-proxy       
	  Normal  NodeHasSufficientMemory  6m3s (x8 over 6m3s)  kubelet          Node ha-064080-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m3s (x8 over 6m3s)  kubelet          Node ha-064080-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m3s (x7 over 6m3s)  kubelet          Node ha-064080-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m                   node-controller  Node ha-064080-m02 event: Registered Node ha-064080-m02 in Controller
	  Normal  RegisteredNode           5m44s                node-controller  Node ha-064080-m02 event: Registered Node ha-064080-m02 in Controller
	  Normal  RegisteredNode           4m35s                node-controller  Node ha-064080-m02 event: Registered Node ha-064080-m02 in Controller
	  Normal  NodeNotReady             2m45s                node-controller  Node ha-064080-m02 status is now: NodeNotReady
	
	
	Name:               ha-064080-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-064080-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6
	                    minikube.k8s.io/name=ha-064080
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_17T11_02_54_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jun 2024 11:02:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-064080-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jun 2024 11:07:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jun 2024 11:03:20 +0000   Mon, 17 Jun 2024 11:02:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jun 2024 11:03:20 +0000   Mon, 17 Jun 2024 11:02:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jun 2024 11:03:20 +0000   Mon, 17 Jun 2024 11:02:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jun 2024 11:03:20 +0000   Mon, 17 Jun 2024 11:02:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.168
	  Hostname:    ha-064080-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 28a9e43ded0d41f5b6e29c37565b7ecd
	  System UUID:                28a9e43d-ed0d-41f5-b6e2-9c37565b7ecd
	  Boot ID:                    5bc25cc5-bd20-436d-b597-815c4183fd44
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wbcxx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 etcd-ha-064080-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m53s
	  kube-system                 kindnet-5mg7w                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m55s
	  kube-system                 kube-apiserver-ha-064080-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-controller-manager-ha-064080-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-proxy-gsph4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-scheduler-ha-064080-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-vip-ha-064080-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m49s                  kube-proxy       
	  Normal  RegisteredNode           4m55s                  node-controller  Node ha-064080-m03 event: Registered Node ha-064080-m03 in Controller
	  Normal  Starting                 4m55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m55s (x8 over 4m55s)  kubelet          Node ha-064080-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m55s (x8 over 4m55s)  kubelet          Node ha-064080-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m55s (x7 over 4m55s)  kubelet          Node ha-064080-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m51s                  node-controller  Node ha-064080-m03 event: Registered Node ha-064080-m03 in Controller
	  Normal  RegisteredNode           4m36s                  node-controller  Node ha-064080-m03 event: Registered Node ha-064080-m03 in Controller
	
	
	Name:               ha-064080-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-064080-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6
	                    minikube.k8s.io/name=ha-064080
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_17T11_03_52_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jun 2024 11:03:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-064080-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jun 2024 11:07:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jun 2024 11:04:22 +0000   Mon, 17 Jun 2024 11:03:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jun 2024 11:04:22 +0000   Mon, 17 Jun 2024 11:03:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jun 2024 11:04:22 +0000   Mon, 17 Jun 2024 11:03:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jun 2024 11:04:22 +0000   Mon, 17 Jun 2024 11:03:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.167
	  Hostname:    ha-064080-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 33fd5c3b11ee44e78fa203be011bc171
	  System UUID:                33fd5c3b-11ee-44e7-8fa2-03be011bc171
	  Boot ID:                    2f4f3a16-ace8-4d6c-84fb-de9f87bd3bc9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-pn664       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m54s
	  kube-system                 kube-proxy-7t8b9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m48s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m54s (x2 over 3m54s)  kubelet          Node ha-064080-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m54s (x2 over 3m54s)  kubelet          Node ha-064080-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m54s (x2 over 3m54s)  kubelet          Node ha-064080-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-064080-m04 event: Registered Node ha-064080-m04 in Controller
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-064080-m04 event: Registered Node ha-064080-m04 in Controller
	  Normal  RegisteredNode           3m50s                  node-controller  Node ha-064080-m04 event: Registered Node ha-064080-m04 in Controller
	  Normal  NodeReady                3m47s                  kubelet          Node ha-064080-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jun17 10:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050897] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040396] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jun17 11:00] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.382779] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.620187] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.883696] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.059639] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052376] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.200032] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.124990] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.278932] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.114561] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +4.787636] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.060887] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.333258] systemd-fstab-generator[1364]: Ignoring "noauto" option for root device
	[  +0.080001] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.043226] kauditd_printk_skb: 18 callbacks suppressed
	[ +14.410422] kauditd_printk_skb: 72 callbacks suppressed
	
	
	==> etcd [ecbb08a618aa76655e33c89e573535ed17f386cc522fcc35722eeb4ad859a1ad] <==
	{"level":"warn","ts":"2024-06-17T11:07:44.776934Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:07:44.837091Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:07:44.903237Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:07:44.910769Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:07:44.914303Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:07:44.925997Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:07:44.935782Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:07:44.937454Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:07:44.942085Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:07:44.945732Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:07:44.949434Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:07:44.960561Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:07:44.968739Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:07:44.97496Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:07:44.979002Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:07:44.983129Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:07:44.993196Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:07:44.998592Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:07:45.004992Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:07:45.008198Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:07:45.01162Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:07:45.01751Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:07:45.026477Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:07:45.033116Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-17T11:07:45.037801Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"52887eb9b9b3603c","from":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 11:07:45 up 7 min,  0 users,  load average: 0.11, 0.25, 0.15
	Linux ha-064080 5.10.207 #1 SMP Tue Jun 11 00:16:05 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [be33376c9348ffc6f1e2f31be21508d4aa16ebb1729b2780dabed95ba3ec9bbc] <==
	I0617 11:07:08.887411       1 main.go:250] Node ha-064080-m04 has CIDR [10.244.3.0/24] 
	I0617 11:07:18.894001       1 main.go:223] Handling node with IPs: map[192.168.39.134:{}]
	I0617 11:07:18.894035       1 main.go:227] handling current node
	I0617 11:07:18.894057       1 main.go:223] Handling node with IPs: map[192.168.39.104:{}]
	I0617 11:07:18.894062       1 main.go:250] Node ha-064080-m02 has CIDR [10.244.1.0/24] 
	I0617 11:07:18.894174       1 main.go:223] Handling node with IPs: map[192.168.39.168:{}]
	I0617 11:07:18.894196       1 main.go:250] Node ha-064080-m03 has CIDR [10.244.2.0/24] 
	I0617 11:07:18.894256       1 main.go:223] Handling node with IPs: map[192.168.39.167:{}]
	I0617 11:07:18.894278       1 main.go:250] Node ha-064080-m04 has CIDR [10.244.3.0/24] 
	I0617 11:07:28.903013       1 main.go:223] Handling node with IPs: map[192.168.39.134:{}]
	I0617 11:07:28.903178       1 main.go:227] handling current node
	I0617 11:07:28.903279       1 main.go:223] Handling node with IPs: map[192.168.39.104:{}]
	I0617 11:07:28.903302       1 main.go:250] Node ha-064080-m02 has CIDR [10.244.1.0/24] 
	I0617 11:07:28.903717       1 main.go:223] Handling node with IPs: map[192.168.39.168:{}]
	I0617 11:07:28.903749       1 main.go:250] Node ha-064080-m03 has CIDR [10.244.2.0/24] 
	I0617 11:07:28.903947       1 main.go:223] Handling node with IPs: map[192.168.39.167:{}]
	I0617 11:07:28.904027       1 main.go:250] Node ha-064080-m04 has CIDR [10.244.3.0/24] 
	I0617 11:07:38.911624       1 main.go:223] Handling node with IPs: map[192.168.39.134:{}]
	I0617 11:07:38.911809       1 main.go:227] handling current node
	I0617 11:07:38.911919       1 main.go:223] Handling node with IPs: map[192.168.39.104:{}]
	I0617 11:07:38.911952       1 main.go:250] Node ha-064080-m02 has CIDR [10.244.1.0/24] 
	I0617 11:07:38.912112       1 main.go:223] Handling node with IPs: map[192.168.39.168:{}]
	I0617 11:07:38.912156       1 main.go:250] Node ha-064080-m03 has CIDR [10.244.2.0/24] 
	I0617 11:07:38.912253       1 main.go:223] Handling node with IPs: map[192.168.39.167:{}]
	I0617 11:07:38.912282       1 main.go:250] Node ha-064080-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [be01152b9ab18f70b88322e4262f33d332dd8aa951d6262c8ac130261de6479d] <==
	I0617 11:00:30.680027       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0617 11:00:30.814793       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0617 11:00:30.826487       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.134]
	I0617 11:00:30.828692       1 controller.go:615] quota admission added evaluator for: endpoints
	I0617 11:00:30.835116       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0617 11:00:31.034420       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0617 11:00:31.973596       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0617 11:00:31.991917       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0617 11:00:32.152369       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0617 11:00:45.085731       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0617 11:00:45.139164       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0617 11:03:18.972773       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36002: use of closed network connection
	E0617 11:03:19.169073       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36020: use of closed network connection
	E0617 11:03:19.362615       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36040: use of closed network connection
	E0617 11:03:19.563329       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36048: use of closed network connection
	E0617 11:03:19.757426       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36064: use of closed network connection
	E0617 11:03:19.945092       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36080: use of closed network connection
	E0617 11:03:20.145130       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36096: use of closed network connection
	E0617 11:03:20.337642       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36120: use of closed network connection
	E0617 11:03:20.533153       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36130: use of closed network connection
	E0617 11:03:21.051574       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36176: use of closed network connection
	E0617 11:03:21.224412       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36196: use of closed network connection
	E0617 11:03:21.398178       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36202: use of closed network connection
	E0617 11:03:21.582316       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36230: use of closed network connection
	E0617 11:03:21.761691       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36250: use of closed network connection
	
	
	==> kube-controller-manager [ddf5516bbfc1d7ca0c4a0ebc2026888f4c7754891f8a6cfa30b49ea80c4c6a1b] <==
	I0617 11:01:41.607775       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-064080-m02\" does not exist"
	I0617 11:01:41.618447       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-064080-m02" podCIDRs=["10.244.1.0/24"]
	I0617 11:01:44.524682       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-064080-m02"
	I0617 11:02:50.140273       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-064080-m03\" does not exist"
	I0617 11:02:50.167343       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-064080-m03" podCIDRs=["10.244.2.0/24"]
	I0617 11:02:54.904612       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-064080-m03"
	I0617 11:03:15.989614       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="110.828346ms"
	I0617 11:03:16.080792       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.016669ms"
	I0617 11:03:16.370445       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="289.352101ms"
	E0617 11:03:16.370493       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0617 11:03:16.455715       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.172164ms"
	I0617 11:03:16.455828       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.806µs"
	I0617 11:03:17.906035       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.602032ms"
	I0617 11:03:17.906330       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.937µs"
	I0617 11:03:18.386282       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.050248ms"
	I0617 11:03:18.386405       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.51µs"
	I0617 11:03:18.522300       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.759806ms"
	I0617 11:03:18.522420       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.514µs"
	I0617 11:03:51.707736       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-064080-m04\" does not exist"
	I0617 11:03:51.750452       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-064080-m04" podCIDRs=["10.244.3.0/24"]
	I0617 11:03:54.930421       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-064080-m04"
	I0617 11:03:58.795377       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-064080-m04"
	I0617 11:04:59.855604       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-064080-m04"
	I0617 11:04:59.909559       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.648362ms"
	I0617 11:04:59.909950       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="240.965µs"
	
	
	==> kube-proxy [8852bc2fd7b618e61e270006b27e8557aaf8230a9278a60245e25a23732a83eb] <==
	I0617 11:00:45.839974       1 server_linux.go:69] "Using iptables proxy"
	I0617 11:00:45.852134       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.134"]
	I0617 11:00:45.900351       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0617 11:00:45.900415       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0617 11:00:45.900431       1 server_linux.go:165] "Using iptables Proxier"
	I0617 11:00:45.903094       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0617 11:00:45.903378       1 server.go:872] "Version info" version="v1.30.1"
	I0617 11:00:45.903428       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0617 11:00:45.904665       1 config.go:192] "Starting service config controller"
	I0617 11:00:45.904719       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0617 11:00:45.904750       1 config.go:101] "Starting endpoint slice config controller"
	I0617 11:00:45.904754       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0617 11:00:45.905445       1 config.go:319] "Starting node config controller"
	I0617 11:00:45.905486       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0617 11:00:46.004818       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0617 11:00:46.004906       1 shared_informer.go:320] Caches are synced for service config
	I0617 11:00:46.006354       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [60cc5a9cf66217b34591b28809211824808cb7da50dd0c7971be5bd514e3b328] <==
	I0617 11:03:15.919486       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="9d6036a9-d1e4-4f26-b6e9-e2c4fcaedace" pod="default/busybox-fc5497c4f-gf9j7" assumedNode="ha-064080-m02" currentNode="ha-064080-m03"
	E0617 11:03:15.930425       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-gf9j7\": pod busybox-fc5497c4f-gf9j7 is already assigned to node \"ha-064080-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-gf9j7" node="ha-064080-m03"
	E0617 11:03:15.930608       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 9d6036a9-d1e4-4f26-b6e9-e2c4fcaedace(default/busybox-fc5497c4f-gf9j7) was assumed on ha-064080-m03 but assigned to ha-064080-m02" pod="default/busybox-fc5497c4f-gf9j7"
	E0617 11:03:15.930681       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-gf9j7\": pod busybox-fc5497c4f-gf9j7 is already assigned to node \"ha-064080-m02\"" pod="default/busybox-fc5497c4f-gf9j7"
	I0617 11:03:15.930741       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-gf9j7" node="ha-064080-m02"
	E0617 11:03:15.991764       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-wbcxx\": pod busybox-fc5497c4f-wbcxx is already assigned to node \"ha-064080-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-wbcxx" node="ha-064080-m03"
	E0617 11:03:15.991917       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod edfb4a4d-9e05-4cbe-b0d9-f7a8c675ebff(default/busybox-fc5497c4f-wbcxx) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-wbcxx"
	E0617 11:03:15.991940       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-wbcxx\": pod busybox-fc5497c4f-wbcxx is already assigned to node \"ha-064080-m03\"" pod="default/busybox-fc5497c4f-wbcxx"
	I0617 11:03:15.991961       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-wbcxx" node="ha-064080-m03"
	E0617 11:03:15.999490       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-89r9v\": pod busybox-fc5497c4f-89r9v is already assigned to node \"ha-064080\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-89r9v" node="ha-064080"
	E0617 11:03:15.999654       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod f1a8712a-2ef7-4400-98c9-5cee97c0d721(default/busybox-fc5497c4f-89r9v) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-89r9v"
	E0617 11:03:16.001941       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-89r9v\": pod busybox-fc5497c4f-89r9v is already assigned to node \"ha-064080\"" pod="default/busybox-fc5497c4f-89r9v"
	I0617 11:03:16.002324       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-89r9v" node="ha-064080"
	E0617 11:03:16.265788       1 schedule_one.go:1072] "Error occurred" err="Pod default/busybox-fc5497c4f-4trmp is already present in the active queue" pod="default/busybox-fc5497c4f-4trmp"
	E0617 11:03:51.820684       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-bsscf\": pod kube-proxy-bsscf is already assigned to node \"ha-064080-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-bsscf" node="ha-064080-m04"
	E0617 11:03:51.820962       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 75b1d3a6-9828-4735-960f-8a8a2be059fb(kube-system/kube-proxy-bsscf) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-bsscf"
	E0617 11:03:51.821011       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-bsscf\": pod kube-proxy-bsscf is already assigned to node \"ha-064080-m04\"" pod="kube-system/kube-proxy-bsscf"
	I0617 11:03:51.821096       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-bsscf" node="ha-064080-m04"
	E0617 11:03:51.826037       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-pn664\": pod kindnet-pn664 is already assigned to node \"ha-064080-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-pn664" node="ha-064080-m04"
	E0617 11:03:51.826114       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 10fd4a11-f59e-4bed-b0aa-3b7989ff4517(kube-system/kindnet-pn664) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-pn664"
	E0617 11:03:51.826132       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-pn664\": pod kindnet-pn664 is already assigned to node \"ha-064080-m04\"" pod="kube-system/kindnet-pn664"
	I0617 11:03:51.826161       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-pn664" node="ha-064080-m04"
	E0617 11:03:51.875594       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-5vzgd\": pod kindnet-5vzgd is already assigned to node \"ha-064080-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-5vzgd" node="ha-064080-m04"
	E0617 11:03:51.875808       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-5vzgd\": pod kindnet-5vzgd is already assigned to node \"ha-064080-m04\"" pod="kube-system/kindnet-5vzgd"
	I0617 11:03:51.876183       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-5vzgd" node="ha-064080-m04"
	
	
	==> kubelet <==
	Jun 17 11:03:32 ha-064080 kubelet[1371]: E0617 11:03:32.162673    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 17 11:03:32 ha-064080 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 17 11:03:32 ha-064080 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 17 11:03:32 ha-064080 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 11:03:32 ha-064080 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 17 11:04:32 ha-064080 kubelet[1371]: E0617 11:04:32.163177    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 17 11:04:32 ha-064080 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 17 11:04:32 ha-064080 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 17 11:04:32 ha-064080 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 11:04:32 ha-064080 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 17 11:05:32 ha-064080 kubelet[1371]: E0617 11:05:32.161780    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 17 11:05:32 ha-064080 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 17 11:05:32 ha-064080 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 17 11:05:32 ha-064080 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 11:05:32 ha-064080 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 17 11:06:32 ha-064080 kubelet[1371]: E0617 11:06:32.161185    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 17 11:06:32 ha-064080 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 17 11:06:32 ha-064080 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 17 11:06:32 ha-064080 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 11:06:32 ha-064080 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 17 11:07:32 ha-064080 kubelet[1371]: E0617 11:07:32.160172    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 17 11:07:32 ha-064080 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 17 11:07:32 ha-064080 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 17 11:07:32 ha-064080 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 11:07:32 ha-064080 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-064080 -n ha-064080
helpers_test.go:261: (dbg) Run:  kubectl --context ha-064080 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (61.84s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (361.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-064080 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-064080 -v=7 --alsologtostderr
E0617 11:08:57.397211  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
E0617 11:09:25.083629  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-064080 -v=7 --alsologtostderr: exit status 82 (2m1.908851824s)

                                                
                                                
-- stdout --
	* Stopping node "ha-064080-m04"  ...
	* Stopping node "ha-064080-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 11:07:46.485659  136325 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:07:46.486109  136325 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:07:46.486123  136325 out.go:304] Setting ErrFile to fd 2...
	I0617 11:07:46.486129  136325 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:07:46.486404  136325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:07:46.486614  136325 out.go:298] Setting JSON to false
	I0617 11:07:46.486695  136325 mustload.go:65] Loading cluster: ha-064080
	I0617 11:07:46.487064  136325 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:07:46.487180  136325 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/config.json ...
	I0617 11:07:46.487447  136325 mustload.go:65] Loading cluster: ha-064080
	I0617 11:07:46.487678  136325 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:07:46.487720  136325 stop.go:39] StopHost: ha-064080-m04
	I0617 11:07:46.488261  136325 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:46.488315  136325 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:46.503320  136325 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38165
	I0617 11:07:46.503918  136325 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:46.504539  136325 main.go:141] libmachine: Using API Version  1
	I0617 11:07:46.504564  136325 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:46.504886  136325 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:46.507336  136325 out.go:177] * Stopping node "ha-064080-m04"  ...
	I0617 11:07:46.508715  136325 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0617 11:07:46.508758  136325 main.go:141] libmachine: (ha-064080-m04) Calling .DriverName
	I0617 11:07:46.509006  136325 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0617 11:07:46.509031  136325 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHHostname
	I0617 11:07:46.512392  136325 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:07:46.512820  136325 main.go:141] libmachine: (ha-064080-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:60:46", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:03:36 +0000 UTC Type:0 Mac:52:54:00:51:60:46 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-064080-m04 Clientid:01:52:54:00:51:60:46}
	I0617 11:07:46.512847  136325 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined IP address 192.168.39.167 and MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:07:46.512981  136325 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHPort
	I0617 11:07:46.513167  136325 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHKeyPath
	I0617 11:07:46.513470  136325 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHUsername
	I0617 11:07:46.513626  136325 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m04/id_rsa Username:docker}
	I0617 11:07:46.599905  136325 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0617 11:07:46.654023  136325 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0617 11:07:46.709962  136325 main.go:141] libmachine: Stopping "ha-064080-m04"...
	I0617 11:07:46.710003  136325 main.go:141] libmachine: (ha-064080-m04) Calling .GetState
	I0617 11:07:46.711515  136325 main.go:141] libmachine: (ha-064080-m04) Calling .Stop
	I0617 11:07:46.715551  136325 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 0/120
	I0617 11:07:47.941849  136325 main.go:141] libmachine: (ha-064080-m04) Calling .GetState
	I0617 11:07:47.943581  136325 main.go:141] libmachine: Machine "ha-064080-m04" was stopped.
	I0617 11:07:47.943599  136325 stop.go:75] duration metric: took 1.434901813s to stop
	I0617 11:07:47.943617  136325 stop.go:39] StopHost: ha-064080-m03
	I0617 11:07:47.943903  136325 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:07:47.943940  136325 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:07:47.958307  136325 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45759
	I0617 11:07:47.958707  136325 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:07:47.959235  136325 main.go:141] libmachine: Using API Version  1
	I0617 11:07:47.959257  136325 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:07:47.959633  136325 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:07:47.961810  136325 out.go:177] * Stopping node "ha-064080-m03"  ...
	I0617 11:07:47.963005  136325 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0617 11:07:47.963027  136325 main.go:141] libmachine: (ha-064080-m03) Calling .DriverName
	I0617 11:07:47.963254  136325 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0617 11:07:47.963298  136325 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHHostname
	I0617 11:07:47.966122  136325 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:07:47.966634  136325 main.go:141] libmachine: (ha-064080-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:31:82", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:02:14 +0000 UTC Type:0 Mac:52:54:00:97:31:82 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-064080-m03 Clientid:01:52:54:00:97:31:82}
	I0617 11:07:47.966665  136325 main.go:141] libmachine: (ha-064080-m03) DBG | domain ha-064080-m03 has defined IP address 192.168.39.168 and MAC address 52:54:00:97:31:82 in network mk-ha-064080
	I0617 11:07:47.966797  136325 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHPort
	I0617 11:07:47.966964  136325 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHKeyPath
	I0617 11:07:47.967111  136325 main.go:141] libmachine: (ha-064080-m03) Calling .GetSSHUsername
	I0617 11:07:47.967267  136325 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m03/id_rsa Username:docker}
	I0617 11:07:48.050562  136325 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0617 11:07:48.104572  136325 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0617 11:07:48.158705  136325 main.go:141] libmachine: Stopping "ha-064080-m03"...
	I0617 11:07:48.158732  136325 main.go:141] libmachine: (ha-064080-m03) Calling .GetState
	I0617 11:07:48.160245  136325 main.go:141] libmachine: (ha-064080-m03) Calling .Stop
	I0617 11:07:48.163601  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 0/120
	I0617 11:07:49.165780  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 1/120
	I0617 11:07:50.167291  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 2/120
	I0617 11:07:51.168415  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 3/120
	I0617 11:07:52.169779  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 4/120
	I0617 11:07:53.172064  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 5/120
	I0617 11:07:54.173920  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 6/120
	I0617 11:07:55.175444  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 7/120
	I0617 11:07:56.176898  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 8/120
	I0617 11:07:57.178488  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 9/120
	I0617 11:07:58.180343  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 10/120
	I0617 11:07:59.181806  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 11/120
	I0617 11:08:00.183526  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 12/120
	I0617 11:08:01.184975  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 13/120
	I0617 11:08:02.186342  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 14/120
	I0617 11:08:03.188175  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 15/120
	I0617 11:08:04.189642  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 16/120
	I0617 11:08:05.191047  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 17/120
	I0617 11:08:06.192721  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 18/120
	I0617 11:08:07.194180  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 19/120
	I0617 11:08:08.196004  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 20/120
	I0617 11:08:09.198221  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 21/120
	I0617 11:08:10.199535  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 22/120
	I0617 11:08:11.201102  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 23/120
	I0617 11:08:12.202238  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 24/120
	I0617 11:08:13.204467  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 25/120
	I0617 11:08:14.205989  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 26/120
	I0617 11:08:15.207838  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 27/120
	I0617 11:08:16.209204  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 28/120
	I0617 11:08:17.211071  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 29/120
	I0617 11:08:18.212512  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 30/120
	I0617 11:08:19.214065  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 31/120
	I0617 11:08:20.215817  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 32/120
	I0617 11:08:21.217126  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 33/120
	I0617 11:08:22.219106  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 34/120
	I0617 11:08:23.221073  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 35/120
	I0617 11:08:24.222613  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 36/120
	I0617 11:08:25.224045  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 37/120
	I0617 11:08:26.225981  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 38/120
	I0617 11:08:27.227371  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 39/120
	I0617 11:08:28.229110  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 40/120
	I0617 11:08:29.230475  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 41/120
	I0617 11:08:30.231746  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 42/120
	I0617 11:08:31.234001  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 43/120
	I0617 11:08:32.235371  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 44/120
	I0617 11:08:33.237036  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 45/120
	I0617 11:08:34.238351  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 46/120
	I0617 11:08:35.239761  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 47/120
	I0617 11:08:36.241096  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 48/120
	I0617 11:08:37.242304  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 49/120
	I0617 11:08:38.243711  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 50/120
	I0617 11:08:39.245083  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 51/120
	I0617 11:08:40.246477  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 52/120
	I0617 11:08:41.247922  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 53/120
	I0617 11:08:42.249163  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 54/120
	I0617 11:08:43.251103  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 55/120
	I0617 11:08:44.252538  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 56/120
	I0617 11:08:45.253961  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 57/120
	I0617 11:08:46.255415  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 58/120
	I0617 11:08:47.256854  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 59/120
	I0617 11:08:48.259055  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 60/120
	I0617 11:08:49.260653  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 61/120
	I0617 11:08:50.262372  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 62/120
	I0617 11:08:51.264175  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 63/120
	I0617 11:08:52.265594  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 64/120
	I0617 11:08:53.266985  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 65/120
	I0617 11:08:54.268497  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 66/120
	I0617 11:08:55.269825  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 67/120
	I0617 11:08:56.271293  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 68/120
	I0617 11:08:57.272728  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 69/120
	I0617 11:08:58.274592  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 70/120
	I0617 11:08:59.276065  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 71/120
	I0617 11:09:00.277388  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 72/120
	I0617 11:09:01.278850  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 73/120
	I0617 11:09:02.280250  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 74/120
	I0617 11:09:03.281951  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 75/120
	I0617 11:09:04.283326  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 76/120
	I0617 11:09:05.284636  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 77/120
	I0617 11:09:06.286063  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 78/120
	I0617 11:09:07.287568  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 79/120
	I0617 11:09:08.288912  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 80/120
	I0617 11:09:09.290320  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 81/120
	I0617 11:09:10.291592  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 82/120
	I0617 11:09:11.292908  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 83/120
	I0617 11:09:12.294212  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 84/120
	I0617 11:09:13.295576  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 85/120
	I0617 11:09:14.296947  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 86/120
	I0617 11:09:15.298560  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 87/120
	I0617 11:09:16.299962  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 88/120
	I0617 11:09:17.301206  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 89/120
	I0617 11:09:18.302951  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 90/120
	I0617 11:09:19.304355  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 91/120
	I0617 11:09:20.305817  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 92/120
	I0617 11:09:21.307265  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 93/120
	I0617 11:09:22.308537  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 94/120
	I0617 11:09:23.310333  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 95/120
	I0617 11:09:24.311621  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 96/120
	I0617 11:09:25.313070  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 97/120
	I0617 11:09:26.314312  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 98/120
	I0617 11:09:27.315564  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 99/120
	I0617 11:09:28.316874  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 100/120
	I0617 11:09:29.318240  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 101/120
	I0617 11:09:30.319801  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 102/120
	I0617 11:09:31.321165  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 103/120
	I0617 11:09:32.322579  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 104/120
	I0617 11:09:33.324344  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 105/120
	I0617 11:09:34.325647  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 106/120
	I0617 11:09:35.326979  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 107/120
	I0617 11:09:36.328524  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 108/120
	I0617 11:09:37.329775  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 109/120
	I0617 11:09:38.331251  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 110/120
	I0617 11:09:39.332654  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 111/120
	I0617 11:09:40.333946  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 112/120
	I0617 11:09:41.335435  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 113/120
	I0617 11:09:42.336773  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 114/120
	I0617 11:09:43.338680  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 115/120
	I0617 11:09:44.340179  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 116/120
	I0617 11:09:45.341502  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 117/120
	I0617 11:09:46.342847  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 118/120
	I0617 11:09:47.344258  136325 main.go:141] libmachine: (ha-064080-m03) Waiting for machine to stop 119/120
	I0617 11:09:48.345072  136325 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0617 11:09:48.345156  136325 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0617 11:09:48.347152  136325 out.go:177] 
	W0617 11:09:48.348533  136325 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0617 11:09:48.348553  136325 out.go:239] * 
	* 
	W0617 11:09:48.350758  136325 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 11:09:48.352026  136325 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-064080 -v=7 --alsologtostderr" : exit status 82
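The stop failed because libmachine polled the m03 guest 120 times at roughly one-second intervals (the "Waiting for machine to stop .../120" lines above) without the domain ever leaving the "Running" state, so minikube gave up with GUEST_STOP_TIMEOUT and the harness saw exit status 82. A minimal manual-cleanup sketch for the CI host is below; it assumes libvirt shell access and reuses the domain name from the log, and it is not part of the test harness:

	# check what libvirt thinks the stuck guest is doing
	virsh domstate ha-064080-m03
	# retry a graceful ACPI shutdown, then re-check after a short wait
	virsh shutdown ha-064080-m03 && sleep 30 && virsh domstate ha-064080-m03
	# last resort: hard power-off the domain
	virsh destroy ha-064080-m03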
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-064080 --wait=true -v=7 --alsologtostderr
E0617 11:11:51.169420  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
E0617 11:13:14.217981  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-064080 --wait=true -v=7 --alsologtostderr: (3m56.959839626s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-064080
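Taken together with the Audit table below, the sequence the harness drove was: a cluster stop that timed out, a full restart with --wait=true that completed in about 3m57s, and a node list check. A rough by-hand reproduction, assuming the same workspace layout and profile name as this run, would be:

	# stop the HA cluster (this is the step that hit GUEST_STOP_TIMEOUT here)
	out/minikube-linux-amd64 stop -p ha-064080 -v=7 --alsologtostderr
	# bring all nodes back and wait for every component to be ready
	out/minikube-linux-amd64 start -p ha-064080 --wait=true -v=7 --alsologtostderr
	# list the nodes the profile knows about
	out/minikube-linux-amd64 node list -p ha-064080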
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-064080 -n ha-064080
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-064080 logs -n 25: (1.739830665s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-064080 cp ha-064080-m03:/home/docker/cp-test.txt                              | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m02:/home/docker/cp-test_ha-064080-m03_ha-064080-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n                                                                 | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n ha-064080-m02 sudo cat                                          | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | /home/docker/cp-test_ha-064080-m03_ha-064080-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-064080 cp ha-064080-m03:/home/docker/cp-test.txt                              | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m04:/home/docker/cp-test_ha-064080-m03_ha-064080-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n                                                                 | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n ha-064080-m04 sudo cat                                          | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | /home/docker/cp-test_ha-064080-m03_ha-064080-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-064080 cp testdata/cp-test.txt                                                | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n                                                                 | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-064080 cp ha-064080-m04:/home/docker/cp-test.txt                              | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4010822866/001/cp-test_ha-064080-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n                                                                 | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-064080 cp ha-064080-m04:/home/docker/cp-test.txt                              | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080:/home/docker/cp-test_ha-064080-m04_ha-064080.txt                       |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n                                                                 | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n ha-064080 sudo cat                                              | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | /home/docker/cp-test_ha-064080-m04_ha-064080.txt                                 |           |         |         |                     |                     |
	| cp      | ha-064080 cp ha-064080-m04:/home/docker/cp-test.txt                              | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m02:/home/docker/cp-test_ha-064080-m04_ha-064080-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n                                                                 | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n ha-064080-m02 sudo cat                                          | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | /home/docker/cp-test_ha-064080-m04_ha-064080-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-064080 cp ha-064080-m04:/home/docker/cp-test.txt                              | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m03:/home/docker/cp-test_ha-064080-m04_ha-064080-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n                                                                 | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n ha-064080-m03 sudo cat                                          | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | /home/docker/cp-test_ha-064080-m04_ha-064080-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-064080 node stop m02 -v=7                                                     | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-064080 node start m02 -v=7                                                    | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-064080 -v=7                                                           | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:07 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-064080 -v=7                                                                | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:07 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-064080 --wait=true -v=7                                                    | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:09 UTC | 17 Jun 24 11:13 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-064080                                                                | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:13 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/17 11:09:48
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0617 11:09:48.398657  136825 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:09:48.398794  136825 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:09:48.398806  136825 out.go:304] Setting ErrFile to fd 2...
	I0617 11:09:48.398812  136825 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:09:48.398980  136825 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:09:48.399493  136825 out.go:298] Setting JSON to false
	I0617 11:09:48.400491  136825 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":3135,"bootTime":1718619453,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0617 11:09:48.400554  136825 start.go:139] virtualization: kvm guest
	I0617 11:09:48.402812  136825 out.go:177] * [ha-064080] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0617 11:09:48.404007  136825 out.go:177]   - MINIKUBE_LOCATION=19084
	I0617 11:09:48.405218  136825 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 11:09:48.404022  136825 notify.go:220] Checking for updates...
	I0617 11:09:48.407561  136825 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 11:09:48.408735  136825 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 11:09:48.409902  136825 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0617 11:09:48.411124  136825 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 11:09:48.412831  136825 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:09:48.412921  136825 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 11:09:48.413381  136825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:09:48.413450  136825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:09:48.430477  136825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36189
	I0617 11:09:48.430912  136825 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:09:48.431504  136825 main.go:141] libmachine: Using API Version  1
	I0617 11:09:48.431527  136825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:09:48.431887  136825 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:09:48.432068  136825 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:09:48.465385  136825 out.go:177] * Using the kvm2 driver based on existing profile
	I0617 11:09:48.466712  136825 start.go:297] selected driver: kvm2
	I0617 11:09:48.466724  136825 start.go:901] validating driver "kvm2" against &{Name:ha-064080 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.1 ClusterName:ha-064080 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.167 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false e
fk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:09:48.466854  136825 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 11:09:48.467207  136825 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:09:48.467271  136825 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19084-112967/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0617 11:09:48.481316  136825 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0617 11:09:48.482008  136825 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 11:09:48.482042  136825 cni.go:84] Creating CNI manager for ""
	I0617 11:09:48.482049  136825 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0617 11:09:48.482101  136825 start.go:340] cluster config:
	{Name:ha-064080 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-064080 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.167 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-ti
ller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:09:48.482206  136825 iso.go:125] acquiring lock: {Name:mk4a199ad46ed9ee04de7b54caf7cc64218fe80c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:09:48.484702  136825 out.go:177] * Starting "ha-064080" primary control-plane node in "ha-064080" cluster
	I0617 11:09:48.485829  136825 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 11:09:48.485869  136825 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0617 11:09:48.485883  136825 cache.go:56] Caching tarball of preloaded images
	I0617 11:09:48.485972  136825 preload.go:173] Found /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0617 11:09:48.485984  136825 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0617 11:09:48.486110  136825 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/config.json ...
	I0617 11:09:48.486312  136825 start.go:360] acquireMachinesLock for ha-064080: {Name:mk519b8956d160a9d2b042f25b899a5ee0efa72e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 11:09:48.486365  136825 start.go:364] duration metric: took 34.608µs to acquireMachinesLock for "ha-064080"
	I0617 11:09:48.486385  136825 start.go:96] Skipping create...Using existing machine configuration
	I0617 11:09:48.486395  136825 fix.go:54] fixHost starting: 
	I0617 11:09:48.486685  136825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:09:48.486722  136825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:09:48.500157  136825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37679
	I0617 11:09:48.500566  136825 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:09:48.501073  136825 main.go:141] libmachine: Using API Version  1
	I0617 11:09:48.501096  136825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:09:48.501389  136825 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:09:48.501581  136825 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:09:48.501726  136825 main.go:141] libmachine: (ha-064080) Calling .GetState
	I0617 11:09:48.503158  136825 fix.go:112] recreateIfNeeded on ha-064080: state=Running err=<nil>
	W0617 11:09:48.503176  136825 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 11:09:48.505070  136825 out.go:177] * Updating the running kvm2 "ha-064080" VM ...
	I0617 11:09:48.506472  136825 machine.go:94] provisionDockerMachine start ...
	I0617 11:09:48.506490  136825 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:09:48.506671  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:09:48.508895  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:09:48.509343  136825 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:09:48.509382  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:09:48.509481  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:09:48.509638  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:09:48.509796  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:09:48.509930  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:09:48.510157  136825 main.go:141] libmachine: Using SSH client type: native
	I0617 11:09:48.510342  136825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0617 11:09:48.510354  136825 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 11:09:48.617516  136825 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-064080
	
	I0617 11:09:48.617546  136825 main.go:141] libmachine: (ha-064080) Calling .GetMachineName
	I0617 11:09:48.617856  136825 buildroot.go:166] provisioning hostname "ha-064080"
	I0617 11:09:48.617891  136825 main.go:141] libmachine: (ha-064080) Calling .GetMachineName
	I0617 11:09:48.618151  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:09:48.620987  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:09:48.621373  136825 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:09:48.621414  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:09:48.621518  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:09:48.621688  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:09:48.621849  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:09:48.621994  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:09:48.622176  136825 main.go:141] libmachine: Using SSH client type: native
	I0617 11:09:48.622436  136825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0617 11:09:48.622456  136825 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-064080 && echo "ha-064080" | sudo tee /etc/hostname
	I0617 11:09:48.740060  136825 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-064080
	
	I0617 11:09:48.740086  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:09:48.742610  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:09:48.743014  136825 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:09:48.743058  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:09:48.743211  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:09:48.743396  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:09:48.743602  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:09:48.743744  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:09:48.743937  136825 main.go:141] libmachine: Using SSH client type: native
	I0617 11:09:48.744126  136825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0617 11:09:48.744141  136825 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-064080' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-064080/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-064080' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 11:09:48.844529  136825 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 11:09:48.844563  136825 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 11:09:48.844587  136825 buildroot.go:174] setting up certificates
	I0617 11:09:48.844604  136825 provision.go:84] configureAuth start
	I0617 11:09:48.844616  136825 main.go:141] libmachine: (ha-064080) Calling .GetMachineName
	I0617 11:09:48.844960  136825 main.go:141] libmachine: (ha-064080) Calling .GetIP
	I0617 11:09:48.847691  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:09:48.848113  136825 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:09:48.848146  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:09:48.848271  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:09:48.850240  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:09:48.850507  136825 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:09:48.850536  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:09:48.850709  136825 provision.go:143] copyHostCerts
	I0617 11:09:48.850745  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 11:09:48.850804  136825 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 11:09:48.850813  136825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 11:09:48.850900  136825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 11:09:48.851032  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 11:09:48.851058  136825 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 11:09:48.851063  136825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 11:09:48.851097  136825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 11:09:48.851155  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 11:09:48.851171  136825 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 11:09:48.851178  136825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 11:09:48.851206  136825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 11:09:48.851264  136825 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.ha-064080 san=[127.0.0.1 192.168.39.134 ha-064080 localhost minikube]
	I0617 11:09:48.938016  136825 provision.go:177] copyRemoteCerts
	I0617 11:09:48.938070  136825 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 11:09:48.938092  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:09:48.940751  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:09:48.941153  136825 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:09:48.941180  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:09:48.941376  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:09:48.941577  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:09:48.941758  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:09:48.941938  136825 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:09:49.021447  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0617 11:09:49.021514  136825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 11:09:49.047065  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0617 11:09:49.047145  136825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0617 11:09:49.076594  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0617 11:09:49.076672  136825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0617 11:09:49.102893  136825 provision.go:87] duration metric: took 258.274028ms to configureAuth
	I0617 11:09:49.102919  136825 buildroot.go:189] setting minikube options for container-runtime
	I0617 11:09:49.103127  136825 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:09:49.103194  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:09:49.105779  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:09:49.106191  136825 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:09:49.106221  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:09:49.106394  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:09:49.106653  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:09:49.106864  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:09:49.107061  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:09:49.107255  136825 main.go:141] libmachine: Using SSH client type: native
	I0617 11:09:49.107425  136825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0617 11:09:49.107440  136825 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 11:11:19.898652  136825 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 11:11:19.898684  136825 machine.go:97] duration metric: took 1m31.392195992s to provisionDockerMachine
	I0617 11:11:19.898696  136825 start.go:293] postStartSetup for "ha-064080" (driver="kvm2")
	I0617 11:11:19.898709  136825 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 11:11:19.898735  136825 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:11:19.899122  136825 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 11:11:19.899162  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:11:19.902350  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:11:19.902763  136825 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:11:19.902790  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:11:19.903012  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:11:19.903192  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:11:19.903362  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:11:19.903501  136825 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:11:19.982868  136825 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 11:11:19.987375  136825 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 11:11:19.987401  136825 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 11:11:19.987502  136825 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 11:11:19.987617  136825 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 11:11:19.987632  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> /etc/ssl/certs/1201742.pem
	I0617 11:11:19.987747  136825 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 11:11:19.996600  136825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 11:11:20.020625  136825 start.go:296] duration metric: took 121.913621ms for postStartSetup
	I0617 11:11:20.020673  136825 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:11:20.020960  136825 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0617 11:11:20.020990  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:11:20.023687  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:11:20.024037  136825 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:11:20.024065  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:11:20.024190  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:11:20.024367  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:11:20.024546  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:11:20.024667  136825 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	W0617 11:11:20.105446  136825 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0617 11:11:20.105477  136825 fix.go:56] duration metric: took 1m31.619083719s for fixHost
	I0617 11:11:20.105497  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:11:20.107862  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:11:20.108257  136825 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:11:20.108282  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:11:20.108458  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:11:20.108626  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:11:20.108808  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:11:20.108937  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:11:20.109138  136825 main.go:141] libmachine: Using SSH client type: native
	I0617 11:11:20.109345  136825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0617 11:11:20.109368  136825 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0617 11:11:20.208456  136825 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718622680.169217752
	
	I0617 11:11:20.208478  136825 fix.go:216] guest clock: 1718622680.169217752
	I0617 11:11:20.208487  136825 fix.go:229] Guest: 2024-06-17 11:11:20.169217752 +0000 UTC Remote: 2024-06-17 11:11:20.10548393 +0000 UTC m=+91.742439711 (delta=63.733822ms)
	I0617 11:11:20.208513  136825 fix.go:200] guest clock delta is within tolerance: 63.733822ms
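The guest clock is sampled by running a plain "date +%s.%N" over SSH and compared with the host's wall clock; a delta under the tolerance (here ~64ms) is accepted without resynchronizing. A rough manual equivalent, assuming the minikube binary and the ha-064080 profile are available (a sketch, not the exact code path):

	# host timestamp, then guest timestamp over SSH; the difference is the clock delta
	host_ts=$(date +%s.%N)
	guest_ts=$(minikube -p ha-064080 ssh -- date +%s.%N)
	awk -v g="$guest_ts" -v h="$host_ts" 'BEGIN { printf "delta: %.6f seconds\n", g - h }'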
	I0617 11:11:20.208519  136825 start.go:83] releasing machines lock for "ha-064080", held for 1m31.722142757s
	I0617 11:11:20.208544  136825 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:11:20.208788  136825 main.go:141] libmachine: (ha-064080) Calling .GetIP
	I0617 11:11:20.211255  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:11:20.211662  136825 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:11:20.211690  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:11:20.211823  136825 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:11:20.212481  136825 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:11:20.212678  136825 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:11:20.212781  136825 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 11:11:20.212839  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:11:20.212889  136825 ssh_runner.go:195] Run: cat /version.json
	I0617 11:11:20.212910  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:11:20.215256  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:11:20.215504  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:11:20.215695  136825 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:11:20.215716  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:11:20.215863  136825 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:11:20.215889  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:11:20.215895  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:11:20.216060  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:11:20.216080  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:11:20.216233  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:11:20.216265  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:11:20.216378  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:11:20.216371  136825 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:11:20.216501  136825 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:11:20.319560  136825 ssh_runner.go:195] Run: systemctl --version
	I0617 11:11:20.325852  136825 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 11:11:20.505163  136825 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 11:11:20.511134  136825 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 11:11:20.511188  136825 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 11:11:20.520615  136825 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
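The find above moves any pre-existing bridge or podman CNI configs out of the way so that the CNI minikube picks for this cluster (kindnet, selected further down) is the only active one. A cleaned-up, runnable form of the same step (a sketch):

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;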
	I0617 11:11:20.520633  136825 start.go:494] detecting cgroup driver to use...
	I0617 11:11:20.520681  136825 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 11:11:20.537682  136825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 11:11:20.551057  136825 docker.go:217] disabling cri-docker service (if available) ...
	I0617 11:11:20.551115  136825 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 11:11:20.564293  136825 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 11:11:20.577281  136825 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 11:11:20.726978  136825 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 11:11:20.876573  136825 docker.go:233] disabling docker service ...
	I0617 11:11:20.876636  136825 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 11:11:20.895391  136825 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 11:11:20.909474  136825 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 11:11:21.070275  136825 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 11:11:21.250399  136825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
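With containerd stopped and both docker and cri-dockerd stopped, disabled and masked, CRI-O is left as the only container runtime on the node. That state can be confirmed with (a sketch):

	systemctl is-active docker.service cri-docker.service containerd.service   # all should be inactive
	systemctl is-enabled docker.socket cri-docker.socket 2>/dev/null            # should report masked/disabled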
	I0617 11:11:21.264267  136825 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 11:11:21.282202  136825 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0617 11:11:21.282264  136825 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:11:21.292901  136825 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 11:11:21.292949  136825 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:11:21.303384  136825 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:11:21.313602  136825 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:11:21.323967  136825 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 11:11:21.334740  136825 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:11:21.345022  136825 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:11:21.355648  136825 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
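The sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, force conmon into the pod cgroup, and open unprivileged low ports via default_sysctls. The resulting drop-in can be spot-checked with (a sketch):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected (approximately):
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#     "net.ipv4.ip_unprivileged_port_start=0",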
	I0617 11:11:21.365991  136825 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 11:11:21.375675  136825 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 11:11:21.385536  136825 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:11:21.528128  136825 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0617 11:11:23.210110  136825 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.681937282s)
	I0617 11:11:23.210142  136825 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 11:11:23.210186  136825 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 11:11:23.215553  136825 start.go:562] Will wait 60s for crictl version
	I0617 11:11:23.215608  136825 ssh_runner.go:195] Run: which crictl
	I0617 11:11:23.219705  136825 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 11:11:23.264419  136825 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 11:11:23.264503  136825 ssh_runner.go:195] Run: crio --version
	I0617 11:11:23.293111  136825 ssh_runner.go:195] Run: crio --version
	I0617 11:11:23.323271  136825 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0617 11:11:23.324552  136825 main.go:141] libmachine: (ha-064080) Calling .GetIP
	I0617 11:11:23.326833  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:11:23.327178  136825 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:11:23.327203  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:11:23.327410  136825 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0617 11:11:23.332149  136825 kubeadm.go:877] updating cluster {Name:ha-064080 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-064080 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.167 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fr
eshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 11:11:23.332289  136825 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 11:11:23.332325  136825 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 11:11:23.380072  136825 crio.go:514] all images are preloaded for cri-o runtime.
	I0617 11:11:23.380093  136825 crio.go:433] Images already preloaded, skipping extraction
	I0617 11:11:23.380138  136825 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 11:11:23.414747  136825 crio.go:514] all images are preloaded for cri-o runtime.
	I0617 11:11:23.414770  136825 cache_images.go:84] Images are preloaded, skipping loading
	I0617 11:11:23.414778  136825 kubeadm.go:928] updating node { 192.168.39.134 8443 v1.30.1 crio true true} ...
	I0617 11:11:23.414913  136825 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-064080 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-064080 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 11:11:23.414977  136825 ssh_runner.go:195] Run: crio config
	I0617 11:11:23.463011  136825 cni.go:84] Creating CNI manager for ""
	I0617 11:11:23.463033  136825 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0617 11:11:23.463044  136825 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 11:11:23.463078  136825 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.134 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-064080 NodeName:ha-064080 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.134"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0617 11:11:23.463243  136825 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.134
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-064080"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.134
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.134"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
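	# The generated kubeadm config above is later written to /var/tmp/minikube/kubeadm.yaml.new.
	# It can be sanity-checked on the node before use (a sketch, assuming kubeadm 1.26+ which
	# ships the "config validate" subcommand):
	#   sudo /var/lib/minikube/binaries/v1.30.1/kubeadm config validate \
	#     --config /var/tmp/minikube/kubeadm.yaml.new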
	
	I0617 11:11:23.463277  136825 kube-vip.go:115] generating kube-vip config ...
	I0617 11:11:23.463327  136825 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0617 11:11:23.475045  136825 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0617 11:11:23.475167  136825 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
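kube-vip runs as a static pod on each control-plane node, elects a leader, and announces the HA virtual IP 192.168.39.254 on eth0 with load-balancing of port 8443 enabled (the modprobe above loads the IPVS modules it needs). Once it is up, the VIP can be checked from the node with (a sketch):

	ip -4 addr show eth0 | grep 192.168.39.254     # the current leader should hold the VIP
	curl -k https://192.168.39.254:8443/version    # API server reachable through the VIP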
	I0617 11:11:23.475230  136825 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0617 11:11:23.484739  136825 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 11:11:23.484793  136825 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0617 11:11:23.494187  136825 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0617 11:11:23.510953  136825 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 11:11:23.526999  136825 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0617 11:11:23.542733  136825 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0617 11:11:23.560265  136825 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0617 11:11:23.564017  136825 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:11:23.705555  136825 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 11:11:23.719783  136825 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080 for IP: 192.168.39.134
	I0617 11:11:23.719803  136825 certs.go:194] generating shared ca certs ...
	I0617 11:11:23.719818  136825 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:11:23.719971  136825 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 11:11:23.720015  136825 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 11:11:23.720026  136825 certs.go:256] generating profile certs ...
	I0617 11:11:23.720103  136825 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/client.key
	I0617 11:11:23.720130  136825 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key.3d13451a
	I0617 11:11:23.720142  136825 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt.3d13451a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.134 192.168.39.104 192.168.39.168 192.168.39.254]
	I0617 11:11:23.926959  136825 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt.3d13451a ...
	I0617 11:11:23.927002  136825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt.3d13451a: {Name:mkf23db52fa0c37b45a16435638efd3e756c2a96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:11:23.927191  136825 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key.3d13451a ...
	I0617 11:11:23.927210  136825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key.3d13451a: {Name:mkf5edeefbcc11f117bfa6526f88a192808900e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:11:23.927301  136825 certs.go:381] copying /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt.3d13451a -> /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt
	I0617 11:11:23.927477  136825 certs.go:385] copying /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key.3d13451a -> /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key
	I0617 11:11:23.927629  136825 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.key
	I0617 11:11:23.927653  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0617 11:11:23.927673  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0617 11:11:23.927687  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0617 11:11:23.927700  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0617 11:11:23.927710  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0617 11:11:23.927722  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0617 11:11:23.927739  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0617 11:11:23.927757  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0617 11:11:23.927825  136825 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 11:11:23.927889  136825 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 11:11:23.927906  136825 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 11:11:23.927936  136825 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 11:11:23.927971  136825 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 11:11:23.928002  136825 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 11:11:23.928254  136825 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 11:11:23.928324  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> /usr/share/ca-certificates/1201742.pem
	I0617 11:11:23.928348  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:11:23.928362  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem -> /usr/share/ca-certificates/120174.pem
	I0617 11:11:23.928936  136825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 11:11:23.954131  136825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 11:11:23.976866  136825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 11:11:23.999265  136825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 11:11:24.023308  136825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0617 11:11:24.047549  136825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 11:11:24.071380  136825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 11:11:24.094788  136825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0617 11:11:24.118243  136825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 11:11:24.140799  136825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 11:11:24.165257  136825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 11:11:24.188154  136825 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 11:11:24.204889  136825 ssh_runner.go:195] Run: openssl version
	I0617 11:11:24.210760  136825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 11:11:24.221994  136825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 11:11:24.226407  136825 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 11:11:24.226454  136825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 11:11:24.232080  136825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 11:11:24.241593  136825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 11:11:24.252466  136825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:11:24.256853  136825 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:11:24.256913  136825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:11:24.262416  136825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 11:11:24.271844  136825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 11:11:24.282304  136825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 11:11:24.286591  136825 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 11:11:24.286628  136825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 11:11:24.292288  136825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
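The .0 symlink names used above (3ec20f2e.0, b5213941.0, 51391683.0) are the OpenSSL subject hashes of the corresponding certificates, which is how OpenSSL finds trusted CAs in /etc/ssl/certs. For example (a sketch):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/b5213941.0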
	I0617 11:11:24.302081  136825 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 11:11:24.306822  136825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 11:11:24.312261  136825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 11:11:24.317671  136825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 11:11:24.323215  136825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 11:11:24.328591  136825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 11:11:24.333854  136825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
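Each of these checks relies on openssl's -checkend flag: exit status 0 means the certificate will still be valid 86400 seconds (24 hours) from now, non-zero means it expires within that window and would trigger regeneration. For example (a sketch):

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "still valid in 24h" || echo "expires within 24h"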
	I0617 11:11:24.339230  136825 kubeadm.go:391] StartCluster: {Name:ha-064080 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-064080 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.167 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:11:24.339343  136825 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 11:11:24.339405  136825 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 11:11:24.386312  136825 cri.go:89] found id: "13dfc1d97da1ebe900004edc6f66944d67700c68bd776eb13ec1978e93be17c2"
	I0617 11:11:24.386341  136825 cri.go:89] found id: "90b32e823ebff65269c551766dabf0cfadd610d5a60174b2cf6d05f71a5c3178"
	I0617 11:11:24.386347  136825 cri.go:89] found id: "657d9008773f965813c77834ba72f323f34991f4cc2084fd22cb4542a2b16b8c"
	I0617 11:11:24.386351  136825 cri.go:89] found id: "c160ee1bc36a7d933b526f7ada2eb852e6f7e39ca8b4a45842d978857dcabe69"
	I0617 11:11:24.386354  136825 cri.go:89] found id: "c3628888540ea5d9ce507b92a3b2e929cf72c29f17271ad882b6d18ce4cf6328"
	I0617 11:11:24.386359  136825 cri.go:89] found id: "10061c1b3dd4f2865f83bf729b221fef3435324d6cef9ceb1a6631e0ccefa31c"
	I0617 11:11:24.386363  136825 cri.go:89] found id: "bb9fa67df5a3f15517f0cc5493139c9ec692bbadbef748f1315698a8ae05601f"
	I0617 11:11:24.386367  136825 cri.go:89] found id: "8852bc2fd7b618e61e270006b27e8557aaf8230a9278a60245e25a23732a83eb"
	I0617 11:11:24.386371  136825 cri.go:89] found id: "24495c319c5c94afe6d0b59a3e9bc367b4539472e5846002db4fc1b802fac288"
	I0617 11:11:24.386380  136825 cri.go:89] found id: "ddf5516bbfc1d7ca0c4a0ebc2026888f4c7754891f8a6cfa30b49ea80c4c6a1b"
	I0617 11:11:24.386384  136825 cri.go:89] found id: "be01152b9ab18f70b88322e4262f33d332dd8aa951d6262c8ac130261de6479d"
	I0617 11:11:24.386389  136825 cri.go:89] found id: "ecbb08a618aa76655e33c89e573535ed17f386cc522fcc35722eeb4ad859a1ad"
	I0617 11:11:24.386396  136825 cri.go:89] found id: "60cc5a9cf66217b34591b28809211824808cb7da50dd0c7971be5bd514e3b328"
	I0617 11:11:24.386400  136825 cri.go:89] found id: ""
	I0617 11:11:24.386452  136825 ssh_runner.go:195] Run: sudo runc list -f json
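Any of the container IDs listed above can be inspected directly on the node, for example (a sketch using the same tools the log invokes):

	sudo crictl inspect 13dfc1d97da1ebe900004edc6f66944d67700c68bd776eb13ec1978e93be17c2 | head -n 20
	sudo runc list -f json | head -n 20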
	
	
	==> CRI-O <==
	Jun 17 11:13:46 ha-064080 crio[3784]: time="2024-06-17 11:13:46.028472855Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=381a5eef-5ee3-4155-bc21-f6ae228b424f name=/runtime.v1.RuntimeService/Version
	Jun 17 11:13:46 ha-064080 crio[3784]: time="2024-06-17 11:13:46.030274850Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=48e7044d-c282-4b91-8c62-d52b1368e49e name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:13:46 ha-064080 crio[3784]: time="2024-06-17 11:13:46.031017183Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718622826030989373,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=48e7044d-c282-4b91-8c62-d52b1368e49e name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:13:46 ha-064080 crio[3784]: time="2024-06-17 11:13:46.031740363Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6905208a-68b6-46f9-a681-1d45b1f3727f name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:13:46 ha-064080 crio[3784]: time="2024-06-17 11:13:46.031892958Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6905208a-68b6-46f9-a681-1d45b1f3727f name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:13:46 ha-064080 crio[3784]: time="2024-06-17 11:13:46.032423925Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e968a7b99037fcd74cf96493f10b9e4b77571018045daa12bfa9faff24036da,PodSandboxId:c45cf10a39aca992e1f5aa28059659a69562166e059624924f451c30bc5f471d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718622771112810041,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-48mb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67422049-6637-4ca3-8bd1-2b47a265829d,},Annotations:map[string]string{io.kubernetes.container.hash: 6d02cd67,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f0bf1df40b97298fbc6f99a56b7f3d186bd75d4b0e97bffa9597b8c140f0fd,PodSandboxId:b48fa28479a6b2939fe045cf9861144e401584f195777c0c07873597a11f30f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718622768117408502,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5646fca8-9ebc-47c1-b5ff-c87b0ed800d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75be2958,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea168e43c8f58b627333f8db1fcab727151d0170538dd365a0ff2c14a670bc63,PodSandboxId:50c5f620e07d97bc6144ac71edf1a67807c6842ce54f118f971940733bc57c79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718622732111313128,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21807c08d0f93f57866ad62dca0e176d,},Annotations:map[string]string{io.kubernetes.container.hash: 8e9320c4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cca27b47119ee9b81f6755dc162135ff2de0238a503b8d7d8cd565cc8ddcaa9,PodSandboxId:882a4867b4a9f2d5466eb06baf2539f28edbdaedfada0afe6ff83a0002c0b4a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718622730116363577,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a91621493b7895ffb468d74d39c887,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1de1cbf3c4abe51b334ea608a299a78e7432c29baa71f527ba9b0e80bc238e68,PodSandboxId:052b729b7698a17e4b1d8bc05ee4c1ad4bbaa5ecb7a38010e8567c72d58bd82b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718622723419643562,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-89r9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1a8712a-2ef7-4400-98c9-5cee97c0d721,},Annotations:map[string]string{io.kubernetes.container.hash: 85c5faa6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5831ea6ee0c390e7ce915655860ab50d35ab3dd5fecf6fafbe17b03a4020ba0a,PodSandboxId:b48fa28479a6b2939fe045cf9861144e401584f195777c0c07873597a11f30f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718622722115437741,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5646fca8-9ebc-47c1-b5ff-c87b0ed800d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75be2958,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a7f74758193741ac9405c51c090135a9aeeeaf838bb9952389a636257a739b1,PodSandboxId:ff17fb7580a6762426d9bec4e02efcd5b13bcef21bdb6fe8667300f333069ae3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718622705731418482,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 329ab0752894b263f1a7ed6bf158ec63,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:acee1942b213b3a339a1c92af2a70571f5c7f4b96158320c3bb8f3f74d86a0b2,PodSandboxId:e2f59a6f0a7e947778f7ada7cd976150f38cf96e757e387f28c1c17b68a66e6d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718622690616082861,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dd48x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1bd1d47-a8a5-47a5-820c-dd86f7ea7765,},Annotations:map[string]string{io.kubernetes.container.hash: 8b6be506,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af9cf34
4f6b524475b47fa29673012301a355ef88398883d01606aee8cc859c,PodSandboxId:c45cf10a39aca992e1f5aa28059659a69562166e059624924f451c30bc5f471d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718622690194438452,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-48mb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67422049-6637-4ca3-8bd1-2b47a265829d,},Annotations:map[string]string{io.kubernetes.container.hash: 6d02cd67,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35caf65c401c62e881ba25e31f1b5557a5c63db1d4d
4b79fb6d39ac686f2f793,PodSandboxId:cdaaced76b5679b4562f428a51ab37be2ca4a12572247e3130f0014d63ea3d28,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718622690204612897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xbhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be37a6ec-2a49-4a56-b8a3-0da865edb05d,},Annotations:map[string]string{io.kubernetes.container.hash: caa2bf79,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d14481314a9356f5bb099d6096ca03ef8ec9cb15637652261b4359c32f1cbceb,PodSandboxId:653510d585e1e22c3324c23f750efa6a5723329d64904d9d5d69af3a21d78ceb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718622690245696256,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zv99k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2453fd4-894d-4212-bc48-1803e28ddba8,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9e113a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88dbcac1da73105615cd555b19ec3b51e43dc6fd5ee233f83d19dcaa41a1b5ee,PodSandboxId:c26f503c4b22bb1c768452d6f133e61204cc91bd4832a832e736bf582e184777,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718622690027763794,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99603
afdeee0e2b8645e4cb7c5a1ed41,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9f362fab2deb3901ab9bb43f8da39d89a6b6ff1f7413040ba94079dba2f359,PodSandboxId:ed543d06a893e980fd5b345a82719e29c73e8f4fad280b46dd7e7ada6719a6dd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718622690078883387,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ca5c8841cd25b2122df7e1cad8d883e,},Annotations:map[strin
g]string{io.kubernetes.container.hash: a022c9c1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a32b0b77a472f149702c6af5025c8bce824feadd95de75493b9a7c7da94010a,PodSandboxId:882a4867b4a9f2d5466eb06baf2539f28edbdaedfada0afe6ff83a0002c0b4a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718622689888922989,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a91621493b7895ffb468d74d39c887,},Annotations:map[
string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e9062f80f59bb01cd3d133ee66a6cf66b83b310d47589d9e9eeb07982548f74,PodSandboxId:50c5f620e07d97bc6144ac71edf1a67807c6842ce54f118f971940733bc57c79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718622689922770068,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21807c08d0f93f57866ad62dca0e176d,},Annotations:map[string]string{io.kuber
netes.container.hash: 8e9320c4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a562b9195d78591133b90abc121faa5dbf34feac5066f4f821669a5b8c27e85,PodSandboxId:32924073f320b5367b28757d06fe232b7af64ccf6539c044b32541c03c8b9cc7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718622197449787025,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-89r9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1a8712a-2ef7-4400-98c9-5cee97c0d721,},Annotations:map[string]string{io.kuberne
tes.container.hash: 85c5faa6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3628888540ea5d9ce507b92a3b2e929cf72c29f17271ad882b6d18ce4cf6328,PodSandboxId:20be829b9ffef66a57eb936abd30f0a0daa6277806fc399919edde5c9193aa94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718622049380205795,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xbhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be37a6ec-2a49-4a56-b8a3-0da865edb05d,},Annotations:map[string]string{io.kubernetes.container.hash: caa2bf79,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10061c1b3dd4f2865f83bf729b221fef3435324d6cef9ceb1a6631e0ccefa31c,PodSandboxId:54a9c95a1ef70b178265a9c78e9dbcddfb9f8cb7ddc312e0e324a4f449b6ebc9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718622049373025796,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-zv99k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2453fd4-894d-4212-bc48-1803e28ddba8,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9e113a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8852bc2fd7b618e61e270006b27e8557aaf8230a9278a60245e25a23732a83eb,PodSandboxId:78661140f722ccccbbef01859ed0a403a118690cd55dd92f4d2cf08d1c03af3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718622045688275770,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dd48x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1bd1d47-a8a5-47a5-820c-dd86f7ea7765,},Annotations:map[string]string{io.kubernetes.container.hash: 8b6be506,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecbb08a618aa76655e33c89e573535ed17f386cc522fcc35722eeb4ad859a1ad,PodSandboxId:7293d250b3e0dd840434d7afd153d17ac7842ec4f356edd9bac3f40f6603de1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1718622025701449939,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ca5c8841cd25b2122df7e1cad8d883e,},Annotations:map[string]string{io.kubernetes.container.hash: a022c9c1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60cc5a9cf66217b34591b28809211824808cb7da50dd0c7971be5bd514e3b328,PodSandboxId:cb4974ce47c357bdbcfd6dd322289bd64cf2cbb3c4a7ad3e2ee523444ebfc04e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1718622025592715823,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99603afdeee0e2b8645e4cb7c5a1ed41,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6905208a-68b6-46f9-a681-1d45b1f3727f name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:13:46 ha-064080 crio[3784]: time="2024-06-17 11:13:46.075417571Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6e734425-89f9-4290-86f9-ce9ef5bc24b0 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:13:46 ha-064080 crio[3784]: time="2024-06-17 11:13:46.075504609Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6e734425-89f9-4290-86f9-ce9ef5bc24b0 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:13:46 ha-064080 crio[3784]: time="2024-06-17 11:13:46.077140083Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9ec9d43d-5774-4b60-a969-ae3c4a77e6ed name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:13:46 ha-064080 crio[3784]: time="2024-06-17 11:13:46.077651543Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718622826077626019,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9ec9d43d-5774-4b60-a969-ae3c4a77e6ed name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:13:46 ha-064080 crio[3784]: time="2024-06-17 11:13:46.078142856Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9848231e-9965-4c5a-907f-a42de9dbbe71 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:13:46 ha-064080 crio[3784]: time="2024-06-17 11:13:46.078219083Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9848231e-9965-4c5a-907f-a42de9dbbe71 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:13:46 ha-064080 crio[3784]: time="2024-06-17 11:13:46.078625743Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e968a7b99037fcd74cf96493f10b9e4b77571018045daa12bfa9faff24036da,PodSandboxId:c45cf10a39aca992e1f5aa28059659a69562166e059624924f451c30bc5f471d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718622771112810041,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-48mb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67422049-6637-4ca3-8bd1-2b47a265829d,},Annotations:map[string]string{io.kubernetes.container.hash: 6d02cd67,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f0bf1df40b97298fbc6f99a56b7f3d186bd75d4b0e97bffa9597b8c140f0fd,PodSandboxId:b48fa28479a6b2939fe045cf9861144e401584f195777c0c07873597a11f30f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718622768117408502,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5646fca8-9ebc-47c1-b5ff-c87b0ed800d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75be2958,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea168e43c8f58b627333f8db1fcab727151d0170538dd365a0ff2c14a670bc63,PodSandboxId:50c5f620e07d97bc6144ac71edf1a67807c6842ce54f118f971940733bc57c79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718622732111313128,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21807c08d0f93f57866ad62dca0e176d,},Annotations:map[string]string{io.kubernetes.container.hash: 8e9320c4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cca27b47119ee9b81f6755dc162135ff2de0238a503b8d7d8cd565cc8ddcaa9,PodSandboxId:882a4867b4a9f2d5466eb06baf2539f28edbdaedfada0afe6ff83a0002c0b4a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718622730116363577,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a91621493b7895ffb468d74d39c887,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1de1cbf3c4abe51b334ea608a299a78e7432c29baa71f527ba9b0e80bc238e68,PodSandboxId:052b729b7698a17e4b1d8bc05ee4c1ad4bbaa5ecb7a38010e8567c72d58bd82b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718622723419643562,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-89r9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1a8712a-2ef7-4400-98c9-5cee97c0d721,},Annotations:map[string]string{io.kubernetes.container.hash: 85c5faa6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5831ea6ee0c390e7ce915655860ab50d35ab3dd5fecf6fafbe17b03a4020ba0a,PodSandboxId:b48fa28479a6b2939fe045cf9861144e401584f195777c0c07873597a11f30f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718622722115437741,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5646fca8-9ebc-47c1-b5ff-c87b0ed800d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75be2958,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a7f74758193741ac9405c51c090135a9aeeeaf838bb9952389a636257a739b1,PodSandboxId:ff17fb7580a6762426d9bec4e02efcd5b13bcef21bdb6fe8667300f333069ae3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718622705731418482,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 329ab0752894b263f1a7ed6bf158ec63,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:acee1942b213b3a339a1c92af2a70571f5c7f4b96158320c3bb8f3f74d86a0b2,PodSandboxId:e2f59a6f0a7e947778f7ada7cd976150f38cf96e757e387f28c1c17b68a66e6d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718622690616082861,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dd48x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1bd1d47-a8a5-47a5-820c-dd86f7ea7765,},Annotations:map[string]string{io.kubernetes.container.hash: 8b6be506,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af9cf34
4f6b524475b47fa29673012301a355ef88398883d01606aee8cc859c,PodSandboxId:c45cf10a39aca992e1f5aa28059659a69562166e059624924f451c30bc5f471d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718622690194438452,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-48mb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67422049-6637-4ca3-8bd1-2b47a265829d,},Annotations:map[string]string{io.kubernetes.container.hash: 6d02cd67,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35caf65c401c62e881ba25e31f1b5557a5c63db1d4d
4b79fb6d39ac686f2f793,PodSandboxId:cdaaced76b5679b4562f428a51ab37be2ca4a12572247e3130f0014d63ea3d28,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718622690204612897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xbhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be37a6ec-2a49-4a56-b8a3-0da865edb05d,},Annotations:map[string]string{io.kubernetes.container.hash: caa2bf79,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d14481314a9356f5bb099d6096ca03ef8ec9cb15637652261b4359c32f1cbceb,PodSandboxId:653510d585e1e22c3324c23f750efa6a5723329d64904d9d5d69af3a21d78ceb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718622690245696256,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zv99k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2453fd4-894d-4212-bc48-1803e28ddba8,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9e113a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88dbcac1da73105615cd555b19ec3b51e43dc6fd5ee233f83d19dcaa41a1b5ee,PodSandboxId:c26f503c4b22bb1c768452d6f133e61204cc91bd4832a832e736bf582e184777,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718622690027763794,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99603
afdeee0e2b8645e4cb7c5a1ed41,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9f362fab2deb3901ab9bb43f8da39d89a6b6ff1f7413040ba94079dba2f359,PodSandboxId:ed543d06a893e980fd5b345a82719e29c73e8f4fad280b46dd7e7ada6719a6dd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718622690078883387,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ca5c8841cd25b2122df7e1cad8d883e,},Annotations:map[strin
g]string{io.kubernetes.container.hash: a022c9c1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a32b0b77a472f149702c6af5025c8bce824feadd95de75493b9a7c7da94010a,PodSandboxId:882a4867b4a9f2d5466eb06baf2539f28edbdaedfada0afe6ff83a0002c0b4a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718622689888922989,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a91621493b7895ffb468d74d39c887,},Annotations:map[
string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e9062f80f59bb01cd3d133ee66a6cf66b83b310d47589d9e9eeb07982548f74,PodSandboxId:50c5f620e07d97bc6144ac71edf1a67807c6842ce54f118f971940733bc57c79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718622689922770068,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21807c08d0f93f57866ad62dca0e176d,},Annotations:map[string]string{io.kuber
netes.container.hash: 8e9320c4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a562b9195d78591133b90abc121faa5dbf34feac5066f4f821669a5b8c27e85,PodSandboxId:32924073f320b5367b28757d06fe232b7af64ccf6539c044b32541c03c8b9cc7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718622197449787025,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-89r9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1a8712a-2ef7-4400-98c9-5cee97c0d721,},Annotations:map[string]string{io.kuberne
tes.container.hash: 85c5faa6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3628888540ea5d9ce507b92a3b2e929cf72c29f17271ad882b6d18ce4cf6328,PodSandboxId:20be829b9ffef66a57eb936abd30f0a0daa6277806fc399919edde5c9193aa94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718622049380205795,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xbhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be37a6ec-2a49-4a56-b8a3-0da865edb05d,},Annotations:map[string]string{io.kubernetes.container.hash: caa2bf79,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10061c1b3dd4f2865f83bf729b221fef3435324d6cef9ceb1a6631e0ccefa31c,PodSandboxId:54a9c95a1ef70b178265a9c78e9dbcddfb9f8cb7ddc312e0e324a4f449b6ebc9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718622049373025796,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-zv99k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2453fd4-894d-4212-bc48-1803e28ddba8,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9e113a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8852bc2fd7b618e61e270006b27e8557aaf8230a9278a60245e25a23732a83eb,PodSandboxId:78661140f722ccccbbef01859ed0a403a118690cd55dd92f4d2cf08d1c03af3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718622045688275770,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dd48x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1bd1d47-a8a5-47a5-820c-dd86f7ea7765,},Annotations:map[string]string{io.kubernetes.container.hash: 8b6be506,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecbb08a618aa76655e33c89e573535ed17f386cc522fcc35722eeb4ad859a1ad,PodSandboxId:7293d250b3e0dd840434d7afd153d17ac7842ec4f356edd9bac3f40f6603de1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1718622025701449939,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ca5c8841cd25b2122df7e1cad8d883e,},Annotations:map[string]string{io.kubernetes.container.hash: a022c9c1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60cc5a9cf66217b34591b28809211824808cb7da50dd0c7971be5bd514e3b328,PodSandboxId:cb4974ce47c357bdbcfd6dd322289bd64cf2cbb3c4a7ad3e2ee523444ebfc04e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1718622025592715823,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99603afdeee0e2b8645e4cb7c5a1ed41,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9848231e-9965-4c5a-907f-a42de9dbbe71 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:13:46 ha-064080 crio[3784]: time="2024-06-17 11:13:46.100898770Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ced21e2-8a1c-455c-8146-b3320ebda31b name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 17 11:13:46 ha-064080 crio[3784]: time="2024-06-17 11:13:46.101326532Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:052b729b7698a17e4b1d8bc05ee4c1ad4bbaa5ecb7a38010e8567c72d58bd82b,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-89r9v,Uid:f1a8712a-2ef7-4400-98c9-5cee97c0d721,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718622723276220542,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-89r9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1a8712a-2ef7-4400-98c9-5cee97c0d721,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-17T11:03:15.971800747Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ff17fb7580a6762426d9bec4e02efcd5b13bcef21bdb6fe8667300f333069ae3,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-064080,Uid:329ab0752894b263f1a7ed6bf158ec63,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1718622705621336656,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 329ab0752894b263f1a7ed6bf158ec63,},Annotations:map[string]string{kubernetes.io/config.hash: 329ab0752894b263f1a7ed6bf158ec63,kubernetes.io/config.seen: 2024-06-17T11:11:23.521061767Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:653510d585e1e22c3324c23f750efa6a5723329d64904d9d5d69af3a21d78ceb,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-zv99k,Uid:c2453fd4-894d-4212-bc48-1803e28ddba8,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718622689619734089,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-zv99k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2453fd4-894d-4212-bc48-1803e28ddba8,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06
-17T11:00:48.793331041Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cdaaced76b5679b4562f428a51ab37be2ca4a12572247e3130f0014d63ea3d28,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-xbhnm,Uid:be37a6ec-2a49-4a56-b8a3-0da865edb05d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718622689613466488,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-xbhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be37a6ec-2a49-4a56-b8a3-0da865edb05d,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-17T11:00:48.782609053Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b48fa28479a6b2939fe045cf9861144e401584f195777c0c07873597a11f30f0,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:5646fca8-9ebc-47c1-b5ff-c87b0ed800d8,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718622689520540185,Labels:map[string]string
{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5646fca8-9ebc-47c1-b5ff-c87b0ed800d8,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/confi
g.seen: 2024-06-17T11:00:48.786777193Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c26f503c4b22bb1c768452d6f133e61204cc91bd4832a832e736bf582e184777,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-064080,Uid:99603afdeee0e2b8645e4cb7c5a1ed41,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718622689513404680,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99603afdeee0e2b8645e4cb7c5a1ed41,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 99603afdeee0e2b8645e4cb7c5a1ed41,kubernetes.io/config.seen: 2024-06-17T11:00:32.054012900Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c45cf10a39aca992e1f5aa28059659a69562166e059624924f451c30bc5f471d,Metadata:&PodSandboxMetadata{Name:kindnet-48mb7,Uid:67422049-6637-4ca3-8bd1-2b47a265829d,Namespace:kube-system,Attempt:1,},State:SANDBOX_
READY,CreatedAt:1718622689512334533,Labels:map[string]string{app: kindnet,controller-revision-hash: 84c66bd94d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-48mb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67422049-6637-4ca3-8bd1-2b47a265829d,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-17T11:00:45.172207542Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ed543d06a893e980fd5b345a82719e29c73e8f4fad280b46dd7e7ada6719a6dd,Metadata:&PodSandboxMetadata{Name:etcd-ha-064080,Uid:1ca5c8841cd25b2122df7e1cad8d883e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718622689511739243,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ca5c8841cd25b2122df7e1cad8d883e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.ad
vertise-client-urls: https://192.168.39.134:2379,kubernetes.io/config.hash: 1ca5c8841cd25b2122df7e1cad8d883e,kubernetes.io/config.seen: 2024-06-17T11:00:32.054007521Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:882a4867b4a9f2d5466eb06baf2539f28edbdaedfada0afe6ff83a0002c0b4a3,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-064080,Uid:28a91621493b7895ffb468d74d39c887,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718622689511615373,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a91621493b7895ffb468d74d39c887,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 28a91621493b7895ffb468d74d39c887,kubernetes.io/config.seen: 2024-06-17T11:00:32.054012001Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:50c5f620e07d97bc6144ac71edf1a67807c6842ce54f11
8f971940733bc57c79,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-064080,Uid:21807c08d0f93f57866ad62dca0e176d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718622689507680261,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21807c08d0f93f57866ad62dca0e176d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.134:8443,kubernetes.io/config.hash: 21807c08d0f93f57866ad62dca0e176d,kubernetes.io/config.seen: 2024-06-17T11:00:32.054010647Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e2f59a6f0a7e947778f7ada7cd976150f38cf96e757e387f28c1c17b68a66e6d,Metadata:&PodSandboxMetadata{Name:kube-proxy-dd48x,Uid:e1bd1d47-a8a5-47a5-820c-dd86f7ea7765,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718622689500958038,Labels:map[string]string{cont
roller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-dd48x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1bd1d47-a8a5-47a5-820c-dd86f7ea7765,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-17T11:00:45.184077289Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=4ced21e2-8a1c-455c-8146-b3320ebda31b name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 17 11:13:46 ha-064080 crio[3784]: time="2024-06-17 11:13:46.103573050Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=603c9f21-0d47-41a7-8058-8a3dcb38ff06 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:13:46 ha-064080 crio[3784]: time="2024-06-17 11:13:46.103662307Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=603c9f21-0d47-41a7-8058-8a3dcb38ff06 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:13:46 ha-064080 crio[3784]: time="2024-06-17 11:13:46.104810950Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e968a7b99037fcd74cf96493f10b9e4b77571018045daa12bfa9faff24036da,PodSandboxId:c45cf10a39aca992e1f5aa28059659a69562166e059624924f451c30bc5f471d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718622771112810041,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-48mb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67422049-6637-4ca3-8bd1-2b47a265829d,},Annotations:map[string]string{io.kubernetes.container.hash: 6d02cd67,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f0bf1df40b97298fbc6f99a56b7f3d186bd75d4b0e97bffa9597b8c140f0fd,PodSandboxId:b48fa28479a6b2939fe045cf9861144e401584f195777c0c07873597a11f30f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718622768117408502,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5646fca8-9ebc-47c1-b5ff-c87b0ed800d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75be2958,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea168e43c8f58b627333f8db1fcab727151d0170538dd365a0ff2c14a670bc63,PodSandboxId:50c5f620e07d97bc6144ac71edf1a67807c6842ce54f118f971940733bc57c79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718622732111313128,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21807c08d0f93f57866ad62dca0e176d,},Annotations:map[string]string{io.kubernetes.container.hash: 8e9320c4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cca27b47119ee9b81f6755dc162135ff2de0238a503b8d7d8cd565cc8ddcaa9,PodSandboxId:882a4867b4a9f2d5466eb06baf2539f28edbdaedfada0afe6ff83a0002c0b4a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718622730116363577,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a91621493b7895ffb468d74d39c887,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1de1cbf3c4abe51b334ea608a299a78e7432c29baa71f527ba9b0e80bc238e68,PodSandboxId:052b729b7698a17e4b1d8bc05ee4c1ad4bbaa5ecb7a38010e8567c72d58bd82b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718622723419643562,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-89r9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1a8712a-2ef7-4400-98c9-5cee97c0d721,},Annotations:map[string]string{io.kubernetes.container.hash: 85c5faa6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a7f74758193741ac9405c51c090135a9aeeeaf838bb9952389a636257a739b1,PodSandboxId:ff17fb7580a6762426d9bec4e02efcd5b13bcef21bdb6fe8667300f333069ae3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718622705731418482,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 329ab0752894b263f1a7ed6bf158ec63,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:acee1942b213b3a339a1c92af2a70571f5c7f4b96158320c3bb8f3f74d86a0b2,PodSandboxId:e2f59a6f0a7e947778f7ada7cd976150f38cf96e757e387f28c1c17b68a66e6d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718622690616082861,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dd48x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1bd1d47-a8a5-47a5-820c-dd86f7ea7765,},Annotations:map[string]string{io.kubernetes.container.hash: 8b6be506,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:35caf65c401c62e881ba25e31f1b5557a5c63db1d4d4b79fb6d39ac686f2f793,PodSandboxId:cdaaced76b5679b4562f428a51ab37be2ca4a12572247e3130f0014d63ea3d28,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718622690204612897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xbhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be37a6ec-2a49-4a56-b8a3-0da865edb05d,},Annotations:map[string]string{io.kubernetes.container.hash: caa2bf79,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d14481314a9356f5bb099d6096ca03ef8ec9cb15637652261b4359c32f1cbceb,PodSandboxId:653510d585e1e22c3324c23f750efa6a5723329d64904d9d5d69af3a21d78ceb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718622690245696256,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zv99k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2453fd4-894d-4212-bc48-1803e28ddba8,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9e113a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88dbcac1da73105615cd555b19ec3b51e43dc6fd5ee233f83d19dcaa41a1b5ee,PodSandboxId:c26f503c4b22bb1c768452d6f133e61204cc91bd4832a832e736bf582e184777,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718622690027763794,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-064080,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 99603afdeee0e2b8645e4cb7c5a1ed41,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9f362fab2deb3901ab9bb43f8da39d89a6b6ff1f7413040ba94079dba2f359,PodSandboxId:ed543d06a893e980fd5b345a82719e29c73e8f4fad280b46dd7e7ada6719a6dd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718622690078883387,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ca5c884
1cd25b2122df7e1cad8d883e,},Annotations:map[string]string{io.kubernetes.container.hash: a022c9c1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=603c9f21-0d47-41a7-8058-8a3dcb38ff06 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:13:46 ha-064080 crio[3784]: time="2024-06-17 11:13:46.136432673Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f490e9a1-0884-4c31-a344-c41fc46aead7 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:13:46 ha-064080 crio[3784]: time="2024-06-17 11:13:46.136496894Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f490e9a1-0884-4c31-a344-c41fc46aead7 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:13:46 ha-064080 crio[3784]: time="2024-06-17 11:13:46.137560425Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=889cd4d2-18fe-40f3-9a79-41e438cfc968 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:13:46 ha-064080 crio[3784]: time="2024-06-17 11:13:46.138078707Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718622826138055558,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=889cd4d2-18fe-40f3-9a79-41e438cfc968 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:13:46 ha-064080 crio[3784]: time="2024-06-17 11:13:46.138599821Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d38e8f4c-1bfe-4e3f-bb3c-eb62e0e8c2c5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:13:46 ha-064080 crio[3784]: time="2024-06-17 11:13:46.138652336Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d38e8f4c-1bfe-4e3f-bb3c-eb62e0e8c2c5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:13:46 ha-064080 crio[3784]: time="2024-06-17 11:13:46.139080565Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e968a7b99037fcd74cf96493f10b9e4b77571018045daa12bfa9faff24036da,PodSandboxId:c45cf10a39aca992e1f5aa28059659a69562166e059624924f451c30bc5f471d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718622771112810041,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-48mb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67422049-6637-4ca3-8bd1-2b47a265829d,},Annotations:map[string]string{io.kubernetes.container.hash: 6d02cd67,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f0bf1df40b97298fbc6f99a56b7f3d186bd75d4b0e97bffa9597b8c140f0fd,PodSandboxId:b48fa28479a6b2939fe045cf9861144e401584f195777c0c07873597a11f30f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718622768117408502,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5646fca8-9ebc-47c1-b5ff-c87b0ed800d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75be2958,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea168e43c8f58b627333f8db1fcab727151d0170538dd365a0ff2c14a670bc63,PodSandboxId:50c5f620e07d97bc6144ac71edf1a67807c6842ce54f118f971940733bc57c79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718622732111313128,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21807c08d0f93f57866ad62dca0e176d,},Annotations:map[string]string{io.kubernetes.container.hash: 8e9320c4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cca27b47119ee9b81f6755dc162135ff2de0238a503b8d7d8cd565cc8ddcaa9,PodSandboxId:882a4867b4a9f2d5466eb06baf2539f28edbdaedfada0afe6ff83a0002c0b4a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718622730116363577,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a91621493b7895ffb468d74d39c887,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1de1cbf3c4abe51b334ea608a299a78e7432c29baa71f527ba9b0e80bc238e68,PodSandboxId:052b729b7698a17e4b1d8bc05ee4c1ad4bbaa5ecb7a38010e8567c72d58bd82b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718622723419643562,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-89r9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1a8712a-2ef7-4400-98c9-5cee97c0d721,},Annotations:map[string]string{io.kubernetes.container.hash: 85c5faa6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5831ea6ee0c390e7ce915655860ab50d35ab3dd5fecf6fafbe17b03a4020ba0a,PodSandboxId:b48fa28479a6b2939fe045cf9861144e401584f195777c0c07873597a11f30f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718622722115437741,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5646fca8-9ebc-47c1-b5ff-c87b0ed800d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75be2958,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a7f74758193741ac9405c51c090135a9aeeeaf838bb9952389a636257a739b1,PodSandboxId:ff17fb7580a6762426d9bec4e02efcd5b13bcef21bdb6fe8667300f333069ae3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718622705731418482,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 329ab0752894b263f1a7ed6bf158ec63,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:acee1942b213b3a339a1c92af2a70571f5c7f4b96158320c3bb8f3f74d86a0b2,PodSandboxId:e2f59a6f0a7e947778f7ada7cd976150f38cf96e757e387f28c1c17b68a66e6d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718622690616082861,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dd48x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1bd1d47-a8a5-47a5-820c-dd86f7ea7765,},Annotations:map[string]string{io.kubernetes.container.hash: 8b6be506,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af9cf34
4f6b524475b47fa29673012301a355ef88398883d01606aee8cc859c,PodSandboxId:c45cf10a39aca992e1f5aa28059659a69562166e059624924f451c30bc5f471d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718622690194438452,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-48mb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67422049-6637-4ca3-8bd1-2b47a265829d,},Annotations:map[string]string{io.kubernetes.container.hash: 6d02cd67,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35caf65c401c62e881ba25e31f1b5557a5c63db1d4d
4b79fb6d39ac686f2f793,PodSandboxId:cdaaced76b5679b4562f428a51ab37be2ca4a12572247e3130f0014d63ea3d28,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718622690204612897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xbhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be37a6ec-2a49-4a56-b8a3-0da865edb05d,},Annotations:map[string]string{io.kubernetes.container.hash: caa2bf79,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d14481314a9356f5bb099d6096ca03ef8ec9cb15637652261b4359c32f1cbceb,PodSandboxId:653510d585e1e22c3324c23f750efa6a5723329d64904d9d5d69af3a21d78ceb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718622690245696256,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zv99k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2453fd4-894d-4212-bc48-1803e28ddba8,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9e113a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88dbcac1da73105615cd555b19ec3b51e43dc6fd5ee233f83d19dcaa41a1b5ee,PodSandboxId:c26f503c4b22bb1c768452d6f133e61204cc91bd4832a832e736bf582e184777,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718622690027763794,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99603
afdeee0e2b8645e4cb7c5a1ed41,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9f362fab2deb3901ab9bb43f8da39d89a6b6ff1f7413040ba94079dba2f359,PodSandboxId:ed543d06a893e980fd5b345a82719e29c73e8f4fad280b46dd7e7ada6719a6dd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718622690078883387,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ca5c8841cd25b2122df7e1cad8d883e,},Annotations:map[strin
g]string{io.kubernetes.container.hash: a022c9c1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a32b0b77a472f149702c6af5025c8bce824feadd95de75493b9a7c7da94010a,PodSandboxId:882a4867b4a9f2d5466eb06baf2539f28edbdaedfada0afe6ff83a0002c0b4a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718622689888922989,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a91621493b7895ffb468d74d39c887,},Annotations:map[
string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e9062f80f59bb01cd3d133ee66a6cf66b83b310d47589d9e9eeb07982548f74,PodSandboxId:50c5f620e07d97bc6144ac71edf1a67807c6842ce54f118f971940733bc57c79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718622689922770068,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21807c08d0f93f57866ad62dca0e176d,},Annotations:map[string]string{io.kuber
netes.container.hash: 8e9320c4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a562b9195d78591133b90abc121faa5dbf34feac5066f4f821669a5b8c27e85,PodSandboxId:32924073f320b5367b28757d06fe232b7af64ccf6539c044b32541c03c8b9cc7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718622197449787025,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-89r9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1a8712a-2ef7-4400-98c9-5cee97c0d721,},Annotations:map[string]string{io.kuberne
tes.container.hash: 85c5faa6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3628888540ea5d9ce507b92a3b2e929cf72c29f17271ad882b6d18ce4cf6328,PodSandboxId:20be829b9ffef66a57eb936abd30f0a0daa6277806fc399919edde5c9193aa94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718622049380205795,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xbhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be37a6ec-2a49-4a56-b8a3-0da865edb05d,},Annotations:map[string]string{io.kubernetes.container.hash: caa2bf79,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10061c1b3dd4f2865f83bf729b221fef3435324d6cef9ceb1a6631e0ccefa31c,PodSandboxId:54a9c95a1ef70b178265a9c78e9dbcddfb9f8cb7ddc312e0e324a4f449b6ebc9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718622049373025796,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-zv99k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2453fd4-894d-4212-bc48-1803e28ddba8,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9e113a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8852bc2fd7b618e61e270006b27e8557aaf8230a9278a60245e25a23732a83eb,PodSandboxId:78661140f722ccccbbef01859ed0a403a118690cd55dd92f4d2cf08d1c03af3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718622045688275770,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dd48x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1bd1d47-a8a5-47a5-820c-dd86f7ea7765,},Annotations:map[string]string{io.kubernetes.container.hash: 8b6be506,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecbb08a618aa76655e33c89e573535ed17f386cc522fcc35722eeb4ad859a1ad,PodSandboxId:7293d250b3e0dd840434d7afd153d17ac7842ec4f356edd9bac3f40f6603de1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1718622025701449939,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ca5c8841cd25b2122df7e1cad8d883e,},Annotations:map[string]string{io.kubernetes.container.hash: a022c9c1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60cc5a9cf66217b34591b28809211824808cb7da50dd0c7971be5bd514e3b328,PodSandboxId:cb4974ce47c357bdbcfd6dd322289bd64cf2cbb3c4a7ad3e2ee523444ebfc04e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1718622025592715823,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99603afdeee0e2b8645e4cb7c5a1ed41,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d38e8f4c-1bfe-4e3f-bb3c-eb62e0e8c2c5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	7e968a7b99037       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      55 seconds ago       Running             kindnet-cni               3                   c45cf10a39aca       kindnet-48mb7
	38f0bf1df40b9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      58 seconds ago       Running             storage-provisioner       4                   b48fa28479a6b       storage-provisioner
	ea168e43c8f58       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      About a minute ago   Running             kube-apiserver            3                   50c5f620e07d9       kube-apiserver-ha-064080
	9cca27b47119e       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      About a minute ago   Running             kube-controller-manager   2                   882a4867b4a9f       kube-controller-manager-ha-064080
	1de1cbf3c4abe       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   052b729b7698a       busybox-fc5497c4f-89r9v
	5831ea6ee0c39       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   b48fa28479a6b       storage-provisioner
	8a7f747581937       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   ff17fb7580a67       kube-vip-ha-064080
	acee1942b213b       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      2 minutes ago        Running             kube-proxy                1                   e2f59a6f0a7e9       kube-proxy-dd48x
	d14481314a935       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   653510d585e1e       coredns-7db6d8ff4d-zv99k
	35caf65c401c6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   cdaaced76b567       coredns-7db6d8ff4d-xbhnm
	4af9cf344f6b5       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      2 minutes ago        Exited              kindnet-cni               2                   c45cf10a39aca       kindnet-48mb7
	4c9f362fab2de       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   ed543d06a893e       etcd-ha-064080
	88dbcac1da731       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      2 minutes ago        Running             kube-scheduler            1                   c26f503c4b22b       kube-scheduler-ha-064080
	7e9062f80f59b       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      2 minutes ago        Exited              kube-apiserver            2                   50c5f620e07d9       kube-apiserver-ha-064080
	9a32b0b77a472       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      2 minutes ago        Exited              kube-controller-manager   1                   882a4867b4a9f       kube-controller-manager-ha-064080
	1a562b9195d78       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   32924073f320b       busybox-fc5497c4f-89r9v
	c3628888540ea       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      12 minutes ago       Exited              coredns                   0                   20be829b9ffef       coredns-7db6d8ff4d-xbhnm
	10061c1b3dd4f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      12 minutes ago       Exited              coredns                   0                   54a9c95a1ef70       coredns-7db6d8ff4d-zv99k
	8852bc2fd7b61       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      13 minutes ago       Exited              kube-proxy                0                   78661140f722c       kube-proxy-dd48x
	ecbb08a618aa7       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago       Exited              etcd                      0                   7293d250b3e0d       etcd-ha-064080
	60cc5a9cf6621       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      13 minutes ago       Exited              kube-scheduler            0                   cb4974ce47c35       kube-scheduler-ha-064080
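For reference only: the container status table above is the kind of listing crictl produces on the node (cri-o is the runtime in this run). Assuming crictl is available inside the ha-064080 VM and using the profile's ssh helper, a similar view, and the raw logs of any listed container (IDs taken from the table), could be pulled manually; this is a hedged sketch, not part of the captured output:

  out/minikube-linux-amd64 -p ha-064080 ssh "sudo crictl ps -a"
  out/minikube-linux-amd64 -p ha-064080 ssh "sudo crictl logs --tail 50 ea168e43c8f58"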
	
	
	==> coredns [10061c1b3dd4f2865f83bf729b221fef3435324d6cef9ceb1a6631e0ccefa31c] <==
	[INFO] 10.244.1.2:47475 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117378s
	[INFO] 10.244.2.2:50417 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002227444s
	[INFO] 10.244.2.2:60625 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001284466s
	[INFO] 10.244.2.2:49631 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000063512s
	[INFO] 10.244.2.2:60462 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075059s
	[INFO] 10.244.2.2:55188 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061001s
	[INFO] 10.244.0.4:44285 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114934s
	[INFO] 10.244.0.4:41654 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082437s
	[INFO] 10.244.1.2:41564 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167707s
	[INFO] 10.244.1.2:48527 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000199996s
	[INFO] 10.244.1.2:54645 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000101253s
	[INFO] 10.244.1.2:46137 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000161774s
	[INFO] 10.244.2.2:47749 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123256s
	[INFO] 10.244.2.2:44797 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000155611s
	[INFO] 10.244.0.4:57514 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00013406s
	[INFO] 10.244.1.2:57226 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001349s
	[INFO] 10.244.1.2:38456 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000150623s
	[INFO] 10.244.1.2:34565 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000206574s
	[INFO] 10.244.2.2:55350 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000181312s
	[INFO] 10.244.2.2:54665 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000284418s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [35caf65c401c62e881ba25e31f1b5557a5c63db1d4d4b79fb6d39ac686f2f793] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:46720->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:46720->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:46742->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:46742->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [c3628888540ea5d9ce507b92a3b2e929cf72c29f17271ad882b6d18ce4cf6328] <==
	[INFO] 10.244.1.2:59121 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005103552s
	[INFO] 10.244.2.2:33690 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000260726s
	[INFO] 10.244.2.2:40819 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103621s
	[INFO] 10.244.2.2:47624 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000173244s
	[INFO] 10.244.0.4:45570 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101008s
	[INFO] 10.244.0.4:38238 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096216s
	[INFO] 10.244.2.2:47491 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144426s
	[INFO] 10.244.2.2:57595 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010924s
	[INFO] 10.244.0.4:37645 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011472s
	[INFO] 10.244.0.4:40937 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000173334s
	[INFO] 10.244.0.4:38240 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00010406s
	[INFO] 10.244.1.2:51662 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000104731s
	[INFO] 10.244.2.2:33365 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000139748s
	[INFO] 10.244.2.2:44022 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000178435s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1822&timeout=5m20s&timeoutSeconds=320&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1825&timeout=7m54s&timeoutSeconds=474&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1826&timeout=8m38s&timeoutSeconds=518&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1825": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1825": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1822": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1822": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1826": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1826": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d14481314a9356f5bb099d6096ca03ef8ec9cb15637652261b4359c32f1cbceb] <==
	Trace[1154199320]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (11:11:49.136)
	Trace[1154199320]: [10.001056316s] [10.001056316s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1889676882]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jun-2024 11:11:39.574) (total time: 10001ms):
	Trace[1889676882]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:11:49.575)
	Trace[1889676882]: [10.001884902s] [10.001884902s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:54900->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:54900->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
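The coredns excerpts above are per-container logs: the Running entries correspond to what kubectl logs returns for each coredns pod today, and the Exited entries to the previous container instance. Assuming the kubeconfig context is named after the ha-064080 profile (minikube's default, but an assumption here), equivalent output could be fetched with:

  kubectl --context ha-064080 -n kube-system logs coredns-7db6d8ff4d-xbhnm
  kubectl --context ha-064080 -n kube-system logs coredns-7db6d8ff4d-zv99k --previous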
	
	
	==> describe nodes <==
	Name:               ha-064080
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-064080
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6
	                    minikube.k8s.io/name=ha-064080
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_17T11_00_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jun 2024 11:00:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-064080
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jun 2024 11:13:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jun 2024 11:12:16 +0000   Mon, 17 Jun 2024 11:00:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jun 2024 11:12:16 +0000   Mon, 17 Jun 2024 11:00:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jun 2024 11:12:16 +0000   Mon, 17 Jun 2024 11:00:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jun 2024 11:12:16 +0000   Mon, 17 Jun 2024 11:00:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.134
	  Hostname:    ha-064080
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f526834e1094a1798c2f7e5de014d6a
	  System UUID:                6f526834-e109-4a17-98c2-f7e5de014d6a
	  Boot ID:                    7c18f343-1055-464d-948c-cec47020ebb1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-89r9v              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-xbhnm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-zv99k             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-064080                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-48mb7                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-064080             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-064080    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-dd48x                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-064080             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-064080                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 93s    kube-proxy       
	  Normal   Starting                 13m    kube-proxy       
	  Normal   NodeHasSufficientMemory  13m    kubelet          Node ha-064080 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  13m    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 13m    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  13m    kubelet          Node ha-064080 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m    kubelet          Node ha-064080 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m    kubelet          Node ha-064080 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m    node-controller  Node ha-064080 event: Registered Node ha-064080 in Controller
	  Normal   NodeReady                12m    kubelet          Node ha-064080 status is now: NodeReady
	  Normal   RegisteredNode           11m    node-controller  Node ha-064080 event: Registered Node ha-064080 in Controller
	  Normal   RegisteredNode           10m    node-controller  Node ha-064080 event: Registered Node ha-064080 in Controller
	  Warning  ContainerGCFailed        3m14s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           86s    node-controller  Node ha-064080 event: Registered Node ha-064080 in Controller
	  Normal   RegisteredNode           80s    node-controller  Node ha-064080 event: Registered Node ha-064080 in Controller
	  Normal   RegisteredNode           28s    node-controller  Node ha-064080 event: Registered Node ha-064080 in Controller
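The node descriptions in this section follow the standard kubectl describe-node layout. Assuming the kubeconfig context is named after the ha-064080 profile (an assumption, not shown in the log itself), the same information could be regenerated after the fact with:

  kubectl --context ha-064080 describe node ha-064080
  kubectl --context ha-064080 get nodes -o wide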
	
	
	Name:               ha-064080-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-064080-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6
	                    minikube.k8s.io/name=ha-064080
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_17T11_01_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jun 2024 11:01:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-064080-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jun 2024 11:13:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jun 2024 11:12:56 +0000   Mon, 17 Jun 2024 11:12:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jun 2024 11:12:56 +0000   Mon, 17 Jun 2024 11:12:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jun 2024 11:12:56 +0000   Mon, 17 Jun 2024 11:12:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jun 2024 11:12:56 +0000   Mon, 17 Jun 2024 11:12:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.104
	  Hostname:    ha-064080-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d22246006bf04dab820bccd210120c30
	  System UUID:                d2224600-6bf0-4dab-820b-ccd210120c30
	  Boot ID:                    aa828dd5-a30a-4797-914e-527263ce7397
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-gf9j7                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-064080-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-7cqp4                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-064080-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-064080-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-l55dg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-064080-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-064080-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 77s                  kube-proxy       
	  Normal  Starting                 12m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)    kubelet          Node ha-064080-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)    kubelet          Node ha-064080-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)    kubelet          Node ha-064080-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                  node-controller  Node ha-064080-m02 event: Registered Node ha-064080-m02 in Controller
	  Normal  RegisteredNode           11m                  node-controller  Node ha-064080-m02 event: Registered Node ha-064080-m02 in Controller
	  Normal  RegisteredNode           10m                  node-controller  Node ha-064080-m02 event: Registered Node ha-064080-m02 in Controller
	  Normal  NodeNotReady             8m47s                node-controller  Node ha-064080-m02 status is now: NodeNotReady
	  Normal  Starting                 119s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  119s (x8 over 119s)  kubelet          Node ha-064080-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s (x8 over 119s)  kubelet          Node ha-064080-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s (x7 over 119s)  kubelet          Node ha-064080-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  119s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           86s                  node-controller  Node ha-064080-m02 event: Registered Node ha-064080-m02 in Controller
	  Normal  RegisteredNode           80s                  node-controller  Node ha-064080-m02 event: Registered Node ha-064080-m02 in Controller
	  Normal  RegisteredNode           28s                  node-controller  Node ha-064080-m02 event: Registered Node ha-064080-m02 in Controller
	
	
	Name:               ha-064080-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-064080-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6
	                    minikube.k8s.io/name=ha-064080
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_17T11_02_54_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jun 2024 11:02:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-064080-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jun 2024 11:13:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jun 2024 11:13:20 +0000   Mon, 17 Jun 2024 11:02:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jun 2024 11:13:20 +0000   Mon, 17 Jun 2024 11:02:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jun 2024 11:13:20 +0000   Mon, 17 Jun 2024 11:02:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jun 2024 11:13:20 +0000   Mon, 17 Jun 2024 11:02:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.168
	  Hostname:    ha-064080-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 28a9e43ded0d41f5b6e29c37565b7ecd
	  System UUID:                28a9e43d-ed0d-41f5-b6e2-9c37565b7ecd
	  Boot ID:                    6398f366-6e6d-4ff2-96d3-ecc11ffbecb5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wbcxx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-064080-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-5mg7w                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-064080-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-064080-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-gsph4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-064080-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-064080-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 39s                kube-proxy       
	  Normal   RegisteredNode           10m                node-controller  Node ha-064080-m03 event: Registered Node ha-064080-m03 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-064080-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-064080-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-064080-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-064080-m03 event: Registered Node ha-064080-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-064080-m03 event: Registered Node ha-064080-m03 in Controller
	  Normal   RegisteredNode           86s                node-controller  Node ha-064080-m03 event: Registered Node ha-064080-m03 in Controller
	  Normal   RegisteredNode           80s                node-controller  Node ha-064080-m03 event: Registered Node ha-064080-m03 in Controller
	  Normal   Starting                 56s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  56s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  56s                kubelet          Node ha-064080-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    56s                kubelet          Node ha-064080-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     56s                kubelet          Node ha-064080-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 56s                kubelet          Node ha-064080-m03 has been rebooted, boot id: 6398f366-6e6d-4ff2-96d3-ecc11ffbecb5
	  Normal   RegisteredNode           28s                node-controller  Node ha-064080-m03 event: Registered Node ha-064080-m03 in Controller
	
	
	Name:               ha-064080-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-064080-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6
	                    minikube.k8s.io/name=ha-064080
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_17T11_03_52_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jun 2024 11:03:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-064080-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jun 2024 11:13:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jun 2024 11:13:37 +0000   Mon, 17 Jun 2024 11:13:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jun 2024 11:13:37 +0000   Mon, 17 Jun 2024 11:13:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jun 2024 11:13:37 +0000   Mon, 17 Jun 2024 11:13:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jun 2024 11:13:37 +0000   Mon, 17 Jun 2024 11:13:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.167
	  Hostname:    ha-064080-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 33fd5c3b11ee44e78fa203be011bc171
	  System UUID:                33fd5c3b-11ee-44e7-8fa2-03be011bc171
	  Boot ID:                    925277b4-c368-4ccd-aabf-2f2c6c24a726
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-pn664       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m55s
	  kube-system                 kube-proxy-7t8b9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5s                     kube-proxy       
	  Normal   Starting                 9m50s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  9m55s (x2 over 9m55s)  kubelet          Node ha-064080-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m55s (x2 over 9m55s)  kubelet          Node ha-064080-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m55s (x2 over 9m55s)  kubelet          Node ha-064080-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           9m52s                  node-controller  Node ha-064080-m04 event: Registered Node ha-064080-m04 in Controller
	  Normal   RegisteredNode           9m52s                  node-controller  Node ha-064080-m04 event: Registered Node ha-064080-m04 in Controller
	  Normal   RegisteredNode           9m51s                  node-controller  Node ha-064080-m04 event: Registered Node ha-064080-m04 in Controller
	  Normal   NodeReady                9m48s                  kubelet          Node ha-064080-m04 status is now: NodeReady
	  Normal   RegisteredNode           86s                    node-controller  Node ha-064080-m04 event: Registered Node ha-064080-m04 in Controller
	  Normal   RegisteredNode           80s                    node-controller  Node ha-064080-m04 event: Registered Node ha-064080-m04 in Controller
	  Normal   NodeNotReady             46s                    node-controller  Node ha-064080-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           28s                    node-controller  Node ha-064080-m04 event: Registered Node ha-064080-m04 in Controller
	  Normal   Starting                 9s                     kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                     kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 9s                     kubelet          Node ha-064080-m04 has been rebooted, boot id: 925277b4-c368-4ccd-aabf-2f2c6c24a726
	  Normal   NodeHasSufficientMemory  9s (x2 over 9s)        kubelet          Node ha-064080-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x2 over 9s)        kubelet          Node ha-064080-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x2 over 9s)        kubelet          Node ha-064080-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                9s                     kubelet          Node ha-064080-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.883696] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.059639] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052376] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.200032] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.124990] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.278932] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.114561] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +4.787636] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.060887] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.333258] systemd-fstab-generator[1364]: Ignoring "noauto" option for root device
	[  +0.080001] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.043226] kauditd_printk_skb: 18 callbacks suppressed
	[ +14.410422] kauditd_printk_skb: 72 callbacks suppressed
	[Jun17 11:11] systemd-fstab-generator[3703]: Ignoring "noauto" option for root device
	[  +0.153522] systemd-fstab-generator[3715]: Ignoring "noauto" option for root device
	[  +0.191265] systemd-fstab-generator[3729]: Ignoring "noauto" option for root device
	[  +0.173329] systemd-fstab-generator[3741]: Ignoring "noauto" option for root device
	[  +0.291105] systemd-fstab-generator[3769]: Ignoring "noauto" option for root device
	[  +2.176722] systemd-fstab-generator[3870]: Ignoring "noauto" option for root device
	[  +5.948906] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.359694] kauditd_printk_skb: 85 callbacks suppressed
	[Jun17 11:12] kauditd_printk_skb: 6 callbacks suppressed
	[  +9.069097] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [4c9f362fab2deb3901ab9bb43f8da39d89a6b6ff1f7413040ba94079dba2f359] <==
	{"level":"warn","ts":"2024-06-17T11:12:48.866139Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.168:2380/version","remote-member-id":"426ee77ba77ea10d","error":"Get \"https://192.168.39.168:2380/version\": dial tcp 192.168.39.168:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-17T11:12:48.866207Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"426ee77ba77ea10d","error":"Get \"https://192.168.39.168:2380/version\": dial tcp 192.168.39.168:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-17T11:12:51.063992Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"426ee77ba77ea10d","rtt":"0s","error":"dial tcp 192.168.39.168:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-17T11:12:51.064101Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"426ee77ba77ea10d","rtt":"0s","error":"dial tcp 192.168.39.168:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-17T11:12:52.867815Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.168:2380/version","remote-member-id":"426ee77ba77ea10d","error":"Get \"https://192.168.39.168:2380/version\": dial tcp 192.168.39.168:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-17T11:12:52.86799Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"426ee77ba77ea10d","error":"Get \"https://192.168.39.168:2380/version\": dial tcp 192.168.39.168:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-17T11:12:56.065169Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"426ee77ba77ea10d","rtt":"0s","error":"dial tcp 192.168.39.168:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-17T11:12:56.065179Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"426ee77ba77ea10d","rtt":"0s","error":"dial tcp 192.168.39.168:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-17T11:12:56.86949Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.168:2380/version","remote-member-id":"426ee77ba77ea10d","error":"Get \"https://192.168.39.168:2380/version\": dial tcp 192.168.39.168:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-17T11:12:56.869548Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"426ee77ba77ea10d","error":"Get \"https://192.168.39.168:2380/version\": dial tcp 192.168.39.168:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-17T11:13:00.871207Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.168:2380/version","remote-member-id":"426ee77ba77ea10d","error":"Get \"https://192.168.39.168:2380/version\": dial tcp 192.168.39.168:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-17T11:13:00.871327Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"426ee77ba77ea10d","error":"Get \"https://192.168.39.168:2380/version\": dial tcp 192.168.39.168:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-17T11:13:01.065421Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"426ee77ba77ea10d","rtt":"0s","error":"dial tcp 192.168.39.168:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-17T11:13:01.065549Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"426ee77ba77ea10d","rtt":"0s","error":"dial tcp 192.168.39.168:2380: connect: connection refused"}
	{"level":"info","ts":"2024-06-17T11:13:01.876035Z","caller":"traceutil/trace.go:171","msg":"trace[313504688] transaction","detail":"{read_only:false; response_revision:2273; number_of_response:1; }","duration":"105.798116ms","start":"2024-06-17T11:13:01.770206Z","end":"2024-06-17T11:13:01.876004Z","steps":["trace[313504688] 'process raft request'  (duration: 105.578985ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T11:13:02.651376Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"426ee77ba77ea10d"}
	{"level":"info","ts":"2024-06-17T11:13:02.676494Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"52887eb9b9b3603c","remote-peer-id":"426ee77ba77ea10d"}
	{"level":"info","ts":"2024-06-17T11:13:02.677341Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"52887eb9b9b3603c","to":"426ee77ba77ea10d","stream-type":"stream Message"}
	{"level":"info","ts":"2024-06-17T11:13:02.677459Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"52887eb9b9b3603c","remote-peer-id":"426ee77ba77ea10d"}
	{"level":"info","ts":"2024-06-17T11:13:02.678649Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"52887eb9b9b3603c","to":"426ee77ba77ea10d","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-06-17T11:13:02.678693Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"52887eb9b9b3603c","remote-peer-id":"426ee77ba77ea10d"}
	{"level":"info","ts":"2024-06-17T11:13:02.681653Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"52887eb9b9b3603c","remote-peer-id":"426ee77ba77ea10d"}
	{"level":"warn","ts":"2024-06-17T11:13:02.688773Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.168:33224","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-06-17T11:13:05.610811Z","caller":"traceutil/trace.go:171","msg":"trace[1782809441] transaction","detail":"{read_only:false; response_revision:2289; number_of_response:1; }","duration":"167.23287ms","start":"2024-06-17T11:13:05.443565Z","end":"2024-06-17T11:13:05.610798Z","steps":["trace[1782809441] 'process raft request'  (duration: 165.20968ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T11:13:07.842053Z","caller":"traceutil/trace.go:171","msg":"trace[395437298] transaction","detail":"{read_only:false; response_revision:2308; number_of_response:1; }","duration":"118.759887ms","start":"2024-06-17T11:13:07.723274Z","end":"2024-06-17T11:13:07.842034Z","steps":["trace[395437298] 'process raft request'  (duration: 115.11517ms)"],"step_count":1}
	
	
	==> etcd [ecbb08a618aa76655e33c89e573535ed17f386cc522fcc35722eeb4ad859a1ad] <==
	{"level":"warn","ts":"2024-06-17T11:09:49.251717Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-17T11:09:48.548483Z","time spent":"703.231531ms","remote":"127.0.0.1:45142","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":0,"response size":0,"request content":"key:\"/registry/pods/\" range_end:\"/registry/pods0\" limit:10000 "}
	2024/06/17 11:09:49 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-06-17T11:09:49.251732Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-17T11:09:48.557516Z","time spent":"694.208167ms","remote":"127.0.0.1:52168","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":0,"response size":0,"request content":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" limit:10000 "}
	2024/06/17 11:09:49 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-06-17T11:09:49.251902Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-17T11:09:48.375101Z","time spent":"876.279984ms","remote":"127.0.0.1:52074","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":0,"response size":0,"request content":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" limit:500 "}
	2024/06/17 11:09:49 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-06-17T11:09:49.311373Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"52887eb9b9b3603c","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-06-17T11:09:49.311593Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"8bdd85bfd034bac1"}
	{"level":"info","ts":"2024-06-17T11:09:49.31163Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8bdd85bfd034bac1"}
	{"level":"info","ts":"2024-06-17T11:09:49.311652Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8bdd85bfd034bac1"}
	{"level":"info","ts":"2024-06-17T11:09:49.311755Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1"}
	{"level":"info","ts":"2024-06-17T11:09:49.311802Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1"}
	{"level":"info","ts":"2024-06-17T11:09:49.311832Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1"}
	{"level":"info","ts":"2024-06-17T11:09:49.311917Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"8bdd85bfd034bac1"}
	{"level":"info","ts":"2024-06-17T11:09:49.311924Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"426ee77ba77ea10d"}
	{"level":"info","ts":"2024-06-17T11:09:49.311938Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"426ee77ba77ea10d"}
	{"level":"info","ts":"2024-06-17T11:09:49.311972Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"426ee77ba77ea10d"}
	{"level":"info","ts":"2024-06-17T11:09:49.31204Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"52887eb9b9b3603c","remote-peer-id":"426ee77ba77ea10d"}
	{"level":"info","ts":"2024-06-17T11:09:49.312064Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"52887eb9b9b3603c","remote-peer-id":"426ee77ba77ea10d"}
	{"level":"info","ts":"2024-06-17T11:09:49.312117Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"52887eb9b9b3603c","remote-peer-id":"426ee77ba77ea10d"}
	{"level":"info","ts":"2024-06-17T11:09:49.312144Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"426ee77ba77ea10d"}
	{"level":"info","ts":"2024-06-17T11:09:49.314751Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.134:2380"}
	{"level":"info","ts":"2024-06-17T11:09:49.314914Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.134:2380"}
	{"level":"info","ts":"2024-06-17T11:09:49.314964Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-064080","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.134:2380"],"advertise-client-urls":["https://192.168.39.134:2379"]}
	
	
	==> kernel <==
	 11:13:46 up 13 min,  0 users,  load average: 0.31, 0.42, 0.28
	Linux ha-064080 5.10.207 #1 SMP Tue Jun 11 00:16:05 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4af9cf344f6b524475b47fa29673012301a355ef88398883d01606aee8cc859c] <==
	I0617 11:11:30.894804       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0617 11:11:30.903031       1 main.go:107] hostIP = 192.168.39.134
	podIP = 192.168.39.134
	I0617 11:11:30.903217       1 main.go:116] setting mtu 1500 for CNI 
	I0617 11:11:30.903261       1 main.go:146] kindnetd IP family: "ipv4"
	I0617 11:11:30.903423       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0617 11:11:34.095493       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0617 11:11:37.167418       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0617 11:11:48.169199       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0617 11:11:52.527327       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 192.168.122.251:46758->10.96.0.1:443: read: connection reset by peer
	I0617 11:11:55.600628       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xe3b
	
	
	==> kindnet [7e968a7b99037fcd74cf96493f10b9e4b77571018045daa12bfa9faff24036da] <==
	I0617 11:13:11.998710       1 main.go:250] Node ha-064080-m04 has CIDR [10.244.3.0/24] 
	I0617 11:13:22.017205       1 main.go:223] Handling node with IPs: map[192.168.39.134:{}]
	I0617 11:13:22.017258       1 main.go:227] handling current node
	I0617 11:13:22.017282       1 main.go:223] Handling node with IPs: map[192.168.39.104:{}]
	I0617 11:13:22.017289       1 main.go:250] Node ha-064080-m02 has CIDR [10.244.1.0/24] 
	I0617 11:13:22.017439       1 main.go:223] Handling node with IPs: map[192.168.39.168:{}]
	I0617 11:13:22.017466       1 main.go:250] Node ha-064080-m03 has CIDR [10.244.2.0/24] 
	I0617 11:13:22.017547       1 main.go:223] Handling node with IPs: map[192.168.39.167:{}]
	I0617 11:13:22.017551       1 main.go:250] Node ha-064080-m04 has CIDR [10.244.3.0/24] 
	I0617 11:13:32.031582       1 main.go:223] Handling node with IPs: map[192.168.39.134:{}]
	I0617 11:13:32.031632       1 main.go:227] handling current node
	I0617 11:13:32.031646       1 main.go:223] Handling node with IPs: map[192.168.39.104:{}]
	I0617 11:13:32.031651       1 main.go:250] Node ha-064080-m02 has CIDR [10.244.1.0/24] 
	I0617 11:13:32.031797       1 main.go:223] Handling node with IPs: map[192.168.39.168:{}]
	I0617 11:13:32.031802       1 main.go:250] Node ha-064080-m03 has CIDR [10.244.2.0/24] 
	I0617 11:13:32.031897       1 main.go:223] Handling node with IPs: map[192.168.39.167:{}]
	I0617 11:13:32.031933       1 main.go:250] Node ha-064080-m04 has CIDR [10.244.3.0/24] 
	I0617 11:13:42.042277       1 main.go:223] Handling node with IPs: map[192.168.39.134:{}]
	I0617 11:13:42.042321       1 main.go:227] handling current node
	I0617 11:13:42.042341       1 main.go:223] Handling node with IPs: map[192.168.39.104:{}]
	I0617 11:13:42.042346       1 main.go:250] Node ha-064080-m02 has CIDR [10.244.1.0/24] 
	I0617 11:13:42.042461       1 main.go:223] Handling node with IPs: map[192.168.39.168:{}]
	I0617 11:13:42.042485       1 main.go:250] Node ha-064080-m03 has CIDR [10.244.2.0/24] 
	I0617 11:13:42.042555       1 main.go:223] Handling node with IPs: map[192.168.39.167:{}]
	I0617 11:13:42.042579       1 main.go:250] Node ha-064080-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [7e9062f80f59bb01cd3d133ee66a6cf66b83b310d47589d9e9eeb07982548f74] <==
	I0617 11:11:30.407093       1 options.go:221] external host was not specified, using 192.168.39.134
	I0617 11:11:30.409344       1 server.go:148] Version: v1.30.1
	I0617 11:11:30.409407       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0617 11:11:30.921365       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0617 11:11:30.921600       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0617 11:11:30.921979       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0617 11:11:30.922029       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0617 11:11:30.922252       1 instance.go:299] Using reconciler: lease
	W0617 11:11:50.916587       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0617 11:11:50.916662       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0617 11:11:50.923581       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [ea168e43c8f58b627333f8db1fcab727151d0170538dd365a0ff2c14a670bc63] <==
	I0617 11:12:14.009105       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0617 11:12:14.009221       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0617 11:12:14.087260       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0617 11:12:14.087731       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0617 11:12:14.088176       1 shared_informer.go:320] Caches are synced for configmaps
	I0617 11:12:14.090302       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0617 11:12:14.090335       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0617 11:12:14.090514       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0617 11:12:14.097289       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0617 11:12:14.099219       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.104 192.168.39.168]
	I0617 11:12:14.105703       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0617 11:12:14.105823       1 aggregator.go:165] initial CRD sync complete...
	I0617 11:12:14.105945       1 autoregister_controller.go:141] Starting autoregister controller
	I0617 11:12:14.105978       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0617 11:12:14.106008       1 cache.go:39] Caches are synced for autoregister controller
	I0617 11:12:14.126955       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0617 11:12:14.133582       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0617 11:12:14.133636       1 policy_source.go:224] refreshing policies
	I0617 11:12:14.180377       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0617 11:12:14.202563       1 controller.go:615] quota admission added evaluator for: endpoints
	I0617 11:12:14.222752       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0617 11:12:14.228680       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0617 11:12:14.993917       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0617 11:12:15.447994       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.104 192.168.39.134 192.168.39.168]
	W0617 11:12:25.447760       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.104 192.168.39.134]
	
	
	==> kube-controller-manager [9a32b0b77a472f149702c6af5025c8bce824feadd95de75493b9a7c7da94010a] <==
	I0617 11:11:31.314387       1 serving.go:380] Generated self-signed cert in-memory
	I0617 11:11:31.942563       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0617 11:11:31.942650       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0617 11:11:31.944246       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0617 11:11:31.944977       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0617 11:11:31.945068       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0617 11:11:31.945192       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0617 11:11:51.947407       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.134:8443/healthz\": dial tcp 192.168.39.134:8443: connect: connection refused"
	
	
	==> kube-controller-manager [9cca27b47119ee9b81f6755dc162135ff2de0238a503b8d7d8cd565cc8ddcaa9] <==
	I0617 11:12:27.054957       1 shared_informer.go:320] Caches are synced for stateful set
	I0617 11:12:27.076069       1 shared_informer.go:320] Caches are synced for resource quota
	I0617 11:12:27.079067       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0617 11:12:27.083727       1 shared_informer.go:320] Caches are synced for daemon sets
	I0617 11:12:27.090447       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0617 11:12:27.090489       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0617 11:12:27.091055       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0617 11:12:27.092249       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0617 11:12:27.131949       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0617 11:12:27.560393       1 shared_informer.go:320] Caches are synced for garbage collector
	I0617 11:12:27.622676       1 shared_informer.go:320] Caches are synced for garbage collector
	I0617 11:12:27.622718       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0617 11:12:29.157277       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-4zh94 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-4zh94\": the object has been modified; please apply your changes to the latest version and try again"
	I0617 11:12:29.157509       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"b05cc1a1-9a1f-4ee1-8c70-1418f5a2620d", APIVersion:"v1", ResourceVersion:"240", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-4zh94 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-4zh94": the object has been modified; please apply your changes to the latest version and try again
	I0617 11:12:29.180783       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="59.624073ms"
	I0617 11:12:29.181204       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="99.912µs"
	I0617 11:12:30.995408       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.116625ms"
	I0617 11:12:30.995624       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.37µs"
	I0617 11:12:39.128658       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.972148ms"
	I0617 11:12:39.129810       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="49.414µs"
	I0617 11:12:51.320323       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.069413ms"
	I0617 11:12:51.320615       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.566µs"
	I0617 11:13:10.780022       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.274459ms"
	I0617 11:13:10.780109       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.448µs"
	I0617 11:13:37.853600       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-064080-m04"
	
	
	==> kube-proxy [8852bc2fd7b618e61e270006b27e8557aaf8230a9278a60245e25a23732a83eb] <==
	E0617 11:08:44.369915       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1708": dial tcp 192.168.39.254:8443: connect: no route to host
	W0617 11:08:47.441142       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1826": dial tcp 192.168.39.254:8443: connect: no route to host
	E0617 11:08:47.441253       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1826": dial tcp 192.168.39.254:8443: connect: no route to host
	W0617 11:08:47.441335       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-064080&resourceVersion=1804": dial tcp 192.168.39.254:8443: connect: no route to host
	E0617 11:08:47.441356       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-064080&resourceVersion=1804": dial tcp 192.168.39.254:8443: connect: no route to host
	W0617 11:08:47.441775       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1708": dial tcp 192.168.39.254:8443: connect: no route to host
	E0617 11:08:47.441900       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1708": dial tcp 192.168.39.254:8443: connect: no route to host
	W0617 11:08:53.585024       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1708": dial tcp 192.168.39.254:8443: connect: no route to host
	E0617 11:08:53.585220       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1708": dial tcp 192.168.39.254:8443: connect: no route to host
	W0617 11:08:53.585205       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-064080&resourceVersion=1804": dial tcp 192.168.39.254:8443: connect: no route to host
	E0617 11:08:53.585277       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-064080&resourceVersion=1804": dial tcp 192.168.39.254:8443: connect: no route to host
	W0617 11:08:53.585400       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1826": dial tcp 192.168.39.254:8443: connect: no route to host
	E0617 11:08:53.585515       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1826": dial tcp 192.168.39.254:8443: connect: no route to host
	W0617 11:09:05.872591       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-064080&resourceVersion=1804": dial tcp 192.168.39.254:8443: connect: no route to host
	E0617 11:09:05.872705       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-064080&resourceVersion=1804": dial tcp 192.168.39.254:8443: connect: no route to host
	W0617 11:09:05.872804       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1708": dial tcp 192.168.39.254:8443: connect: no route to host
	E0617 11:09:05.872889       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1708": dial tcp 192.168.39.254:8443: connect: no route to host
	W0617 11:09:05.872999       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1826": dial tcp 192.168.39.254:8443: connect: no route to host
	E0617 11:09:05.873040       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1826": dial tcp 192.168.39.254:8443: connect: no route to host
	W0617 11:09:27.376721       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1826": dial tcp 192.168.39.254:8443: connect: no route to host
	E0617 11:09:27.377048       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1826": dial tcp 192.168.39.254:8443: connect: no route to host
	W0617 11:09:30.448549       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-064080&resourceVersion=1804": dial tcp 192.168.39.254:8443: connect: no route to host
	E0617 11:09:30.448641       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-064080&resourceVersion=1804": dial tcp 192.168.39.254:8443: connect: no route to host
	W0617 11:09:30.448736       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1708": dial tcp 192.168.39.254:8443: connect: no route to host
	E0617 11:09:30.448779       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1708": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [acee1942b213b3a339a1c92af2a70571f5c7f4b96158320c3bb8f3f74d86a0b2] <==
	E0617 11:11:54.832687       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-064080\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0617 11:12:13.264516       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-064080\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0617 11:12:13.264677       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0617 11:12:13.343459       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0617 11:12:13.343563       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0617 11:12:13.343581       1 server_linux.go:165] "Using iptables Proxier"
	I0617 11:12:13.352597       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0617 11:12:13.352903       1 server.go:872] "Version info" version="v1.30.1"
	I0617 11:12:13.352950       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0617 11:12:13.354659       1 config.go:192] "Starting service config controller"
	I0617 11:12:13.356632       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0617 11:12:13.355998       1 config.go:319] "Starting node config controller"
	I0617 11:12:13.358024       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0617 11:12:13.355436       1 config.go:101] "Starting endpoint slice config controller"
	I0617 11:12:13.358053       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	W0617 11:12:16.338056       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0617 11:12:16.340474       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0617 11:12:16.340797       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-064080&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0617 11:12:16.341042       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-064080&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0617 11:12:16.341274       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0617 11:12:16.341470       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0617 11:12:16.341786       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0617 11:12:17.258296       1 shared_informer.go:320] Caches are synced for node config
	I0617 11:12:17.458404       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0617 11:12:17.858661       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [60cc5a9cf66217b34591b28809211824808cb7da50dd0c7971be5bd514e3b328] <==
	W0617 11:09:44.146274       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0617 11:09:44.146367       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0617 11:09:44.222545       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0617 11:09:44.222678       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0617 11:09:45.975271       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0617 11:09:45.975375       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0617 11:09:46.352357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0617 11:09:46.352453       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0617 11:09:46.505159       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0617 11:09:46.505361       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0617 11:09:46.813113       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0617 11:09:46.813204       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0617 11:09:47.525074       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0617 11:09:47.525162       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0617 11:09:47.582519       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0617 11:09:47.582614       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0617 11:09:47.634102       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0617 11:09:47.634242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0617 11:09:48.231623       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0617 11:09:48.231716       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0617 11:09:48.705640       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0617 11:09:48.705673       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0617 11:09:49.026601       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0617 11:09:49.026689       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0617 11:09:49.208441       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [88dbcac1da73105615cd555b19ec3b51e43dc6fd5ee233f83d19dcaa41a1b5ee] <==
	W0617 11:12:09.625311       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.134:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	E0617 11:12:09.625418       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.134:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	W0617 11:12:09.643998       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.134:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	E0617 11:12:09.644047       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.134:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	W0617 11:12:09.863627       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.134:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	E0617 11:12:09.863671       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.134:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	W0617 11:12:10.317381       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.134:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	E0617 11:12:10.317483       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.134:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	W0617 11:12:10.514624       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.134:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	E0617 11:12:10.514718       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.134:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	W0617 11:12:10.567621       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.134:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	E0617 11:12:10.567707       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.134:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	W0617 11:12:10.918375       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.134:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	E0617 11:12:10.918516       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.134:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	W0617 11:12:11.501263       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.134:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	E0617 11:12:11.501355       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.134:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	W0617 11:12:11.793682       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.134:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	E0617 11:12:11.793784       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.134:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	W0617 11:12:11.894618       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.134:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	E0617 11:12:11.894718       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.134:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	W0617 11:12:14.010753       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0617 11:12:14.010803       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0617 11:12:14.010935       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0617 11:12:14.010975       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0617 11:12:26.235916       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 17 11:12:16 ha-064080 kubelet[1371]: E0617 11:12:16.335362    1371 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-064080\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-064080?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jun 17 11:12:16 ha-064080 kubelet[1371]: I0617 11:12:16.335789    1371 status_manager.go:853] "Failed to get status for pod" podUID="c2453fd4-894d-4212-bc48-1803e28ddba8" pod="kube-system/coredns-7db6d8ff4d-zv99k" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zv99k\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jun 17 11:12:24 ha-064080 kubelet[1371]: I0617 11:12:24.100155    1371 scope.go:117] "RemoveContainer" containerID="5831ea6ee0c390e7ce915655860ab50d35ab3dd5fecf6fafbe17b03a4020ba0a"
	Jun 17 11:12:24 ha-064080 kubelet[1371]: E0617 11:12:24.100330    1371 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5646fca8-9ebc-47c1-b5ff-c87b0ed800d8)\"" pod="kube-system/storage-provisioner" podUID="5646fca8-9ebc-47c1-b5ff-c87b0ed800d8"
	Jun 17 11:12:24 ha-064080 kubelet[1371]: I0617 11:12:24.100564    1371 scope.go:117] "RemoveContainer" containerID="4af9cf344f6b524475b47fa29673012301a355ef88398883d01606aee8cc859c"
	Jun 17 11:12:24 ha-064080 kubelet[1371]: E0617 11:12:24.100748    1371 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-48mb7_kube-system(67422049-6637-4ca3-8bd1-2b47a265829d)\"" pod="kube-system/kindnet-48mb7" podUID="67422049-6637-4ca3-8bd1-2b47a265829d"
	Jun 17 11:12:32 ha-064080 kubelet[1371]: E0617 11:12:32.162560    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 17 11:12:32 ha-064080 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 17 11:12:32 ha-064080 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 17 11:12:32 ha-064080 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 11:12:32 ha-064080 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 17 11:12:36 ha-064080 kubelet[1371]: I0617 11:12:36.098480    1371 scope.go:117] "RemoveContainer" containerID="5831ea6ee0c390e7ce915655860ab50d35ab3dd5fecf6fafbe17b03a4020ba0a"
	Jun 17 11:12:36 ha-064080 kubelet[1371]: E0617 11:12:36.099188    1371 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5646fca8-9ebc-47c1-b5ff-c87b0ed800d8)\"" pod="kube-system/storage-provisioner" podUID="5646fca8-9ebc-47c1-b5ff-c87b0ed800d8"
	Jun 17 11:12:39 ha-064080 kubelet[1371]: I0617 11:12:39.098298    1371 scope.go:117] "RemoveContainer" containerID="4af9cf344f6b524475b47fa29673012301a355ef88398883d01606aee8cc859c"
	Jun 17 11:12:39 ha-064080 kubelet[1371]: E0617 11:12:39.098619    1371 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-48mb7_kube-system(67422049-6637-4ca3-8bd1-2b47a265829d)\"" pod="kube-system/kindnet-48mb7" podUID="67422049-6637-4ca3-8bd1-2b47a265829d"
	Jun 17 11:12:48 ha-064080 kubelet[1371]: I0617 11:12:48.099136    1371 scope.go:117] "RemoveContainer" containerID="5831ea6ee0c390e7ce915655860ab50d35ab3dd5fecf6fafbe17b03a4020ba0a"
	Jun 17 11:12:51 ha-064080 kubelet[1371]: I0617 11:12:51.098806    1371 scope.go:117] "RemoveContainer" containerID="4af9cf344f6b524475b47fa29673012301a355ef88398883d01606aee8cc859c"
	Jun 17 11:12:52 ha-064080 kubelet[1371]: I0617 11:12:52.099313    1371 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-064080" podUID="6b9259b1-ee46-4493-ba10-dcb32da03f57"
	Jun 17 11:12:52 ha-064080 kubelet[1371]: I0617 11:12:52.139831    1371 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-064080"
	Jun 17 11:13:02 ha-064080 kubelet[1371]: I0617 11:13:02.126031    1371 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-064080" podStartSLOduration=10.12600127 podStartE2EDuration="10.12600127s" podCreationTimestamp="2024-06-17 11:12:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-17 11:13:02.125557036 +0000 UTC m=+750.189617414" watchObservedRunningTime="2024-06-17 11:13:02.12600127 +0000 UTC m=+750.190061644"
	Jun 17 11:13:32 ha-064080 kubelet[1371]: E0617 11:13:32.161528    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 17 11:13:32 ha-064080 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 17 11:13:32 ha-064080 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 17 11:13:32 ha-064080 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 11:13:32 ha-064080 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0617 11:13:45.697535  138149 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19084-112967/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
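Editor's note on the stderr line above: "failed to read file .../lastStart.txt: bufio.Scanner: token too long" is the standard Go error returned when a single line in the file exceeds bufio.Scanner's default 64 KiB token limit. The sketch below is not minikube's code; it is a minimal, self-contained illustration of why the error appears and how a larger scanner buffer avoids it (the file path is a hypothetical stand-in).

	// Minimal sketch (assumption: not minikube's implementation) of reading a
	// log file whose individual lines can exceed bufio.Scanner's default
	// 64 KiB token limit, which is what yields "bufio.Scanner: token too long".
	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // hypothetical path stand-in
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default cap is bufio.MaxScanTokenSize (64 KiB); a longer line makes
		// Scan() stop with bufio.ErrTooLong. Raising the cap (here to 10 MiB)
		// lets such lines be read.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

		for sc.Scan() {
			_ = sc.Text() // process one line
		}
		if err := sc.Err(); err != nil {
			// With the default buffer this is where "token too long" surfaces.
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}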
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-064080 -n ha-064080
helpers_test.go:261: (dbg) Run:  kubectl --context ha-064080 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (361.37s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-064080 stop -v=7 --alsologtostderr: exit status 82 (2m0.470323951s)

                                                
                                                
-- stdout --
	* Stopping node "ha-064080-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 11:14:05.476030  138557 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:14:05.476513  138557 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:14:05.476572  138557 out.go:304] Setting ErrFile to fd 2...
	I0617 11:14:05.476589  138557 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:14:05.477082  138557 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:14:05.477515  138557 out.go:298] Setting JSON to false
	I0617 11:14:05.477588  138557 mustload.go:65] Loading cluster: ha-064080
	I0617 11:14:05.477938  138557 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:14:05.478053  138557 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/config.json ...
	I0617 11:14:05.478228  138557 mustload.go:65] Loading cluster: ha-064080
	I0617 11:14:05.478351  138557 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:14:05.478372  138557 stop.go:39] StopHost: ha-064080-m04
	I0617 11:14:05.478744  138557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:14:05.478783  138557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:14:05.496006  138557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45313
	I0617 11:14:05.496588  138557 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:14:05.497175  138557 main.go:141] libmachine: Using API Version  1
	I0617 11:14:05.497202  138557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:14:05.497550  138557 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:14:05.500691  138557 out.go:177] * Stopping node "ha-064080-m04"  ...
	I0617 11:14:05.501932  138557 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0617 11:14:05.501970  138557 main.go:141] libmachine: (ha-064080-m04) Calling .DriverName
	I0617 11:14:05.502209  138557 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0617 11:14:05.502238  138557 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHHostname
	I0617 11:14:05.505350  138557 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:14:05.505730  138557 main.go:141] libmachine: (ha-064080-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:60:46", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:13:31 +0000 UTC Type:0 Mac:52:54:00:51:60:46 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-064080-m04 Clientid:01:52:54:00:51:60:46}
	I0617 11:14:05.505765  138557 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined IP address 192.168.39.167 and MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:14:05.505906  138557 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHPort
	I0617 11:14:05.506102  138557 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHKeyPath
	I0617 11:14:05.506282  138557 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHUsername
	I0617 11:14:05.506436  138557 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m04/id_rsa Username:docker}
	I0617 11:14:05.594919  138557 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0617 11:14:05.646916  138557 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0617 11:14:05.698382  138557 main.go:141] libmachine: Stopping "ha-064080-m04"...
	I0617 11:14:05.698418  138557 main.go:141] libmachine: (ha-064080-m04) Calling .GetState
	I0617 11:14:05.700082  138557 main.go:141] libmachine: (ha-064080-m04) Calling .Stop
	I0617 11:14:05.703640  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 0/120
	I0617 11:14:06.705854  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 1/120
	I0617 11:14:07.707726  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 2/120
	I0617 11:14:08.709095  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 3/120
	I0617 11:14:09.710709  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 4/120
	I0617 11:14:10.712139  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 5/120
	I0617 11:14:11.714266  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 6/120
	I0617 11:14:12.715612  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 7/120
	I0617 11:14:13.717999  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 8/120
	I0617 11:14:14.719296  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 9/120
	I0617 11:14:15.720758  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 10/120
	I0617 11:14:16.722916  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 11/120
	I0617 11:14:17.725170  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 12/120
	I0617 11:14:18.727408  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 13/120
	I0617 11:14:19.729100  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 14/120
	I0617 11:14:20.730532  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 15/120
	I0617 11:14:21.731834  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 16/120
	I0617 11:14:22.733124  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 17/120
	I0617 11:14:23.734427  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 18/120
	I0617 11:14:24.735636  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 19/120
	I0617 11:14:25.737485  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 20/120
	I0617 11:14:26.738714  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 21/120
	I0617 11:14:27.739877  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 22/120
	I0617 11:14:28.741238  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 23/120
	I0617 11:14:29.742503  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 24/120
	I0617 11:14:30.743999  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 25/120
	I0617 11:14:31.745806  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 26/120
	I0617 11:14:32.747032  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 27/120
	I0617 11:14:33.748649  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 28/120
	I0617 11:14:34.750187  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 29/120
	I0617 11:14:35.752319  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 30/120
	I0617 11:14:36.753594  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 31/120
	I0617 11:14:37.754803  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 32/120
	I0617 11:14:38.756172  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 33/120
	I0617 11:14:39.757408  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 34/120
	I0617 11:14:40.759631  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 35/120
	I0617 11:14:41.761865  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 36/120
	I0617 11:14:42.763101  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 37/120
	I0617 11:14:43.764585  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 38/120
	I0617 11:14:44.766332  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 39/120
	I0617 11:14:45.768516  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 40/120
	I0617 11:14:46.769866  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 41/120
	I0617 11:14:47.771682  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 42/120
	I0617 11:14:48.773963  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 43/120
	I0617 11:14:49.775296  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 44/120
	I0617 11:14:50.776762  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 45/120
	I0617 11:14:51.778075  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 46/120
	I0617 11:14:52.779412  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 47/120
	I0617 11:14:53.780831  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 48/120
	I0617 11:14:54.782626  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 49/120
	I0617 11:14:55.784910  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 50/120
	I0617 11:14:56.786503  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 51/120
	I0617 11:14:57.788532  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 52/120
	I0617 11:14:58.790066  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 53/120
	I0617 11:14:59.791706  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 54/120
	I0617 11:15:00.793626  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 55/120
	I0617 11:15:01.795009  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 56/120
	I0617 11:15:02.796387  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 57/120
	I0617 11:15:03.797885  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 58/120
	I0617 11:15:04.799480  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 59/120
	I0617 11:15:05.801398  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 60/120
	I0617 11:15:06.802679  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 61/120
	I0617 11:15:07.804117  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 62/120
	I0617 11:15:08.805971  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 63/120
	I0617 11:15:09.807170  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 64/120
	I0617 11:15:10.808624  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 65/120
	I0617 11:15:11.809897  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 66/120
	I0617 11:15:12.811218  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 67/120
	I0617 11:15:13.812730  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 68/120
	I0617 11:15:14.813912  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 69/120
	I0617 11:15:15.816195  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 70/120
	I0617 11:15:16.818597  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 71/120
	I0617 11:15:17.820168  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 72/120
	I0617 11:15:18.821519  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 73/120
	I0617 11:15:19.822956  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 74/120
	I0617 11:15:20.824707  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 75/120
	I0617 11:15:21.826021  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 76/120
	I0617 11:15:22.827547  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 77/120
	I0617 11:15:23.828848  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 78/120
	I0617 11:15:24.830282  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 79/120
	I0617 11:15:25.832632  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 80/120
	I0617 11:15:26.833994  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 81/120
	I0617 11:15:27.835509  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 82/120
	I0617 11:15:28.836720  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 83/120
	I0617 11:15:29.838044  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 84/120
	I0617 11:15:30.839866  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 85/120
	I0617 11:15:31.841118  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 86/120
	I0617 11:15:32.842237  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 87/120
	I0617 11:15:33.843659  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 88/120
	I0617 11:15:34.846201  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 89/120
	I0617 11:15:35.848335  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 90/120
	I0617 11:15:36.849819  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 91/120
	I0617 11:15:37.851026  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 92/120
	I0617 11:15:38.852250  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 93/120
	I0617 11:15:39.853473  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 94/120
	I0617 11:15:40.855964  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 95/120
	I0617 11:15:41.857984  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 96/120
	I0617 11:15:42.860282  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 97/120
	I0617 11:15:43.861638  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 98/120
	I0617 11:15:44.863324  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 99/120
	I0617 11:15:45.865793  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 100/120
	I0617 11:15:46.867138  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 101/120
	I0617 11:15:47.868512  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 102/120
	I0617 11:15:48.869933  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 103/120
	I0617 11:15:49.871591  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 104/120
	I0617 11:15:50.874021  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 105/120
	I0617 11:15:51.875656  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 106/120
	I0617 11:15:52.876982  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 107/120
	I0617 11:15:53.878563  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 108/120
	I0617 11:15:54.879999  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 109/120
	I0617 11:15:55.882110  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 110/120
	I0617 11:15:56.883442  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 111/120
	I0617 11:15:57.884718  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 112/120
	I0617 11:15:58.886131  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 113/120
	I0617 11:15:59.887348  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 114/120
	I0617 11:16:00.889170  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 115/120
	I0617 11:16:01.890612  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 116/120
	I0617 11:16:02.892382  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 117/120
	I0617 11:16:03.893674  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 118/120
	I0617 11:16:04.895108  138557 main.go:141] libmachine: (ha-064080-m04) Waiting for machine to stop 119/120
	I0617 11:16:05.895853  138557 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0617 11:16:05.895940  138557 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0617 11:16:05.898200  138557 out.go:177] 
	W0617 11:16:05.899580  138557 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0617 11:16:05.899600  138557 out.go:239] * 
	* 
	W0617 11:16:05.901922  138557 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 11:16:05.903108  138557 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-064080 stop -v=7 --alsologtostderr": exit status 82
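Editor's note on the failure above: the stderr dump shows the stop command polling the VM state once per second for 120 attempts ("Waiting for machine to stop N/120") and then giving up with GUEST_STOP_TIMEOUT and exit status 82 because the guest never left the "Running" state. The following is only a minimal sketch of that poll-until-stopped-or-timeout pattern, assuming a hypothetical getState helper and the 120 x 1s budget read off the log; it is not minikube's actual implementation.

	// Sketch of the stop-and-wait pattern visible in the log above
	// (assumptions: getState is hypothetical; 120 attempts at 1s intervals
	// are taken from the "Waiting for machine to stop N/120" lines).
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func getState() string { return "Running" } // hypothetical hypervisor query

	func waitForStop(attempts int, interval time.Duration) error {
		for i := 0; i < attempts; i++ {
			if getState() == "Stopped" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			time.Sleep(interval)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		if err := waitForStop(120, time.Second); err != nil {
			// The report above shows minikube surfacing this condition as
			// GUEST_STOP_TIMEOUT with exit status 82.
			fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT:", err)
		}
	}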
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-064080 status -v=7 --alsologtostderr: exit status 3 (18.870858579s)

                                                
                                                
-- stdout --
	ha-064080
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-064080-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-064080-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 11:16:05.950155  138985 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:16:05.950266  138985 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:16:05.950275  138985 out.go:304] Setting ErrFile to fd 2...
	I0617 11:16:05.950278  138985 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:16:05.950453  138985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:16:05.950621  138985 out.go:298] Setting JSON to false
	I0617 11:16:05.950643  138985 mustload.go:65] Loading cluster: ha-064080
	I0617 11:16:05.950753  138985 notify.go:220] Checking for updates...
	I0617 11:16:05.951596  138985 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:16:05.951629  138985 status.go:255] checking status of ha-064080 ...
	I0617 11:16:05.952416  138985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:16:05.952488  138985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:16:05.975056  138985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46659
	I0617 11:16:05.975415  138985 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:16:05.975992  138985 main.go:141] libmachine: Using API Version  1
	I0617 11:16:05.976014  138985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:16:05.976374  138985 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:16:05.976595  138985 main.go:141] libmachine: (ha-064080) Calling .GetState
	I0617 11:16:05.978119  138985 status.go:330] ha-064080 host status = "Running" (err=<nil>)
	I0617 11:16:05.978137  138985 host.go:66] Checking if "ha-064080" exists ...
	I0617 11:16:05.978458  138985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:16:05.978505  138985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:16:05.993899  138985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38097
	I0617 11:16:05.994307  138985 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:16:05.994804  138985 main.go:141] libmachine: Using API Version  1
	I0617 11:16:05.994844  138985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:16:05.995213  138985 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:16:05.995437  138985 main.go:141] libmachine: (ha-064080) Calling .GetIP
	I0617 11:16:05.998056  138985 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:16:05.998487  138985 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:16:05.998510  138985 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:16:05.998635  138985 host.go:66] Checking if "ha-064080" exists ...
	I0617 11:16:05.998947  138985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:16:05.998983  138985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:16:06.012790  138985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35251
	I0617 11:16:06.013210  138985 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:16:06.013737  138985 main.go:141] libmachine: Using API Version  1
	I0617 11:16:06.013757  138985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:16:06.014028  138985 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:16:06.014215  138985 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:16:06.014388  138985 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:16:06.014415  138985 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:16:06.017068  138985 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:16:06.017557  138985 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:16:06.017584  138985 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:16:06.017709  138985 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:16:06.017874  138985 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:16:06.018009  138985 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:16:06.018176  138985 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:16:06.103375  138985 ssh_runner.go:195] Run: systemctl --version
	I0617 11:16:06.111282  138985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:16:06.126515  138985 kubeconfig.go:125] found "ha-064080" server: "https://192.168.39.254:8443"
	I0617 11:16:06.126544  138985 api_server.go:166] Checking apiserver status ...
	I0617 11:16:06.126592  138985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 11:16:06.142922  138985 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5085/cgroup
	W0617 11:16:06.154381  138985 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5085/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0617 11:16:06.154435  138985 ssh_runner.go:195] Run: ls
	I0617 11:16:06.159254  138985 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0617 11:16:06.165756  138985 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0617 11:16:06.165778  138985 status.go:422] ha-064080 apiserver status = Running (err=<nil>)
	I0617 11:16:06.165790  138985 status.go:257] ha-064080 status: &{Name:ha-064080 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0617 11:16:06.165811  138985 status.go:255] checking status of ha-064080-m02 ...
	I0617 11:16:06.166095  138985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:16:06.166138  138985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:16:06.180911  138985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39509
	I0617 11:16:06.181348  138985 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:16:06.181827  138985 main.go:141] libmachine: Using API Version  1
	I0617 11:16:06.181851  138985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:16:06.182200  138985 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:16:06.182404  138985 main.go:141] libmachine: (ha-064080-m02) Calling .GetState
	I0617 11:16:06.183940  138985 status.go:330] ha-064080-m02 host status = "Running" (err=<nil>)
	I0617 11:16:06.183957  138985 host.go:66] Checking if "ha-064080-m02" exists ...
	I0617 11:16:06.184279  138985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:16:06.184319  138985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:16:06.198350  138985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41703
	I0617 11:16:06.198727  138985 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:16:06.199173  138985 main.go:141] libmachine: Using API Version  1
	I0617 11:16:06.199190  138985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:16:06.199487  138985 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:16:06.199682  138985 main.go:141] libmachine: (ha-064080-m02) Calling .GetIP
	I0617 11:16:06.202673  138985 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:16:06.203061  138985 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:11:35 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:16:06.203086  138985 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:16:06.203248  138985 host.go:66] Checking if "ha-064080-m02" exists ...
	I0617 11:16:06.203581  138985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:16:06.203637  138985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:16:06.217596  138985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34035
	I0617 11:16:06.217970  138985 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:16:06.218367  138985 main.go:141] libmachine: Using API Version  1
	I0617 11:16:06.218396  138985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:16:06.218717  138985 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:16:06.218888  138985 main.go:141] libmachine: (ha-064080-m02) Calling .DriverName
	I0617 11:16:06.219050  138985 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:16:06.219066  138985 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHHostname
	I0617 11:16:06.221672  138985 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:16:06.222097  138985 main.go:141] libmachine: (ha-064080-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:79:30", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:11:35 +0000 UTC Type:0 Mac:52:54:00:75:79:30 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-064080-m02 Clientid:01:52:54:00:75:79:30}
	I0617 11:16:06.222126  138985 main.go:141] libmachine: (ha-064080-m02) DBG | domain ha-064080-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:75:79:30 in network mk-ha-064080
	I0617 11:16:06.222253  138985 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHPort
	I0617 11:16:06.222410  138985 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHKeyPath
	I0617 11:16:06.222566  138985 main.go:141] libmachine: (ha-064080-m02) Calling .GetSSHUsername
	I0617 11:16:06.222707  138985 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m02/id_rsa Username:docker}
	I0617 11:16:06.304617  138985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:16:06.325491  138985 kubeconfig.go:125] found "ha-064080" server: "https://192.168.39.254:8443"
	I0617 11:16:06.325517  138985 api_server.go:166] Checking apiserver status ...
	I0617 11:16:06.325550  138985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 11:16:06.342281  138985 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1380/cgroup
	W0617 11:16:06.351228  138985 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1380/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0617 11:16:06.351282  138985 ssh_runner.go:195] Run: ls
	I0617 11:16:06.356185  138985 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0617 11:16:06.360688  138985 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0617 11:16:06.360708  138985 status.go:422] ha-064080-m02 apiserver status = Running (err=<nil>)
	I0617 11:16:06.360716  138985 status.go:257] ha-064080-m02 status: &{Name:ha-064080-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0617 11:16:06.360747  138985 status.go:255] checking status of ha-064080-m04 ...
	I0617 11:16:06.361123  138985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:16:06.361159  138985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:16:06.375909  138985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37975
	I0617 11:16:06.376404  138985 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:16:06.377039  138985 main.go:141] libmachine: Using API Version  1
	I0617 11:16:06.377056  138985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:16:06.377410  138985 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:16:06.377630  138985 main.go:141] libmachine: (ha-064080-m04) Calling .GetState
	I0617 11:16:06.379264  138985 status.go:330] ha-064080-m04 host status = "Running" (err=<nil>)
	I0617 11:16:06.379282  138985 host.go:66] Checking if "ha-064080-m04" exists ...
	I0617 11:16:06.379617  138985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:16:06.379676  138985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:16:06.394178  138985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39441
	I0617 11:16:06.394581  138985 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:16:06.395006  138985 main.go:141] libmachine: Using API Version  1
	I0617 11:16:06.395026  138985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:16:06.395350  138985 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:16:06.395521  138985 main.go:141] libmachine: (ha-064080-m04) Calling .GetIP
	I0617 11:16:06.398206  138985 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:16:06.398730  138985 main.go:141] libmachine: (ha-064080-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:60:46", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:13:31 +0000 UTC Type:0 Mac:52:54:00:51:60:46 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-064080-m04 Clientid:01:52:54:00:51:60:46}
	I0617 11:16:06.398760  138985 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined IP address 192.168.39.167 and MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:16:06.398886  138985 host.go:66] Checking if "ha-064080-m04" exists ...
	I0617 11:16:06.399281  138985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:16:06.399321  138985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:16:06.413434  138985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35387
	I0617 11:16:06.413790  138985 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:16:06.414171  138985 main.go:141] libmachine: Using API Version  1
	I0617 11:16:06.414191  138985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:16:06.414480  138985 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:16:06.414710  138985 main.go:141] libmachine: (ha-064080-m04) Calling .DriverName
	I0617 11:16:06.414893  138985 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:16:06.414911  138985 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHHostname
	I0617 11:16:06.417269  138985 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:16:06.417610  138985 main.go:141] libmachine: (ha-064080-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:60:46", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:13:31 +0000 UTC Type:0 Mac:52:54:00:51:60:46 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-064080-m04 Clientid:01:52:54:00:51:60:46}
	I0617 11:16:06.417647  138985 main.go:141] libmachine: (ha-064080-m04) DBG | domain ha-064080-m04 has defined IP address 192.168.39.167 and MAC address 52:54:00:51:60:46 in network mk-ha-064080
	I0617 11:16:06.417790  138985 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHPort
	I0617 11:16:06.417931  138985 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHKeyPath
	I0617 11:16:06.418028  138985 main.go:141] libmachine: (ha-064080-m04) Calling .GetSSHUsername
	I0617 11:16:06.418118  138985 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080-m04/id_rsa Username:docker}
	W0617 11:16:24.775688  138985 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.167:22: connect: no route to host
	W0617 11:16:24.775824  138985 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.167:22: connect: no route to host
	E0617 11:16:24.775846  138985 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.167:22: connect: no route to host
	I0617 11:16:24.775858  138985 status.go:257] ha-064080-m04 status: &{Name:ha-064080-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0617 11:16:24.775889  138985 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.167:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-064080 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-064080 -n ha-064080
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-064080 logs -n 25: (1.645963422s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-064080 ssh -n ha-064080-m02 sudo cat                                          | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | /home/docker/cp-test_ha-064080-m03_ha-064080-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-064080 cp ha-064080-m03:/home/docker/cp-test.txt                              | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m04:/home/docker/cp-test_ha-064080-m03_ha-064080-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n                                                                 | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n ha-064080-m04 sudo cat                                          | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | /home/docker/cp-test_ha-064080-m03_ha-064080-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-064080 cp testdata/cp-test.txt                                                | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n                                                                 | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-064080 cp ha-064080-m04:/home/docker/cp-test.txt                              | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4010822866/001/cp-test_ha-064080-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n                                                                 | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-064080 cp ha-064080-m04:/home/docker/cp-test.txt                              | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080:/home/docker/cp-test_ha-064080-m04_ha-064080.txt                       |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n                                                                 | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n ha-064080 sudo cat                                              | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | /home/docker/cp-test_ha-064080-m04_ha-064080.txt                                 |           |         |         |                     |                     |
	| cp      | ha-064080 cp ha-064080-m04:/home/docker/cp-test.txt                              | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m02:/home/docker/cp-test_ha-064080-m04_ha-064080-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n                                                                 | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n ha-064080-m02 sudo cat                                          | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | /home/docker/cp-test_ha-064080-m04_ha-064080-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-064080 cp ha-064080-m04:/home/docker/cp-test.txt                              | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m03:/home/docker/cp-test_ha-064080-m04_ha-064080-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n                                                                 | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | ha-064080-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-064080 ssh -n ha-064080-m03 sudo cat                                          | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC | 17 Jun 24 11:04 UTC |
	|         | /home/docker/cp-test_ha-064080-m04_ha-064080-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-064080 node stop m02 -v=7                                                     | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:04 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-064080 node start m02 -v=7                                                    | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-064080 -v=7                                                           | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:07 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-064080 -v=7                                                                | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:07 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-064080 --wait=true -v=7                                                    | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:09 UTC | 17 Jun 24 11:13 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-064080                                                                | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:13 UTC |                     |
	| node    | ha-064080 node delete m03 -v=7                                                   | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:13 UTC | 17 Jun 24 11:14 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-064080 stop -v=7                                                              | ha-064080 | jenkins | v1.33.1 | 17 Jun 24 11:14 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/17 11:09:48
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0617 11:09:48.398657  136825 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:09:48.398794  136825 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:09:48.398806  136825 out.go:304] Setting ErrFile to fd 2...
	I0617 11:09:48.398812  136825 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:09:48.398980  136825 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:09:48.399493  136825 out.go:298] Setting JSON to false
	I0617 11:09:48.400491  136825 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":3135,"bootTime":1718619453,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0617 11:09:48.400554  136825 start.go:139] virtualization: kvm guest
	I0617 11:09:48.402812  136825 out.go:177] * [ha-064080] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0617 11:09:48.404007  136825 out.go:177]   - MINIKUBE_LOCATION=19084
	I0617 11:09:48.405218  136825 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 11:09:48.404022  136825 notify.go:220] Checking for updates...
	I0617 11:09:48.407561  136825 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 11:09:48.408735  136825 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 11:09:48.409902  136825 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0617 11:09:48.411124  136825 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 11:09:48.412831  136825 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:09:48.412921  136825 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 11:09:48.413381  136825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:09:48.413450  136825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:09:48.430477  136825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36189
	I0617 11:09:48.430912  136825 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:09:48.431504  136825 main.go:141] libmachine: Using API Version  1
	I0617 11:09:48.431527  136825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:09:48.431887  136825 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:09:48.432068  136825 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:09:48.465385  136825 out.go:177] * Using the kvm2 driver based on existing profile
	I0617 11:09:48.466712  136825 start.go:297] selected driver: kvm2
	I0617 11:09:48.466724  136825 start.go:901] validating driver "kvm2" against &{Name:ha-064080 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-064080 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.167 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:09:48.466854  136825 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 11:09:48.467207  136825 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:09:48.467271  136825 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19084-112967/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0617 11:09:48.481316  136825 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0617 11:09:48.482008  136825 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 11:09:48.482042  136825 cni.go:84] Creating CNI manager for ""
	I0617 11:09:48.482049  136825 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0617 11:09:48.482101  136825 start.go:340] cluster config:
	{Name:ha-064080 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-064080 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.167 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:09:48.482206  136825 iso.go:125] acquiring lock: {Name:mk4a199ad46ed9ee04de7b54caf7cc64218fe80c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:09:48.484702  136825 out.go:177] * Starting "ha-064080" primary control-plane node in "ha-064080" cluster
	I0617 11:09:48.485829  136825 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 11:09:48.485869  136825 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0617 11:09:48.485883  136825 cache.go:56] Caching tarball of preloaded images
	I0617 11:09:48.485972  136825 preload.go:173] Found /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0617 11:09:48.485984  136825 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0617 11:09:48.486110  136825 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/config.json ...
	I0617 11:09:48.486312  136825 start.go:360] acquireMachinesLock for ha-064080: {Name:mk519b8956d160a9d2b042f25b899a5ee0efa72e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 11:09:48.486365  136825 start.go:364] duration metric: took 34.608µs to acquireMachinesLock for "ha-064080"
	I0617 11:09:48.486385  136825 start.go:96] Skipping create...Using existing machine configuration
	I0617 11:09:48.486395  136825 fix.go:54] fixHost starting: 
	I0617 11:09:48.486685  136825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:09:48.486722  136825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:09:48.500157  136825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37679
	I0617 11:09:48.500566  136825 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:09:48.501073  136825 main.go:141] libmachine: Using API Version  1
	I0617 11:09:48.501096  136825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:09:48.501389  136825 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:09:48.501581  136825 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:09:48.501726  136825 main.go:141] libmachine: (ha-064080) Calling .GetState
	I0617 11:09:48.503158  136825 fix.go:112] recreateIfNeeded on ha-064080: state=Running err=<nil>
	W0617 11:09:48.503176  136825 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 11:09:48.505070  136825 out.go:177] * Updating the running kvm2 "ha-064080" VM ...
	I0617 11:09:48.506472  136825 machine.go:94] provisionDockerMachine start ...
	I0617 11:09:48.506490  136825 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:09:48.506671  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:09:48.508895  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:09:48.509343  136825 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:09:48.509382  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:09:48.509481  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:09:48.509638  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:09:48.509796  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:09:48.509930  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:09:48.510157  136825 main.go:141] libmachine: Using SSH client type: native
	I0617 11:09:48.510342  136825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0617 11:09:48.510354  136825 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 11:09:48.617516  136825 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-064080
	
	I0617 11:09:48.617546  136825 main.go:141] libmachine: (ha-064080) Calling .GetMachineName
	I0617 11:09:48.617856  136825 buildroot.go:166] provisioning hostname "ha-064080"
	I0617 11:09:48.617891  136825 main.go:141] libmachine: (ha-064080) Calling .GetMachineName
	I0617 11:09:48.618151  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:09:48.620987  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:09:48.621373  136825 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:09:48.621414  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:09:48.621518  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:09:48.621688  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:09:48.621849  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:09:48.621994  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:09:48.622176  136825 main.go:141] libmachine: Using SSH client type: native
	I0617 11:09:48.622436  136825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0617 11:09:48.622456  136825 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-064080 && echo "ha-064080" | sudo tee /etc/hostname
	I0617 11:09:48.740060  136825 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-064080
	
	I0617 11:09:48.740086  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:09:48.742610  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:09:48.743014  136825 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:09:48.743058  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:09:48.743211  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:09:48.743396  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:09:48.743602  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:09:48.743744  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:09:48.743937  136825 main.go:141] libmachine: Using SSH client type: native
	I0617 11:09:48.744126  136825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0617 11:09:48.744141  136825 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-064080' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-064080/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-064080' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 11:09:48.844529  136825 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 11:09:48.844563  136825 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 11:09:48.844587  136825 buildroot.go:174] setting up certificates
	I0617 11:09:48.844604  136825 provision.go:84] configureAuth start
	I0617 11:09:48.844616  136825 main.go:141] libmachine: (ha-064080) Calling .GetMachineName
	I0617 11:09:48.844960  136825 main.go:141] libmachine: (ha-064080) Calling .GetIP
	I0617 11:09:48.847691  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:09:48.848113  136825 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:09:48.848146  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:09:48.848271  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:09:48.850240  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:09:48.850507  136825 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:09:48.850536  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:09:48.850709  136825 provision.go:143] copyHostCerts
	I0617 11:09:48.850745  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 11:09:48.850804  136825 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 11:09:48.850813  136825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 11:09:48.850900  136825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 11:09:48.851032  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 11:09:48.851058  136825 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 11:09:48.851063  136825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 11:09:48.851097  136825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 11:09:48.851155  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 11:09:48.851171  136825 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 11:09:48.851178  136825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 11:09:48.851206  136825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 11:09:48.851264  136825 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.ha-064080 san=[127.0.0.1 192.168.39.134 ha-064080 localhost minikube]
	I0617 11:09:48.938016  136825 provision.go:177] copyRemoteCerts
	I0617 11:09:48.938070  136825 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 11:09:48.938092  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:09:48.940751  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:09:48.941153  136825 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:09:48.941180  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:09:48.941376  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:09:48.941577  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:09:48.941758  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:09:48.941938  136825 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:09:49.021447  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0617 11:09:49.021514  136825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 11:09:49.047065  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0617 11:09:49.047145  136825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0617 11:09:49.076594  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0617 11:09:49.076672  136825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0617 11:09:49.102893  136825 provision.go:87] duration metric: took 258.274028ms to configureAuth
	I0617 11:09:49.102919  136825 buildroot.go:189] setting minikube options for container-runtime
	I0617 11:09:49.103127  136825 config.go:182] Loaded profile config "ha-064080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:09:49.103194  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:09:49.105779  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:09:49.106191  136825 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:09:49.106221  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:09:49.106394  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:09:49.106653  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:09:49.106864  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:09:49.107061  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:09:49.107255  136825 main.go:141] libmachine: Using SSH client type: native
	I0617 11:09:49.107425  136825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0617 11:09:49.107440  136825 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 11:11:19.898652  136825 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 11:11:19.898684  136825 machine.go:97] duration metric: took 1m31.392195992s to provisionDockerMachine
	I0617 11:11:19.898696  136825 start.go:293] postStartSetup for "ha-064080" (driver="kvm2")
	I0617 11:11:19.898709  136825 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 11:11:19.898735  136825 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:11:19.899122  136825 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 11:11:19.899162  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:11:19.902350  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:11:19.902763  136825 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:11:19.902790  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:11:19.903012  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:11:19.903192  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:11:19.903362  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:11:19.903501  136825 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:11:19.982868  136825 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 11:11:19.987375  136825 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 11:11:19.987401  136825 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 11:11:19.987502  136825 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 11:11:19.987617  136825 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 11:11:19.987632  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> /etc/ssl/certs/1201742.pem
	I0617 11:11:19.987747  136825 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 11:11:19.996600  136825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 11:11:20.020625  136825 start.go:296] duration metric: took 121.913621ms for postStartSetup
	I0617 11:11:20.020673  136825 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:11:20.020960  136825 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0617 11:11:20.020990  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:11:20.023687  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:11:20.024037  136825 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:11:20.024065  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:11:20.024190  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:11:20.024367  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:11:20.024546  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:11:20.024667  136825 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	W0617 11:11:20.105446  136825 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0617 11:11:20.105477  136825 fix.go:56] duration metric: took 1m31.619083719s for fixHost
	I0617 11:11:20.105497  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:11:20.107862  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:11:20.108257  136825 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:11:20.108282  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:11:20.108458  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:11:20.108626  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:11:20.108808  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:11:20.108937  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:11:20.109138  136825 main.go:141] libmachine: Using SSH client type: native
	I0617 11:11:20.109345  136825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0617 11:11:20.109368  136825 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0617 11:11:20.208456  136825 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718622680.169217752
	
	I0617 11:11:20.208478  136825 fix.go:216] guest clock: 1718622680.169217752
	I0617 11:11:20.208487  136825 fix.go:229] Guest: 2024-06-17 11:11:20.169217752 +0000 UTC Remote: 2024-06-17 11:11:20.10548393 +0000 UTC m=+91.742439711 (delta=63.733822ms)
	I0617 11:11:20.208513  136825 fix.go:200] guest clock delta is within tolerance: 63.733822ms
	I0617 11:11:20.208519  136825 start.go:83] releasing machines lock for "ha-064080", held for 1m31.722142757s
	I0617 11:11:20.208544  136825 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:11:20.208788  136825 main.go:141] libmachine: (ha-064080) Calling .GetIP
	I0617 11:11:20.211255  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:11:20.211662  136825 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:11:20.211690  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:11:20.211823  136825 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:11:20.212481  136825 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:11:20.212678  136825 main.go:141] libmachine: (ha-064080) Calling .DriverName
	I0617 11:11:20.212781  136825 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 11:11:20.212839  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:11:20.212889  136825 ssh_runner.go:195] Run: cat /version.json
	I0617 11:11:20.212910  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHHostname
	I0617 11:11:20.215256  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:11:20.215504  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:11:20.215695  136825 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:11:20.215716  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:11:20.215863  136825 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:11:20.215889  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:11:20.215895  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:11:20.216060  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHPort
	I0617 11:11:20.216080  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:11:20.216233  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHKeyPath
	I0617 11:11:20.216265  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:11:20.216378  136825 main.go:141] libmachine: (ha-064080) Calling .GetSSHUsername
	I0617 11:11:20.216371  136825 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:11:20.216501  136825 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/ha-064080/id_rsa Username:docker}
	I0617 11:11:20.319560  136825 ssh_runner.go:195] Run: systemctl --version
	I0617 11:11:20.325852  136825 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 11:11:20.505163  136825 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 11:11:20.511134  136825 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 11:11:20.511188  136825 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 11:11:20.520615  136825 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0617 11:11:20.520633  136825 start.go:494] detecting cgroup driver to use...
	I0617 11:11:20.520681  136825 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 11:11:20.537682  136825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 11:11:20.551057  136825 docker.go:217] disabling cri-docker service (if available) ...
	I0617 11:11:20.551115  136825 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 11:11:20.564293  136825 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 11:11:20.577281  136825 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 11:11:20.726978  136825 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 11:11:20.876573  136825 docker.go:233] disabling docker service ...
	I0617 11:11:20.876636  136825 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 11:11:20.895391  136825 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 11:11:20.909474  136825 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 11:11:21.070275  136825 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 11:11:21.250399  136825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 11:11:21.264267  136825 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 11:11:21.282202  136825 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0617 11:11:21.282264  136825 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:11:21.292901  136825 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 11:11:21.292949  136825 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:11:21.303384  136825 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:11:21.313602  136825 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:11:21.323967  136825 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 11:11:21.334740  136825 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:11:21.345022  136825 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:11:21.355648  136825 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:11:21.365991  136825 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 11:11:21.375675  136825 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 11:11:21.385536  136825 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:11:21.528128  136825 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0617 11:11:23.210110  136825 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.681937282s)
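The block above rewrites the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl) and then restarts the service. A minimal shell sketch that reproduces the same edits by hand, assuming root shell access to the node and the drop-in path shown in the log:

CONF=/etc/crio/crio.conf.d/02-crio.conf
# pin the pause image expected by this Kubernetes version
sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
# match the kubelet cgroup driver and run conmon inside the pod cgroup
sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
# let pods bind privileged ports (<1024)
sudo grep -q '^ *default_sysctls' "$CONF" || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
# enable forwarding and pick up the new configuration
sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
sudo systemctl daemon-reload && sudo systemctl restart crio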
	I0617 11:11:23.210142  136825 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 11:11:23.210186  136825 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 11:11:23.215553  136825 start.go:562] Will wait 60s for crictl version
	I0617 11:11:23.215608  136825 ssh_runner.go:195] Run: which crictl
	I0617 11:11:23.219705  136825 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 11:11:23.264419  136825 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 11:11:23.264503  136825 ssh_runner.go:195] Run: crio --version
	I0617 11:11:23.293111  136825 ssh_runner.go:195] Run: crio --version
	I0617 11:11:23.323271  136825 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0617 11:11:23.324552  136825 main.go:141] libmachine: (ha-064080) Calling .GetIP
	I0617 11:11:23.326833  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:11:23.327178  136825 main.go:141] libmachine: (ha-064080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:48:a9", ip: ""} in network mk-ha-064080: {Iface:virbr1 ExpiryTime:2024-06-17 12:00:06 +0000 UTC Type:0 Mac:52:54:00:bd:48:a9 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-064080 Clientid:01:52:54:00:bd:48:a9}
	I0617 11:11:23.327203  136825 main.go:141] libmachine: (ha-064080) DBG | domain ha-064080 has defined IP address 192.168.39.134 and MAC address 52:54:00:bd:48:a9 in network mk-ha-064080
	I0617 11:11:23.327410  136825 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0617 11:11:23.332149  136825 kubeadm.go:877] updating cluster {Name:ha-064080 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-064080 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.167 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fr
eshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 11:11:23.332289  136825 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 11:11:23.332325  136825 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 11:11:23.380072  136825 crio.go:514] all images are preloaded for cri-o runtime.
	I0617 11:11:23.380093  136825 crio.go:433] Images already preloaded, skipping extraction
	I0617 11:11:23.380138  136825 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 11:11:23.414747  136825 crio.go:514] all images are preloaded for cri-o runtime.
	I0617 11:11:23.414770  136825 cache_images.go:84] Images are preloaded, skipping loading
	I0617 11:11:23.414778  136825 kubeadm.go:928] updating node { 192.168.39.134 8443 v1.30.1 crio true true} ...
	I0617 11:11:23.414913  136825 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-064080 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-064080 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
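The kubelet command line above is delivered as a systemd drop-in (scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below); the empty ExecStart= line is the usual systemd idiom for clearing the unit's original command before overriding it. A hedged sketch of writing an equivalent drop-in by hand, with the flags taken verbatim from the log:

sudo mkdir -p /etc/systemd/system/kubelet.service.d
sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-064080 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.134

[Install]
EOF
sudo systemctl daemon-reload && sudo systemctl start kubelet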
	I0617 11:11:23.414977  136825 ssh_runner.go:195] Run: crio config
	I0617 11:11:23.463011  136825 cni.go:84] Creating CNI manager for ""
	I0617 11:11:23.463033  136825 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0617 11:11:23.463044  136825 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 11:11:23.463078  136825 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.134 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-064080 NodeName:ha-064080 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.134"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0617 11:11:23.463243  136825 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.134
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-064080"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.134
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.134"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
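The kubeadm config printed above is staged as /var/tmp/minikube/kubeadm.yaml.new further down. For an already-running cluster, kubeadm records its ClusterConfiguration in the kube-system/kubeadm-config ConfigMap, so one way to compare the regenerated file against the live settings (assuming a working admin kubeconfig; the Init/Kubelet/KubeProxy sections will naturally show up as differences because they are not stored in that ConfigMap) is:

kubectl --kubeconfig /etc/kubernetes/admin.conf -n kube-system \
  get configmap kubeadm-config -o jsonpath='{.data.ClusterConfiguration}' \
  > /tmp/live-cluster-config.yaml
diff -u /tmp/live-cluster-config.yaml /var/tmp/minikube/kubeadm.yaml.new || true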
	
	I0617 11:11:23.463277  136825 kube-vip.go:115] generating kube-vip config ...
	I0617 11:11:23.463327  136825 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0617 11:11:23.475045  136825 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0617 11:11:23.475167  136825 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
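The manifest above runs kube-vip as a static pod on each control plane: it holds the HA VIP 192.168.39.254 on eth0, load-balances the API server on port 8443, and does leader election through the plndr-cp-lock lease. A few hedged spot checks, run on a control-plane node with names taken from the manifest:

# the current leader should have the VIP bound on eth0
ip -4 addr show dev eth0 | grep 192.168.39.254
# the API server should answer through the VIP (self-signed certificate, hence -k)
curl -sk https://192.168.39.254:8443/version
# leader-election state lives in the kube-vip lease
kubectl -n kube-system get lease plndr-cp-lock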
	I0617 11:11:23.475230  136825 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0617 11:11:23.484739  136825 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 11:11:23.484793  136825 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0617 11:11:23.494187  136825 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0617 11:11:23.510953  136825 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 11:11:23.526999  136825 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0617 11:11:23.542733  136825 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0617 11:11:23.560265  136825 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0617 11:11:23.564017  136825 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:11:23.705555  136825 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 11:11:23.719783  136825 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080 for IP: 192.168.39.134
	I0617 11:11:23.719803  136825 certs.go:194] generating shared ca certs ...
	I0617 11:11:23.719818  136825 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:11:23.719971  136825 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 11:11:23.720015  136825 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 11:11:23.720026  136825 certs.go:256] generating profile certs ...
	I0617 11:11:23.720103  136825 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/client.key
	I0617 11:11:23.720130  136825 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key.3d13451a
	I0617 11:11:23.720142  136825 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt.3d13451a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.134 192.168.39.104 192.168.39.168 192.168.39.254]
	I0617 11:11:23.926959  136825 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt.3d13451a ...
	I0617 11:11:23.927002  136825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt.3d13451a: {Name:mkf23db52fa0c37b45a16435638efd3e756c2a96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:11:23.927191  136825 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key.3d13451a ...
	I0617 11:11:23.927210  136825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key.3d13451a: {Name:mkf5edeefbcc11f117bfa6526f88a192808900e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:11:23.927301  136825 certs.go:381] copying /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt.3d13451a -> /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt
	I0617 11:11:23.927477  136825 certs.go:385] copying /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key.3d13451a -> /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key
	I0617 11:11:23.927629  136825 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.key
	I0617 11:11:23.927653  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0617 11:11:23.927673  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0617 11:11:23.927687  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0617 11:11:23.927700  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0617 11:11:23.927710  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0617 11:11:23.927722  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0617 11:11:23.927739  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0617 11:11:23.927757  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0617 11:11:23.927825  136825 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 11:11:23.927889  136825 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 11:11:23.927906  136825 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 11:11:23.927936  136825 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 11:11:23.927971  136825 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 11:11:23.928002  136825 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 11:11:23.928254  136825 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 11:11:23.928324  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> /usr/share/ca-certificates/1201742.pem
	I0617 11:11:23.928348  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:11:23.928362  136825 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem -> /usr/share/ca-certificates/120174.pem
	I0617 11:11:23.928936  136825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 11:11:23.954131  136825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 11:11:23.976866  136825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 11:11:23.999265  136825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 11:11:24.023308  136825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0617 11:11:24.047549  136825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 11:11:24.071380  136825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 11:11:24.094788  136825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/ha-064080/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0617 11:11:24.118243  136825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 11:11:24.140799  136825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 11:11:24.165257  136825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 11:11:24.188154  136825 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 11:11:24.204889  136825 ssh_runner.go:195] Run: openssl version
	I0617 11:11:24.210760  136825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 11:11:24.221994  136825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 11:11:24.226407  136825 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 11:11:24.226454  136825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 11:11:24.232080  136825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 11:11:24.241593  136825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 11:11:24.252466  136825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:11:24.256853  136825 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:11:24.256913  136825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:11:24.262416  136825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 11:11:24.271844  136825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 11:11:24.282304  136825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 11:11:24.286591  136825 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 11:11:24.286628  136825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 11:11:24.292288  136825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
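The ls/openssl/ln sequence above follows the standard OpenSSL trust-store layout: each CA certificate under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash with a .0 suffix (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the two user certs). The same loop written out as a sketch, with the paths taken from the log:

for crt in /usr/share/ca-certificates/minikubeCA.pem \
           /usr/share/ca-certificates/120174.pem \
           /usr/share/ca-certificates/1201742.pem; do
  hash=$(openssl x509 -hash -noout -in "$crt")
  sudo ln -fs "$crt" "/etc/ssl/certs/${hash}.0"
done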
	I0617 11:11:24.302081  136825 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 11:11:24.306822  136825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 11:11:24.312261  136825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 11:11:24.317671  136825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 11:11:24.323215  136825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 11:11:24.328591  136825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 11:11:24.333854  136825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
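The openssl runs above verify that none of the control-plane certificates expire within the next 24 hours (-checkend 86400 seconds). The same check as a single loop over the paths shown in the log:

for crt in apiserver-etcd-client.crt apiserver-kubelet-client.crt \
           etcd/server.crt etcd/healthcheck-client.crt etcd/peer.crt \
           front-proxy-client.crt; do
  sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/$crt" \
    || echo "EXPIRING WITHIN 24h: $crt"
done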
	I0617 11:11:24.339230  136825 kubeadm.go:391] StartCluster: {Name:ha-064080 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-064080 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.167 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:11:24.339343  136825 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 11:11:24.339405  136825 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 11:11:24.386312  136825 cri.go:89] found id: "13dfc1d97da1ebe900004edc6f66944d67700c68bd776eb13ec1978e93be17c2"
	I0617 11:11:24.386341  136825 cri.go:89] found id: "90b32e823ebff65269c551766dabf0cfadd610d5a60174b2cf6d05f71a5c3178"
	I0617 11:11:24.386347  136825 cri.go:89] found id: "657d9008773f965813c77834ba72f323f34991f4cc2084fd22cb4542a2b16b8c"
	I0617 11:11:24.386351  136825 cri.go:89] found id: "c160ee1bc36a7d933b526f7ada2eb852e6f7e39ca8b4a45842d978857dcabe69"
	I0617 11:11:24.386354  136825 cri.go:89] found id: "c3628888540ea5d9ce507b92a3b2e929cf72c29f17271ad882b6d18ce4cf6328"
	I0617 11:11:24.386359  136825 cri.go:89] found id: "10061c1b3dd4f2865f83bf729b221fef3435324d6cef9ceb1a6631e0ccefa31c"
	I0617 11:11:24.386363  136825 cri.go:89] found id: "bb9fa67df5a3f15517f0cc5493139c9ec692bbadbef748f1315698a8ae05601f"
	I0617 11:11:24.386367  136825 cri.go:89] found id: "8852bc2fd7b618e61e270006b27e8557aaf8230a9278a60245e25a23732a83eb"
	I0617 11:11:24.386371  136825 cri.go:89] found id: "24495c319c5c94afe6d0b59a3e9bc367b4539472e5846002db4fc1b802fac288"
	I0617 11:11:24.386380  136825 cri.go:89] found id: "ddf5516bbfc1d7ca0c4a0ebc2026888f4c7754891f8a6cfa30b49ea80c4c6a1b"
	I0617 11:11:24.386384  136825 cri.go:89] found id: "be01152b9ab18f70b88322e4262f33d332dd8aa951d6262c8ac130261de6479d"
	I0617 11:11:24.386389  136825 cri.go:89] found id: "ecbb08a618aa76655e33c89e573535ed17f386cc522fcc35722eeb4ad859a1ad"
	I0617 11:11:24.386396  136825 cri.go:89] found id: "60cc5a9cf66217b34591b28809211824808cb7da50dd0c7971be5bd514e3b328"
	I0617 11:11:24.386400  136825 cri.go:89] found id: ""
	I0617 11:11:24.386452  136825 ssh_runner.go:195] Run: sudo runc list -f json
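StartCluster begins by listing every kube-system container known to the CRI, as shown above. To reproduce that listing on the node and drill into one of the returned IDs (the ID below is copied from the log; substitute any other):

sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
sudo crictl inspect 13dfc1d97da1ebe900004edc6f66944d67700c68bd776eb13ec1978e93be17c2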
	
	
	==> CRI-O <==
	Jun 17 11:16:25 ha-064080 crio[3784]: time="2024-06-17 11:16:25.384954728Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718622985384931710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b6bac497-f1fe-4025-951b-cd899cf8eabf name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:16:25 ha-064080 crio[3784]: time="2024-06-17 11:16:25.385567396Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=858ace28-3179-4990-9d7c-a596658d1699 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:16:25 ha-064080 crio[3784]: time="2024-06-17 11:16:25.385646683Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=858ace28-3179-4990-9d7c-a596658d1699 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:16:25 ha-064080 crio[3784]: time="2024-06-17 11:16:25.386224402Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e968a7b99037fcd74cf96493f10b9e4b77571018045daa12bfa9faff24036da,PodSandboxId:c45cf10a39aca992e1f5aa28059659a69562166e059624924f451c30bc5f471d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718622771112810041,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-48mb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67422049-6637-4ca3-8bd1-2b47a265829d,},Annotations:map[string]string{io.kubernetes.container.hash: 6d02cd67,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f0bf1df40b97298fbc6f99a56b7f3d186bd75d4b0e97bffa9597b8c140f0fd,PodSandboxId:b48fa28479a6b2939fe045cf9861144e401584f195777c0c07873597a11f30f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718622768117408502,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5646fca8-9ebc-47c1-b5ff-c87b0ed800d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75be2958,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea168e43c8f58b627333f8db1fcab727151d0170538dd365a0ff2c14a670bc63,PodSandboxId:50c5f620e07d97bc6144ac71edf1a67807c6842ce54f118f971940733bc57c79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718622732111313128,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21807c08d0f93f57866ad62dca0e176d,},Annotations:map[string]string{io.kubernetes.container.hash: 8e9320c4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cca27b47119ee9b81f6755dc162135ff2de0238a503b8d7d8cd565cc8ddcaa9,PodSandboxId:882a4867b4a9f2d5466eb06baf2539f28edbdaedfada0afe6ff83a0002c0b4a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718622730116363577,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a91621493b7895ffb468d74d39c887,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1de1cbf3c4abe51b334ea608a299a78e7432c29baa71f527ba9b0e80bc238e68,PodSandboxId:052b729b7698a17e4b1d8bc05ee4c1ad4bbaa5ecb7a38010e8567c72d58bd82b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718622723419643562,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-89r9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1a8712a-2ef7-4400-98c9-5cee97c0d721,},Annotations:map[string]string{io.kubernetes.container.hash: 85c5faa6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5831ea6ee0c390e7ce915655860ab50d35ab3dd5fecf6fafbe17b03a4020ba0a,PodSandboxId:b48fa28479a6b2939fe045cf9861144e401584f195777c0c07873597a11f30f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718622722115437741,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5646fca8-9ebc-47c1-b5ff-c87b0ed800d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75be2958,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a7f74758193741ac9405c51c090135a9aeeeaf838bb9952389a636257a739b1,PodSandboxId:ff17fb7580a6762426d9bec4e02efcd5b13bcef21bdb6fe8667300f333069ae3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718622705731418482,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 329ab0752894b263f1a7ed6bf158ec63,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:acee1942b213b3a339a1c92af2a70571f5c7f4b96158320c3bb8f3f74d86a0b2,PodSandboxId:e2f59a6f0a7e947778f7ada7cd976150f38cf96e757e387f28c1c17b68a66e6d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718622690616082861,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dd48x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1bd1d47-a8a5-47a5-820c-dd86f7ea7765,},Annotations:map[string]string{io.kubernetes.container.hash: 8b6be506,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af9cf34
4f6b524475b47fa29673012301a355ef88398883d01606aee8cc859c,PodSandboxId:c45cf10a39aca992e1f5aa28059659a69562166e059624924f451c30bc5f471d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718622690194438452,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-48mb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67422049-6637-4ca3-8bd1-2b47a265829d,},Annotations:map[string]string{io.kubernetes.container.hash: 6d02cd67,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35caf65c401c62e881ba25e31f1b5557a5c63db1d4d
4b79fb6d39ac686f2f793,PodSandboxId:cdaaced76b5679b4562f428a51ab37be2ca4a12572247e3130f0014d63ea3d28,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718622690204612897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xbhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be37a6ec-2a49-4a56-b8a3-0da865edb05d,},Annotations:map[string]string{io.kubernetes.container.hash: caa2bf79,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d14481314a9356f5bb099d6096ca03ef8ec9cb15637652261b4359c32f1cbceb,PodSandboxId:653510d585e1e22c3324c23f750efa6a5723329d64904d9d5d69af3a21d78ceb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718622690245696256,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zv99k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2453fd4-894d-4212-bc48-1803e28ddba8,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9e113a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88dbcac1da73105615cd555b19ec3b51e43dc6fd5ee233f83d19dcaa41a1b5ee,PodSandboxId:c26f503c4b22bb1c768452d6f133e61204cc91bd4832a832e736bf582e184777,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718622690027763794,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99603
afdeee0e2b8645e4cb7c5a1ed41,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9f362fab2deb3901ab9bb43f8da39d89a6b6ff1f7413040ba94079dba2f359,PodSandboxId:ed543d06a893e980fd5b345a82719e29c73e8f4fad280b46dd7e7ada6719a6dd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718622690078883387,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ca5c8841cd25b2122df7e1cad8d883e,},Annotations:map[strin
g]string{io.kubernetes.container.hash: a022c9c1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a32b0b77a472f149702c6af5025c8bce824feadd95de75493b9a7c7da94010a,PodSandboxId:882a4867b4a9f2d5466eb06baf2539f28edbdaedfada0afe6ff83a0002c0b4a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718622689888922989,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a91621493b7895ffb468d74d39c887,},Annotations:map[
string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e9062f80f59bb01cd3d133ee66a6cf66b83b310d47589d9e9eeb07982548f74,PodSandboxId:50c5f620e07d97bc6144ac71edf1a67807c6842ce54f118f971940733bc57c79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718622689922770068,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21807c08d0f93f57866ad62dca0e176d,},Annotations:map[string]string{io.kuber
netes.container.hash: 8e9320c4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a562b9195d78591133b90abc121faa5dbf34feac5066f4f821669a5b8c27e85,PodSandboxId:32924073f320b5367b28757d06fe232b7af64ccf6539c044b32541c03c8b9cc7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718622197449787025,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-89r9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1a8712a-2ef7-4400-98c9-5cee97c0d721,},Annotations:map[string]string{io.kuberne
tes.container.hash: 85c5faa6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3628888540ea5d9ce507b92a3b2e929cf72c29f17271ad882b6d18ce4cf6328,PodSandboxId:20be829b9ffef66a57eb936abd30f0a0daa6277806fc399919edde5c9193aa94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718622049380205795,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xbhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be37a6ec-2a49-4a56-b8a3-0da865edb05d,},Annotations:map[string]string{io.kubernetes.container.hash: caa2bf79,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10061c1b3dd4f2865f83bf729b221fef3435324d6cef9ceb1a6631e0ccefa31c,PodSandboxId:54a9c95a1ef70b178265a9c78e9dbcddfb9f8cb7ddc312e0e324a4f449b6ebc9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718622049373025796,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-zv99k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2453fd4-894d-4212-bc48-1803e28ddba8,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9e113a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8852bc2fd7b618e61e270006b27e8557aaf8230a9278a60245e25a23732a83eb,PodSandboxId:78661140f722ccccbbef01859ed0a403a118690cd55dd92f4d2cf08d1c03af3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718622045688275770,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dd48x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1bd1d47-a8a5-47a5-820c-dd86f7ea7765,},Annotations:map[string]string{io.kubernetes.container.hash: 8b6be506,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecbb08a618aa76655e33c89e573535ed17f386cc522fcc35722eeb4ad859a1ad,PodSandboxId:7293d250b3e0dd840434d7afd153d17ac7842ec4f356edd9bac3f40f6603de1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1718622025701449939,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ca5c8841cd25b2122df7e1cad8d883e,},Annotations:map[string]string{io.kubernetes.container.hash: a022c9c1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60cc5a9cf66217b34591b28809211824808cb7da50dd0c7971be5bd514e3b328,PodSandboxId:cb4974ce47c357bdbcfd6dd322289bd64cf2cbb3c4a7ad3e2ee523444ebfc04e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1718622025592715823,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99603afdeee0e2b8645e4cb7c5a1ed41,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=858ace28-3179-4990-9d7c-a596658d1699 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:16:25 ha-064080 crio[3784]: time="2024-06-17 11:16:25.428128948Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=74d3ab60-c3dd-438a-bdb3-20e780955750 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:16:25 ha-064080 crio[3784]: time="2024-06-17 11:16:25.428267008Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=74d3ab60-c3dd-438a-bdb3-20e780955750 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:16:25 ha-064080 crio[3784]: time="2024-06-17 11:16:25.429918132Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=183d23cf-10c1-4933-825b-9e74c4038fe5 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:16:25 ha-064080 crio[3784]: time="2024-06-17 11:16:25.430353240Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718622985430330334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=183d23cf-10c1-4933-825b-9e74c4038fe5 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:16:25 ha-064080 crio[3784]: time="2024-06-17 11:16:25.430994718Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3c75fe4e-90c5-4fc5-b61a-971ef76bb5af name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:16:25 ha-064080 crio[3784]: time="2024-06-17 11:16:25.431070797Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3c75fe4e-90c5-4fc5-b61a-971ef76bb5af name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:16:25 ha-064080 crio[3784]: time="2024-06-17 11:16:25.431455939Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e968a7b99037fcd74cf96493f10b9e4b77571018045daa12bfa9faff24036da,PodSandboxId:c45cf10a39aca992e1f5aa28059659a69562166e059624924f451c30bc5f471d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718622771112810041,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-48mb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67422049-6637-4ca3-8bd1-2b47a265829d,},Annotations:map[string]string{io.kubernetes.container.hash: 6d02cd67,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f0bf1df40b97298fbc6f99a56b7f3d186bd75d4b0e97bffa9597b8c140f0fd,PodSandboxId:b48fa28479a6b2939fe045cf9861144e401584f195777c0c07873597a11f30f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718622768117408502,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5646fca8-9ebc-47c1-b5ff-c87b0ed800d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75be2958,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea168e43c8f58b627333f8db1fcab727151d0170538dd365a0ff2c14a670bc63,PodSandboxId:50c5f620e07d97bc6144ac71edf1a67807c6842ce54f118f971940733bc57c79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718622732111313128,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21807c08d0f93f57866ad62dca0e176d,},Annotations:map[string]string{io.kubernetes.container.hash: 8e9320c4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cca27b47119ee9b81f6755dc162135ff2de0238a503b8d7d8cd565cc8ddcaa9,PodSandboxId:882a4867b4a9f2d5466eb06baf2539f28edbdaedfada0afe6ff83a0002c0b4a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718622730116363577,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a91621493b7895ffb468d74d39c887,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1de1cbf3c4abe51b334ea608a299a78e7432c29baa71f527ba9b0e80bc238e68,PodSandboxId:052b729b7698a17e4b1d8bc05ee4c1ad4bbaa5ecb7a38010e8567c72d58bd82b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718622723419643562,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-89r9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1a8712a-2ef7-4400-98c9-5cee97c0d721,},Annotations:map[string]string{io.kubernetes.container.hash: 85c5faa6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5831ea6ee0c390e7ce915655860ab50d35ab3dd5fecf6fafbe17b03a4020ba0a,PodSandboxId:b48fa28479a6b2939fe045cf9861144e401584f195777c0c07873597a11f30f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718622722115437741,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5646fca8-9ebc-47c1-b5ff-c87b0ed800d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75be2958,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a7f74758193741ac9405c51c090135a9aeeeaf838bb9952389a636257a739b1,PodSandboxId:ff17fb7580a6762426d9bec4e02efcd5b13bcef21bdb6fe8667300f333069ae3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718622705731418482,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 329ab0752894b263f1a7ed6bf158ec63,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:acee1942b213b3a339a1c92af2a70571f5c7f4b96158320c3bb8f3f74d86a0b2,PodSandboxId:e2f59a6f0a7e947778f7ada7cd976150f38cf96e757e387f28c1c17b68a66e6d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718622690616082861,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dd48x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1bd1d47-a8a5-47a5-820c-dd86f7ea7765,},Annotations:map[string]string{io.kubernetes.container.hash: 8b6be506,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af9cf34
4f6b524475b47fa29673012301a355ef88398883d01606aee8cc859c,PodSandboxId:c45cf10a39aca992e1f5aa28059659a69562166e059624924f451c30bc5f471d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718622690194438452,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-48mb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67422049-6637-4ca3-8bd1-2b47a265829d,},Annotations:map[string]string{io.kubernetes.container.hash: 6d02cd67,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35caf65c401c62e881ba25e31f1b5557a5c63db1d4d
4b79fb6d39ac686f2f793,PodSandboxId:cdaaced76b5679b4562f428a51ab37be2ca4a12572247e3130f0014d63ea3d28,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718622690204612897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xbhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be37a6ec-2a49-4a56-b8a3-0da865edb05d,},Annotations:map[string]string{io.kubernetes.container.hash: caa2bf79,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d14481314a9356f5bb099d6096ca03ef8ec9cb15637652261b4359c32f1cbceb,PodSandboxId:653510d585e1e22c3324c23f750efa6a5723329d64904d9d5d69af3a21d78ceb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718622690245696256,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zv99k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2453fd4-894d-4212-bc48-1803e28ddba8,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9e113a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88dbcac1da73105615cd555b19ec3b51e43dc6fd5ee233f83d19dcaa41a1b5ee,PodSandboxId:c26f503c4b22bb1c768452d6f133e61204cc91bd4832a832e736bf582e184777,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718622690027763794,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99603
afdeee0e2b8645e4cb7c5a1ed41,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9f362fab2deb3901ab9bb43f8da39d89a6b6ff1f7413040ba94079dba2f359,PodSandboxId:ed543d06a893e980fd5b345a82719e29c73e8f4fad280b46dd7e7ada6719a6dd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718622690078883387,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ca5c8841cd25b2122df7e1cad8d883e,},Annotations:map[strin
g]string{io.kubernetes.container.hash: a022c9c1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a32b0b77a472f149702c6af5025c8bce824feadd95de75493b9a7c7da94010a,PodSandboxId:882a4867b4a9f2d5466eb06baf2539f28edbdaedfada0afe6ff83a0002c0b4a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718622689888922989,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a91621493b7895ffb468d74d39c887,},Annotations:map[
string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e9062f80f59bb01cd3d133ee66a6cf66b83b310d47589d9e9eeb07982548f74,PodSandboxId:50c5f620e07d97bc6144ac71edf1a67807c6842ce54f118f971940733bc57c79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718622689922770068,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21807c08d0f93f57866ad62dca0e176d,},Annotations:map[string]string{io.kuber
netes.container.hash: 8e9320c4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a562b9195d78591133b90abc121faa5dbf34feac5066f4f821669a5b8c27e85,PodSandboxId:32924073f320b5367b28757d06fe232b7af64ccf6539c044b32541c03c8b9cc7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718622197449787025,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-89r9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1a8712a-2ef7-4400-98c9-5cee97c0d721,},Annotations:map[string]string{io.kuberne
tes.container.hash: 85c5faa6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3628888540ea5d9ce507b92a3b2e929cf72c29f17271ad882b6d18ce4cf6328,PodSandboxId:20be829b9ffef66a57eb936abd30f0a0daa6277806fc399919edde5c9193aa94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718622049380205795,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xbhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be37a6ec-2a49-4a56-b8a3-0da865edb05d,},Annotations:map[string]string{io.kubernetes.container.hash: caa2bf79,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10061c1b3dd4f2865f83bf729b221fef3435324d6cef9ceb1a6631e0ccefa31c,PodSandboxId:54a9c95a1ef70b178265a9c78e9dbcddfb9f8cb7ddc312e0e324a4f449b6ebc9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718622049373025796,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-zv99k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2453fd4-894d-4212-bc48-1803e28ddba8,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9e113a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8852bc2fd7b618e61e270006b27e8557aaf8230a9278a60245e25a23732a83eb,PodSandboxId:78661140f722ccccbbef01859ed0a403a118690cd55dd92f4d2cf08d1c03af3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718622045688275770,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dd48x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1bd1d47-a8a5-47a5-820c-dd86f7ea7765,},Annotations:map[string]string{io.kubernetes.container.hash: 8b6be506,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecbb08a618aa76655e33c89e573535ed17f386cc522fcc35722eeb4ad859a1ad,PodSandboxId:7293d250b3e0dd840434d7afd153d17ac7842ec4f356edd9bac3f40f6603de1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1718622025701449939,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ca5c8841cd25b2122df7e1cad8d883e,},Annotations:map[string]string{io.kubernetes.container.hash: a022c9c1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60cc5a9cf66217b34591b28809211824808cb7da50dd0c7971be5bd514e3b328,PodSandboxId:cb4974ce47c357bdbcfd6dd322289bd64cf2cbb3c4a7ad3e2ee523444ebfc04e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1718622025592715823,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99603afdeee0e2b8645e4cb7c5a1ed41,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3c75fe4e-90c5-4fc5-b61a-971ef76bb5af name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:16:25 ha-064080 crio[3784]: time="2024-06-17 11:16:25.489644559Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a0f73e26-960b-4e48-b87c-f645b5676550 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:16:25 ha-064080 crio[3784]: time="2024-06-17 11:16:25.490152388Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a0f73e26-960b-4e48-b87c-f645b5676550 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:16:25 ha-064080 crio[3784]: time="2024-06-17 11:16:25.493048940Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5a75f9be-fa4d-493b-a81e-5e2a83d3d15c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:16:25 ha-064080 crio[3784]: time="2024-06-17 11:16:25.493536327Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718622985493509529,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5a75f9be-fa4d-493b-a81e-5e2a83d3d15c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:16:25 ha-064080 crio[3784]: time="2024-06-17 11:16:25.494826775Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5235e4fa-14ca-4f73-8394-52d48fd18e93 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:16:25 ha-064080 crio[3784]: time="2024-06-17 11:16:25.494989770Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5235e4fa-14ca-4f73-8394-52d48fd18e93 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:16:25 ha-064080 crio[3784]: time="2024-06-17 11:16:25.495591094Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e968a7b99037fcd74cf96493f10b9e4b77571018045daa12bfa9faff24036da,PodSandboxId:c45cf10a39aca992e1f5aa28059659a69562166e059624924f451c30bc5f471d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718622771112810041,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-48mb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67422049-6637-4ca3-8bd1-2b47a265829d,},Annotations:map[string]string{io.kubernetes.container.hash: 6d02cd67,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f0bf1df40b97298fbc6f99a56b7f3d186bd75d4b0e97bffa9597b8c140f0fd,PodSandboxId:b48fa28479a6b2939fe045cf9861144e401584f195777c0c07873597a11f30f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718622768117408502,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5646fca8-9ebc-47c1-b5ff-c87b0ed800d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75be2958,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea168e43c8f58b627333f8db1fcab727151d0170538dd365a0ff2c14a670bc63,PodSandboxId:50c5f620e07d97bc6144ac71edf1a67807c6842ce54f118f971940733bc57c79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718622732111313128,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21807c08d0f93f57866ad62dca0e176d,},Annotations:map[string]string{io.kubernetes.container.hash: 8e9320c4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cca27b47119ee9b81f6755dc162135ff2de0238a503b8d7d8cd565cc8ddcaa9,PodSandboxId:882a4867b4a9f2d5466eb06baf2539f28edbdaedfada0afe6ff83a0002c0b4a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718622730116363577,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a91621493b7895ffb468d74d39c887,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1de1cbf3c4abe51b334ea608a299a78e7432c29baa71f527ba9b0e80bc238e68,PodSandboxId:052b729b7698a17e4b1d8bc05ee4c1ad4bbaa5ecb7a38010e8567c72d58bd82b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718622723419643562,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-89r9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1a8712a-2ef7-4400-98c9-5cee97c0d721,},Annotations:map[string]string{io.kubernetes.container.hash: 85c5faa6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5831ea6ee0c390e7ce915655860ab50d35ab3dd5fecf6fafbe17b03a4020ba0a,PodSandboxId:b48fa28479a6b2939fe045cf9861144e401584f195777c0c07873597a11f30f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718622722115437741,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5646fca8-9ebc-47c1-b5ff-c87b0ed800d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75be2958,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a7f74758193741ac9405c51c090135a9aeeeaf838bb9952389a636257a739b1,PodSandboxId:ff17fb7580a6762426d9bec4e02efcd5b13bcef21bdb6fe8667300f333069ae3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718622705731418482,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 329ab0752894b263f1a7ed6bf158ec63,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:acee1942b213b3a339a1c92af2a70571f5c7f4b96158320c3bb8f3f74d86a0b2,PodSandboxId:e2f59a6f0a7e947778f7ada7cd976150f38cf96e757e387f28c1c17b68a66e6d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718622690616082861,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dd48x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1bd1d47-a8a5-47a5-820c-dd86f7ea7765,},Annotations:map[string]string{io.kubernetes.container.hash: 8b6be506,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af9cf34
4f6b524475b47fa29673012301a355ef88398883d01606aee8cc859c,PodSandboxId:c45cf10a39aca992e1f5aa28059659a69562166e059624924f451c30bc5f471d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718622690194438452,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-48mb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67422049-6637-4ca3-8bd1-2b47a265829d,},Annotations:map[string]string{io.kubernetes.container.hash: 6d02cd67,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35caf65c401c62e881ba25e31f1b5557a5c63db1d4d
4b79fb6d39ac686f2f793,PodSandboxId:cdaaced76b5679b4562f428a51ab37be2ca4a12572247e3130f0014d63ea3d28,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718622690204612897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xbhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be37a6ec-2a49-4a56-b8a3-0da865edb05d,},Annotations:map[string]string{io.kubernetes.container.hash: caa2bf79,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d14481314a9356f5bb099d6096ca03ef8ec9cb15637652261b4359c32f1cbceb,PodSandboxId:653510d585e1e22c3324c23f750efa6a5723329d64904d9d5d69af3a21d78ceb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718622690245696256,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zv99k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2453fd4-894d-4212-bc48-1803e28ddba8,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9e113a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88dbcac1da73105615cd555b19ec3b51e43dc6fd5ee233f83d19dcaa41a1b5ee,PodSandboxId:c26f503c4b22bb1c768452d6f133e61204cc91bd4832a832e736bf582e184777,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718622690027763794,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99603
afdeee0e2b8645e4cb7c5a1ed41,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9f362fab2deb3901ab9bb43f8da39d89a6b6ff1f7413040ba94079dba2f359,PodSandboxId:ed543d06a893e980fd5b345a82719e29c73e8f4fad280b46dd7e7ada6719a6dd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718622690078883387,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ca5c8841cd25b2122df7e1cad8d883e,},Annotations:map[strin
g]string{io.kubernetes.container.hash: a022c9c1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a32b0b77a472f149702c6af5025c8bce824feadd95de75493b9a7c7da94010a,PodSandboxId:882a4867b4a9f2d5466eb06baf2539f28edbdaedfada0afe6ff83a0002c0b4a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718622689888922989,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a91621493b7895ffb468d74d39c887,},Annotations:map[
string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e9062f80f59bb01cd3d133ee66a6cf66b83b310d47589d9e9eeb07982548f74,PodSandboxId:50c5f620e07d97bc6144ac71edf1a67807c6842ce54f118f971940733bc57c79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718622689922770068,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21807c08d0f93f57866ad62dca0e176d,},Annotations:map[string]string{io.kuber
netes.container.hash: 8e9320c4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a562b9195d78591133b90abc121faa5dbf34feac5066f4f821669a5b8c27e85,PodSandboxId:32924073f320b5367b28757d06fe232b7af64ccf6539c044b32541c03c8b9cc7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718622197449787025,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-89r9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1a8712a-2ef7-4400-98c9-5cee97c0d721,},Annotations:map[string]string{io.kuberne
tes.container.hash: 85c5faa6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3628888540ea5d9ce507b92a3b2e929cf72c29f17271ad882b6d18ce4cf6328,PodSandboxId:20be829b9ffef66a57eb936abd30f0a0daa6277806fc399919edde5c9193aa94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718622049380205795,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xbhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be37a6ec-2a49-4a56-b8a3-0da865edb05d,},Annotations:map[string]string{io.kubernetes.container.hash: caa2bf79,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10061c1b3dd4f2865f83bf729b221fef3435324d6cef9ceb1a6631e0ccefa31c,PodSandboxId:54a9c95a1ef70b178265a9c78e9dbcddfb9f8cb7ddc312e0e324a4f449b6ebc9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718622049373025796,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-zv99k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2453fd4-894d-4212-bc48-1803e28ddba8,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9e113a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8852bc2fd7b618e61e270006b27e8557aaf8230a9278a60245e25a23732a83eb,PodSandboxId:78661140f722ccccbbef01859ed0a403a118690cd55dd92f4d2cf08d1c03af3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718622045688275770,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dd48x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1bd1d47-a8a5-47a5-820c-dd86f7ea7765,},Annotations:map[string]string{io.kubernetes.container.hash: 8b6be506,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecbb08a618aa76655e33c89e573535ed17f386cc522fcc35722eeb4ad859a1ad,PodSandboxId:7293d250b3e0dd840434d7afd153d17ac7842ec4f356edd9bac3f40f6603de1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1718622025701449939,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ca5c8841cd25b2122df7e1cad8d883e,},Annotations:map[string]string{io.kubernetes.container.hash: a022c9c1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60cc5a9cf66217b34591b28809211824808cb7da50dd0c7971be5bd514e3b328,PodSandboxId:cb4974ce47c357bdbcfd6dd322289bd64cf2cbb3c4a7ad3e2ee523444ebfc04e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1718622025592715823,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99603afdeee0e2b8645e4cb7c5a1ed41,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5235e4fa-14ca-4f73-8394-52d48fd18e93 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:16:25 ha-064080 crio[3784]: time="2024-06-17 11:16:25.541432654Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=07f2b03f-43e0-4f29-8a46-092feb2df794 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:16:25 ha-064080 crio[3784]: time="2024-06-17 11:16:25.541512800Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=07f2b03f-43e0-4f29-8a46-092feb2df794 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:16:25 ha-064080 crio[3784]: time="2024-06-17 11:16:25.542704350Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d224f21a-a870-4a34-b24c-a3281e1facac name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:16:25 ha-064080 crio[3784]: time="2024-06-17 11:16:25.543217740Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718622985543195960,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d224f21a-a870-4a34-b24c-a3281e1facac name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:16:25 ha-064080 crio[3784]: time="2024-06-17 11:16:25.543832462Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e3b411b1-8052-4ff2-bbac-fb4b20160b27 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:16:25 ha-064080 crio[3784]: time="2024-06-17 11:16:25.543953120Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e3b411b1-8052-4ff2-bbac-fb4b20160b27 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:16:25 ha-064080 crio[3784]: time="2024-06-17 11:16:25.544404147Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e968a7b99037fcd74cf96493f10b9e4b77571018045daa12bfa9faff24036da,PodSandboxId:c45cf10a39aca992e1f5aa28059659a69562166e059624924f451c30bc5f471d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718622771112810041,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-48mb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67422049-6637-4ca3-8bd1-2b47a265829d,},Annotations:map[string]string{io.kubernetes.container.hash: 6d02cd67,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f0bf1df40b97298fbc6f99a56b7f3d186bd75d4b0e97bffa9597b8c140f0fd,PodSandboxId:b48fa28479a6b2939fe045cf9861144e401584f195777c0c07873597a11f30f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718622768117408502,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5646fca8-9ebc-47c1-b5ff-c87b0ed800d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75be2958,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea168e43c8f58b627333f8db1fcab727151d0170538dd365a0ff2c14a670bc63,PodSandboxId:50c5f620e07d97bc6144ac71edf1a67807c6842ce54f118f971940733bc57c79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718622732111313128,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21807c08d0f93f57866ad62dca0e176d,},Annotations:map[string]string{io.kubernetes.container.hash: 8e9320c4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cca27b47119ee9b81f6755dc162135ff2de0238a503b8d7d8cd565cc8ddcaa9,PodSandboxId:882a4867b4a9f2d5466eb06baf2539f28edbdaedfada0afe6ff83a0002c0b4a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718622730116363577,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a91621493b7895ffb468d74d39c887,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1de1cbf3c4abe51b334ea608a299a78e7432c29baa71f527ba9b0e80bc238e68,PodSandboxId:052b729b7698a17e4b1d8bc05ee4c1ad4bbaa5ecb7a38010e8567c72d58bd82b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718622723419643562,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-89r9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1a8712a-2ef7-4400-98c9-5cee97c0d721,},Annotations:map[string]string{io.kubernetes.container.hash: 85c5faa6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5831ea6ee0c390e7ce915655860ab50d35ab3dd5fecf6fafbe17b03a4020ba0a,PodSandboxId:b48fa28479a6b2939fe045cf9861144e401584f195777c0c07873597a11f30f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718622722115437741,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5646fca8-9ebc-47c1-b5ff-c87b0ed800d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75be2958,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a7f74758193741ac9405c51c090135a9aeeeaf838bb9952389a636257a739b1,PodSandboxId:ff17fb7580a6762426d9bec4e02efcd5b13bcef21bdb6fe8667300f333069ae3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718622705731418482,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 329ab0752894b263f1a7ed6bf158ec63,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:acee1942b213b3a339a1c92af2a70571f5c7f4b96158320c3bb8f3f74d86a0b2,PodSandboxId:e2f59a6f0a7e947778f7ada7cd976150f38cf96e757e387f28c1c17b68a66e6d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718622690616082861,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dd48x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1bd1d47-a8a5-47a5-820c-dd86f7ea7765,},Annotations:map[string]string{io.kubernetes.container.hash: 8b6be506,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af9cf34
4f6b524475b47fa29673012301a355ef88398883d01606aee8cc859c,PodSandboxId:c45cf10a39aca992e1f5aa28059659a69562166e059624924f451c30bc5f471d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718622690194438452,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-48mb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67422049-6637-4ca3-8bd1-2b47a265829d,},Annotations:map[string]string{io.kubernetes.container.hash: 6d02cd67,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35caf65c401c62e881ba25e31f1b5557a5c63db1d4d
4b79fb6d39ac686f2f793,PodSandboxId:cdaaced76b5679b4562f428a51ab37be2ca4a12572247e3130f0014d63ea3d28,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718622690204612897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xbhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be37a6ec-2a49-4a56-b8a3-0da865edb05d,},Annotations:map[string]string{io.kubernetes.container.hash: caa2bf79,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d14481314a9356f5bb099d6096ca03ef8ec9cb15637652261b4359c32f1cbceb,PodSandboxId:653510d585e1e22c3324c23f750efa6a5723329d64904d9d5d69af3a21d78ceb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718622690245696256,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zv99k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2453fd4-894d-4212-bc48-1803e28ddba8,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9e113a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88dbcac1da73105615cd555b19ec3b51e43dc6fd5ee233f83d19dcaa41a1b5ee,PodSandboxId:c26f503c4b22bb1c768452d6f133e61204cc91bd4832a832e736bf582e184777,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718622690027763794,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99603
afdeee0e2b8645e4cb7c5a1ed41,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9f362fab2deb3901ab9bb43f8da39d89a6b6ff1f7413040ba94079dba2f359,PodSandboxId:ed543d06a893e980fd5b345a82719e29c73e8f4fad280b46dd7e7ada6719a6dd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718622690078883387,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ca5c8841cd25b2122df7e1cad8d883e,},Annotations:map[strin
g]string{io.kubernetes.container.hash: a022c9c1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a32b0b77a472f149702c6af5025c8bce824feadd95de75493b9a7c7da94010a,PodSandboxId:882a4867b4a9f2d5466eb06baf2539f28edbdaedfada0afe6ff83a0002c0b4a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718622689888922989,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a91621493b7895ffb468d74d39c887,},Annotations:map[
string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e9062f80f59bb01cd3d133ee66a6cf66b83b310d47589d9e9eeb07982548f74,PodSandboxId:50c5f620e07d97bc6144ac71edf1a67807c6842ce54f118f971940733bc57c79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718622689922770068,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21807c08d0f93f57866ad62dca0e176d,},Annotations:map[string]string{io.kuber
netes.container.hash: 8e9320c4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a562b9195d78591133b90abc121faa5dbf34feac5066f4f821669a5b8c27e85,PodSandboxId:32924073f320b5367b28757d06fe232b7af64ccf6539c044b32541c03c8b9cc7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718622197449787025,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-89r9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1a8712a-2ef7-4400-98c9-5cee97c0d721,},Annotations:map[string]string{io.kuberne
tes.container.hash: 85c5faa6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3628888540ea5d9ce507b92a3b2e929cf72c29f17271ad882b6d18ce4cf6328,PodSandboxId:20be829b9ffef66a57eb936abd30f0a0daa6277806fc399919edde5c9193aa94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718622049380205795,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xbhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be37a6ec-2a49-4a56-b8a3-0da865edb05d,},Annotations:map[string]string{io.kubernetes.container.hash: caa2bf79,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10061c1b3dd4f2865f83bf729b221fef3435324d6cef9ceb1a6631e0ccefa31c,PodSandboxId:54a9c95a1ef70b178265a9c78e9dbcddfb9f8cb7ddc312e0e324a4f449b6ebc9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718622049373025796,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-zv99k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2453fd4-894d-4212-bc48-1803e28ddba8,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9e113a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8852bc2fd7b618e61e270006b27e8557aaf8230a9278a60245e25a23732a83eb,PodSandboxId:78661140f722ccccbbef01859ed0a403a118690cd55dd92f4d2cf08d1c03af3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718622045688275770,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dd48x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1bd1d47-a8a5-47a5-820c-dd86f7ea7765,},Annotations:map[string]string{io.kubernetes.container.hash: 8b6be506,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecbb08a618aa76655e33c89e573535ed17f386cc522fcc35722eeb4ad859a1ad,PodSandboxId:7293d250b3e0dd840434d7afd153d17ac7842ec4f356edd9bac3f40f6603de1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1718622025701449939,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ca5c8841cd25b2122df7e1cad8d883e,},Annotations:map[string]string{io.kubernetes.container.hash: a022c9c1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60cc5a9cf66217b34591b28809211824808cb7da50dd0c7971be5bd514e3b328,PodSandboxId:cb4974ce47c357bdbcfd6dd322289bd64cf2cbb3c4a7ad3e2ee523444ebfc04e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1718622025592715823,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-064080,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99603afdeee0e2b8645e4cb7c5a1ed41,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e3b411b1-8052-4ff2-bbac-fb4b20160b27 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7e968a7b99037       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      3 minutes ago       Running             kindnet-cni               3                   c45cf10a39aca       kindnet-48mb7
	38f0bf1df40b9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   b48fa28479a6b       storage-provisioner
	ea168e43c8f58       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      4 minutes ago       Running             kube-apiserver            3                   50c5f620e07d9       kube-apiserver-ha-064080
	9cca27b47119e       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      4 minutes ago       Running             kube-controller-manager   2                   882a4867b4a9f       kube-controller-manager-ha-064080
	1de1cbf3c4abe       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   052b729b7698a       busybox-fc5497c4f-89r9v
	5831ea6ee0c39       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   b48fa28479a6b       storage-provisioner
	8a7f747581937       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   ff17fb7580a67       kube-vip-ha-064080
	acee1942b213b       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      4 minutes ago       Running             kube-proxy                1                   e2f59a6f0a7e9       kube-proxy-dd48x
	d14481314a935       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   653510d585e1e       coredns-7db6d8ff4d-zv99k
	35caf65c401c6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   cdaaced76b567       coredns-7db6d8ff4d-xbhnm
	4af9cf344f6b5       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      4 minutes ago       Exited              kindnet-cni               2                   c45cf10a39aca       kindnet-48mb7
	4c9f362fab2de       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   ed543d06a893e       etcd-ha-064080
	88dbcac1da731       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      4 minutes ago       Running             kube-scheduler            1                   c26f503c4b22b       kube-scheduler-ha-064080
	7e9062f80f59b       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      4 minutes ago       Exited              kube-apiserver            2                   50c5f620e07d9       kube-apiserver-ha-064080
	9a32b0b77a472       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      4 minutes ago       Exited              kube-controller-manager   1                   882a4867b4a9f       kube-controller-manager-ha-064080
	1a562b9195d78       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   32924073f320b       busybox-fc5497c4f-89r9v
	c3628888540ea       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   20be829b9ffef       coredns-7db6d8ff4d-xbhnm
	10061c1b3dd4f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   54a9c95a1ef70       coredns-7db6d8ff4d-zv99k
	8852bc2fd7b61       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      15 minutes ago      Exited              kube-proxy                0                   78661140f722c       kube-proxy-dd48x
	ecbb08a618aa7       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      15 minutes ago      Exited              etcd                      0                   7293d250b3e0d       etcd-ha-064080
	60cc5a9cf6621       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      16 minutes ago      Exited              kube-scheduler            0                   cb4974ce47c35       kube-scheduler-ha-064080
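A container listing like the one above can usually be reproduced straight from the node; assuming the standard minikube guest image (which ships crictl) and the ha-064080 profile, a command along these lines should give equivalent output:

	out/minikube-linux-amd64 -p ha-064080 ssh "sudo crictl ps -a"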
	
	
	==> coredns [10061c1b3dd4f2865f83bf729b221fef3435324d6cef9ceb1a6631e0ccefa31c] <==
	[INFO] 10.244.1.2:47475 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117378s
	[INFO] 10.244.2.2:50417 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002227444s
	[INFO] 10.244.2.2:60625 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001284466s
	[INFO] 10.244.2.2:49631 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000063512s
	[INFO] 10.244.2.2:60462 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075059s
	[INFO] 10.244.2.2:55188 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061001s
	[INFO] 10.244.0.4:44285 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114934s
	[INFO] 10.244.0.4:41654 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082437s
	[INFO] 10.244.1.2:41564 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167707s
	[INFO] 10.244.1.2:48527 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000199996s
	[INFO] 10.244.1.2:54645 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000101253s
	[INFO] 10.244.1.2:46137 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000161774s
	[INFO] 10.244.2.2:47749 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123256s
	[INFO] 10.244.2.2:44797 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000155611s
	[INFO] 10.244.0.4:57514 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00013406s
	[INFO] 10.244.1.2:57226 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001349s
	[INFO] 10.244.1.2:38456 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000150623s
	[INFO] 10.244.1.2:34565 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000206574s
	[INFO] 10.244.2.2:55350 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000181312s
	[INFO] 10.244.2.2:54665 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000284418s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [35caf65c401c62e881ba25e31f1b5557a5c63db1d4d4b79fb6d39ac686f2f793] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:46720->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:46720->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:46742->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:46742->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [c3628888540ea5d9ce507b92a3b2e929cf72c29f17271ad882b6d18ce4cf6328] <==
	[INFO] 10.244.1.2:59121 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005103552s
	[INFO] 10.244.2.2:33690 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000260726s
	[INFO] 10.244.2.2:40819 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103621s
	[INFO] 10.244.2.2:47624 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000173244s
	[INFO] 10.244.0.4:45570 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101008s
	[INFO] 10.244.0.4:38238 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096216s
	[INFO] 10.244.2.2:47491 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144426s
	[INFO] 10.244.2.2:57595 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010924s
	[INFO] 10.244.0.4:37645 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011472s
	[INFO] 10.244.0.4:40937 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000173334s
	[INFO] 10.244.0.4:38240 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00010406s
	[INFO] 10.244.1.2:51662 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000104731s
	[INFO] 10.244.2.2:33365 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000139748s
	[INFO] 10.244.2.2:44022 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000178435s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1822&timeout=5m20s&timeoutSeconds=320&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1825&timeout=7m54s&timeoutSeconds=474&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1826&timeout=8m38s&timeoutSeconds=518&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1825": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1825": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1822": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1822": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1826": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1826": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d14481314a9356f5bb099d6096ca03ef8ec9cb15637652261b4359c32f1cbceb] <==
	Trace[1154199320]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (11:11:49.136)
	Trace[1154199320]: [10.001056316s] [10.001056316s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1889676882]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jun-2024 11:11:39.574) (total time: 10001ms):
	Trace[1889676882]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:11:49.575)
	Trace[1889676882]: [10.001884902s] [10.001884902s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:54900->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:54900->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
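The coredns sections above are the pods' own log streams; assuming kubectl is pointed at this cluster, the same output (including the log of a restarted instance) can be pulled with a command like:

	kubectl -n kube-system logs coredns-7db6d8ff4d-xbhnm --previous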
	
	
	==> describe nodes <==
	Name:               ha-064080
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-064080
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6
	                    minikube.k8s.io/name=ha-064080
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_17T11_00_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jun 2024 11:00:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-064080
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jun 2024 11:16:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jun 2024 11:12:16 +0000   Mon, 17 Jun 2024 11:00:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jun 2024 11:12:16 +0000   Mon, 17 Jun 2024 11:00:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jun 2024 11:12:16 +0000   Mon, 17 Jun 2024 11:00:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jun 2024 11:12:16 +0000   Mon, 17 Jun 2024 11:00:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.134
	  Hostname:    ha-064080
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f526834e1094a1798c2f7e5de014d6a
	  System UUID:                6f526834-e109-4a17-98c2-f7e5de014d6a
	  Boot ID:                    7c18f343-1055-464d-948c-cec47020ebb1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-89r9v              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-xbhnm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-zv99k             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-064080                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-48mb7                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-064080             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-064080    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-dd48x                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-064080             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-064080                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 4m12s  kube-proxy       
	  Normal   Starting                 15m    kube-proxy       
	  Normal   NodeHasSufficientMemory  16m    kubelet          Node ha-064080 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  15m    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 15m    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  15m    kubelet          Node ha-064080 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m    kubelet          Node ha-064080 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m    kubelet          Node ha-064080 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15m    node-controller  Node ha-064080 event: Registered Node ha-064080 in Controller
	  Normal   NodeReady                15m    kubelet          Node ha-064080 status is now: NodeReady
	  Normal   RegisteredNode           14m    node-controller  Node ha-064080 event: Registered Node ha-064080 in Controller
	  Normal   RegisteredNode           13m    node-controller  Node ha-064080 event: Registered Node ha-064080 in Controller
	  Warning  ContainerGCFailed        5m53s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m5s   node-controller  Node ha-064080 event: Registered Node ha-064080 in Controller
	  Normal   RegisteredNode           3m59s  node-controller  Node ha-064080 event: Registered Node ha-064080 in Controller
	  Normal   RegisteredNode           3m7s   node-controller  Node ha-064080 event: Registered Node ha-064080 in Controller
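The ContainerGCFailed warning above corresponds to a window in which /var/run/crio/crio.sock was unavailable; assuming the crio systemd unit name used by the minikube guest OS, the runtime's state and recent history could be inspected with something like:

	out/minikube-linux-amd64 -p ha-064080 ssh "sudo systemctl status crio && sudo journalctl -u crio --no-pager | tail -n 50"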
	
	
	Name:               ha-064080-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-064080-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6
	                    minikube.k8s.io/name=ha-064080
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_17T11_01_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jun 2024 11:01:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-064080-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jun 2024 11:16:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jun 2024 11:15:07 +0000   Mon, 17 Jun 2024 11:15:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jun 2024 11:15:07 +0000   Mon, 17 Jun 2024 11:15:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jun 2024 11:15:07 +0000   Mon, 17 Jun 2024 11:15:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jun 2024 11:15:07 +0000   Mon, 17 Jun 2024 11:15:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.104
	  Hostname:    ha-064080-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d22246006bf04dab820bccd210120c30
	  System UUID:                d2224600-6bf0-4dab-820b-ccd210120c30
	  Boot ID:                    aa828dd5-a30a-4797-914e-527263ce7397
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-gf9j7                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-064080-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-7cqp4                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-064080-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-064080-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-l55dg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-064080-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-064080-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  Starting                 3m56s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-064080-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-064080-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-064080-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           14m                    node-controller  Node ha-064080-m02 event: Registered Node ha-064080-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-064080-m02 event: Registered Node ha-064080-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-064080-m02 event: Registered Node ha-064080-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-064080-m02 status is now: NodeNotReady
	  Normal  NodeHasNoDiskPressure    4m38s (x8 over 4m38s)  kubelet          Node ha-064080-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 4m38s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m38s (x8 over 4m38s)  kubelet          Node ha-064080-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     4m38s (x7 over 4m38s)  kubelet          Node ha-064080-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m5s                   node-controller  Node ha-064080-m02 event: Registered Node ha-064080-m02 in Controller
	  Normal  RegisteredNode           3m59s                  node-controller  Node ha-064080-m02 event: Registered Node ha-064080-m02 in Controller
	  Normal  RegisteredNode           3m7s                   node-controller  Node ha-064080-m02 event: Registered Node ha-064080-m02 in Controller
	  Normal  NodeNotReady             105s                   node-controller  Node ha-064080-m02 status is now: NodeNotReady
	
	
	Name:               ha-064080-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-064080-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6
	                    minikube.k8s.io/name=ha-064080
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_17T11_03_52_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jun 2024 11:03:51 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-064080-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jun 2024 11:13:57 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 17 Jun 2024 11:13:37 +0000   Mon, 17 Jun 2024 11:14:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 17 Jun 2024 11:13:37 +0000   Mon, 17 Jun 2024 11:14:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 17 Jun 2024 11:13:37 +0000   Mon, 17 Jun 2024 11:14:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 17 Jun 2024 11:13:37 +0000   Mon, 17 Jun 2024 11:14:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.167
	  Hostname:    ha-064080-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 33fd5c3b11ee44e78fa203be011bc171
	  System UUID:                33fd5c3b-11ee-44e7-8fa2-03be011bc171
	  Boot ID:                    925277b4-c368-4ccd-aabf-2f2c6c24a726
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tgchz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-pn664              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-7t8b9           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-064080-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-064080-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-064080-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-064080-m04 event: Registered Node ha-064080-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-064080-m04 event: Registered Node ha-064080-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-064080-m04 event: Registered Node ha-064080-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-064080-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m6s                   node-controller  Node ha-064080-m04 event: Registered Node ha-064080-m04 in Controller
	  Normal   RegisteredNode           4m                     node-controller  Node ha-064080-m04 event: Registered Node ha-064080-m04 in Controller
	  Normal   RegisteredNode           3m8s                   node-controller  Node ha-064080-m04 event: Registered Node ha-064080-m04 in Controller
	  Warning  Rebooted                 2m49s                  kubelet          Node ha-064080-m04 has been rebooted, boot id: 925277b4-c368-4ccd-aabf-2f2c6c24a726
	  Normal   NodeAllocatableEnforced  2m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m49s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m49s (x2 over 2m49s)  kubelet          Node ha-064080-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m49s (x2 over 2m49s)  kubelet          Node ha-064080-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m49s (x2 over 2m49s)  kubelet          Node ha-064080-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                2m49s                  kubelet          Node ha-064080-m04 status is now: NodeReady
	  Normal   NodeNotReady             106s (x2 over 3m26s)   node-controller  Node ha-064080-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.883696] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.059639] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052376] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.200032] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.124990] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.278932] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.114561] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +4.787636] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.060887] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.333258] systemd-fstab-generator[1364]: Ignoring "noauto" option for root device
	[  +0.080001] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.043226] kauditd_printk_skb: 18 callbacks suppressed
	[ +14.410422] kauditd_printk_skb: 72 callbacks suppressed
	[Jun17 11:11] systemd-fstab-generator[3703]: Ignoring "noauto" option for root device
	[  +0.153522] systemd-fstab-generator[3715]: Ignoring "noauto" option for root device
	[  +0.191265] systemd-fstab-generator[3729]: Ignoring "noauto" option for root device
	[  +0.173329] systemd-fstab-generator[3741]: Ignoring "noauto" option for root device
	[  +0.291105] systemd-fstab-generator[3769]: Ignoring "noauto" option for root device
	[  +2.176722] systemd-fstab-generator[3870]: Ignoring "noauto" option for root device
	[  +5.948906] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.359694] kauditd_printk_skb: 85 callbacks suppressed
	[Jun17 11:12] kauditd_printk_skb: 6 callbacks suppressed
	[  +9.069097] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [4c9f362fab2deb3901ab9bb43f8da39d89a6b6ff1f7413040ba94079dba2f359] <==
	{"level":"info","ts":"2024-06-17T11:13:02.676494Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"52887eb9b9b3603c","remote-peer-id":"426ee77ba77ea10d"}
	{"level":"info","ts":"2024-06-17T11:13:02.677341Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"52887eb9b9b3603c","to":"426ee77ba77ea10d","stream-type":"stream Message"}
	{"level":"info","ts":"2024-06-17T11:13:02.677459Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"52887eb9b9b3603c","remote-peer-id":"426ee77ba77ea10d"}
	{"level":"info","ts":"2024-06-17T11:13:02.678649Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"52887eb9b9b3603c","to":"426ee77ba77ea10d","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-06-17T11:13:02.678693Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"52887eb9b9b3603c","remote-peer-id":"426ee77ba77ea10d"}
	{"level":"info","ts":"2024-06-17T11:13:02.681653Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"52887eb9b9b3603c","remote-peer-id":"426ee77ba77ea10d"}
	{"level":"warn","ts":"2024-06-17T11:13:02.688773Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.168:33224","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-06-17T11:13:05.610811Z","caller":"traceutil/trace.go:171","msg":"trace[1782809441] transaction","detail":"{read_only:false; response_revision:2289; number_of_response:1; }","duration":"167.23287ms","start":"2024-06-17T11:13:05.443565Z","end":"2024-06-17T11:13:05.610798Z","steps":["trace[1782809441] 'process raft request'  (duration: 165.20968ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T11:13:07.842053Z","caller":"traceutil/trace.go:171","msg":"trace[395437298] transaction","detail":"{read_only:false; response_revision:2308; number_of_response:1; }","duration":"118.759887ms","start":"2024-06-17T11:13:07.723274Z","end":"2024-06-17T11:13:07.842034Z","steps":["trace[395437298] 'process raft request'  (duration: 115.11517ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T11:13:51.754824Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"52887eb9b9b3603c switched to configuration voters=(5947142644092330044 10078358600003402433)"}
	{"level":"info","ts":"2024-06-17T11:13:51.756986Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"d3dad3a9a0ef02b3","local-member-id":"52887eb9b9b3603c","removed-remote-peer-id":"426ee77ba77ea10d","removed-remote-peer-urls":["https://192.168.39.168:2380"]}
	{"level":"info","ts":"2024-06-17T11:13:51.757076Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"426ee77ba77ea10d"}
	{"level":"warn","ts":"2024-06-17T11:13:51.75826Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"426ee77ba77ea10d"}
	{"level":"info","ts":"2024-06-17T11:13:51.758318Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"426ee77ba77ea10d"}
	{"level":"warn","ts":"2024-06-17T11:13:51.758757Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"426ee77ba77ea10d"}
	{"level":"info","ts":"2024-06-17T11:13:51.758807Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"426ee77ba77ea10d"}
	{"level":"info","ts":"2024-06-17T11:13:51.758946Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"52887eb9b9b3603c","remote-peer-id":"426ee77ba77ea10d"}
	{"level":"warn","ts":"2024-06-17T11:13:51.759195Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"52887eb9b9b3603c","remote-peer-id":"426ee77ba77ea10d","error":"context canceled"}
	{"level":"warn","ts":"2024-06-17T11:13:51.75925Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"426ee77ba77ea10d","error":"failed to read 426ee77ba77ea10d on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-06-17T11:13:51.759281Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"52887eb9b9b3603c","remote-peer-id":"426ee77ba77ea10d"}
	{"level":"warn","ts":"2024-06-17T11:13:51.759399Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"52887eb9b9b3603c","remote-peer-id":"426ee77ba77ea10d","error":"context canceled"}
	{"level":"info","ts":"2024-06-17T11:13:51.759713Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"52887eb9b9b3603c","remote-peer-id":"426ee77ba77ea10d"}
	{"level":"info","ts":"2024-06-17T11:13:51.759998Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"426ee77ba77ea10d"}
	{"level":"info","ts":"2024-06-17T11:13:51.760013Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"52887eb9b9b3603c","removed-remote-peer-id":"426ee77ba77ea10d"}
	{"level":"info","ts":"2024-06-17T11:13:51.760132Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"52887eb9b9b3603c","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"426ee77ba77ea10d"}
	
	
	==> etcd [ecbb08a618aa76655e33c89e573535ed17f386cc522fcc35722eeb4ad859a1ad] <==
	{"level":"warn","ts":"2024-06-17T11:09:49.251717Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-17T11:09:48.548483Z","time spent":"703.231531ms","remote":"127.0.0.1:45142","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":0,"response size":0,"request content":"key:\"/registry/pods/\" range_end:\"/registry/pods0\" limit:10000 "}
	2024/06/17 11:09:49 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-06-17T11:09:49.251732Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-17T11:09:48.557516Z","time spent":"694.208167ms","remote":"127.0.0.1:52168","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":0,"response size":0,"request content":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" limit:10000 "}
	2024/06/17 11:09:49 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-06-17T11:09:49.251902Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-17T11:09:48.375101Z","time spent":"876.279984ms","remote":"127.0.0.1:52074","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":0,"response size":0,"request content":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" limit:500 "}
	2024/06/17 11:09:49 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-06-17T11:09:49.311373Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"52887eb9b9b3603c","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-06-17T11:09:49.311593Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"8bdd85bfd034bac1"}
	{"level":"info","ts":"2024-06-17T11:09:49.31163Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8bdd85bfd034bac1"}
	{"level":"info","ts":"2024-06-17T11:09:49.311652Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8bdd85bfd034bac1"}
	{"level":"info","ts":"2024-06-17T11:09:49.311755Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1"}
	{"level":"info","ts":"2024-06-17T11:09:49.311802Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1"}
	{"level":"info","ts":"2024-06-17T11:09:49.311832Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"52887eb9b9b3603c","remote-peer-id":"8bdd85bfd034bac1"}
	{"level":"info","ts":"2024-06-17T11:09:49.311917Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"8bdd85bfd034bac1"}
	{"level":"info","ts":"2024-06-17T11:09:49.311924Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"426ee77ba77ea10d"}
	{"level":"info","ts":"2024-06-17T11:09:49.311938Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"426ee77ba77ea10d"}
	{"level":"info","ts":"2024-06-17T11:09:49.311972Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"426ee77ba77ea10d"}
	{"level":"info","ts":"2024-06-17T11:09:49.31204Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"52887eb9b9b3603c","remote-peer-id":"426ee77ba77ea10d"}
	{"level":"info","ts":"2024-06-17T11:09:49.312064Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"52887eb9b9b3603c","remote-peer-id":"426ee77ba77ea10d"}
	{"level":"info","ts":"2024-06-17T11:09:49.312117Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"52887eb9b9b3603c","remote-peer-id":"426ee77ba77ea10d"}
	{"level":"info","ts":"2024-06-17T11:09:49.312144Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"426ee77ba77ea10d"}
	{"level":"info","ts":"2024-06-17T11:09:49.314751Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.134:2380"}
	{"level":"info","ts":"2024-06-17T11:09:49.314914Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.134:2380"}
	{"level":"info","ts":"2024-06-17T11:09:49.314964Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-064080","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.134:2380"],"advertise-client-urls":["https://192.168.39.134:2379"]}
	
	
	==> kernel <==
	 11:16:26 up 16 min,  0 users,  load average: 0.13, 0.29, 0.25
	Linux ha-064080 5.10.207 #1 SMP Tue Jun 11 00:16:05 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4af9cf344f6b524475b47fa29673012301a355ef88398883d01606aee8cc859c] <==
	I0617 11:11:30.894804       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0617 11:11:30.903031       1 main.go:107] hostIP = 192.168.39.134
	podIP = 192.168.39.134
	I0617 11:11:30.903217       1 main.go:116] setting mtu 1500 for CNI 
	I0617 11:11:30.903261       1 main.go:146] kindnetd IP family: "ipv4"
	I0617 11:11:30.903423       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0617 11:11:34.095493       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0617 11:11:37.167418       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0617 11:11:48.169199       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0617 11:11:52.527327       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 192.168.122.251:46758->10.96.0.1:443: read: connection reset by peer
	I0617 11:11:55.600628       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xe3b
	
	
	==> kindnet [7e968a7b99037fcd74cf96493f10b9e4b77571018045daa12bfa9faff24036da] <==
	I0617 11:15:42.188600       1 main.go:250] Node ha-064080-m04 has CIDR [10.244.3.0/24] 
	I0617 11:15:52.261120       1 main.go:223] Handling node with IPs: map[192.168.39.134:{}]
	I0617 11:15:52.261166       1 main.go:227] handling current node
	I0617 11:15:52.261177       1 main.go:223] Handling node with IPs: map[192.168.39.104:{}]
	I0617 11:15:52.261182       1 main.go:250] Node ha-064080-m02 has CIDR [10.244.1.0/24] 
	I0617 11:15:52.261280       1 main.go:223] Handling node with IPs: map[192.168.39.167:{}]
	I0617 11:15:52.261304       1 main.go:250] Node ha-064080-m04 has CIDR [10.244.3.0/24] 
	I0617 11:16:02.276054       1 main.go:223] Handling node with IPs: map[192.168.39.134:{}]
	I0617 11:16:02.276104       1 main.go:227] handling current node
	I0617 11:16:02.276114       1 main.go:223] Handling node with IPs: map[192.168.39.104:{}]
	I0617 11:16:02.276119       1 main.go:250] Node ha-064080-m02 has CIDR [10.244.1.0/24] 
	I0617 11:16:02.276252       1 main.go:223] Handling node with IPs: map[192.168.39.167:{}]
	I0617 11:16:02.276273       1 main.go:250] Node ha-064080-m04 has CIDR [10.244.3.0/24] 
	I0617 11:16:12.283446       1 main.go:223] Handling node with IPs: map[192.168.39.134:{}]
	I0617 11:16:12.283498       1 main.go:227] handling current node
	I0617 11:16:12.283513       1 main.go:223] Handling node with IPs: map[192.168.39.104:{}]
	I0617 11:16:12.283520       1 main.go:250] Node ha-064080-m02 has CIDR [10.244.1.0/24] 
	I0617 11:16:12.283688       1 main.go:223] Handling node with IPs: map[192.168.39.167:{}]
	I0617 11:16:12.283722       1 main.go:250] Node ha-064080-m04 has CIDR [10.244.3.0/24] 
	I0617 11:16:22.301796       1 main.go:223] Handling node with IPs: map[192.168.39.134:{}]
	I0617 11:16:22.301883       1 main.go:227] handling current node
	I0617 11:16:22.301895       1 main.go:223] Handling node with IPs: map[192.168.39.104:{}]
	I0617 11:16:22.301900       1 main.go:250] Node ha-064080-m02 has CIDR [10.244.1.0/24] 
	I0617 11:16:22.302043       1 main.go:223] Handling node with IPs: map[192.168.39.167:{}]
	I0617 11:16:22.302067       1 main.go:250] Node ha-064080-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [7e9062f80f59bb01cd3d133ee66a6cf66b83b310d47589d9e9eeb07982548f74] <==
	I0617 11:11:30.407093       1 options.go:221] external host was not specified, using 192.168.39.134
	I0617 11:11:30.409344       1 server.go:148] Version: v1.30.1
	I0617 11:11:30.409407       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0617 11:11:30.921365       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0617 11:11:30.921600       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0617 11:11:30.921979       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0617 11:11:30.922029       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0617 11:11:30.922252       1 instance.go:299] Using reconciler: lease
	W0617 11:11:50.916587       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0617 11:11:50.916662       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0617 11:11:50.923581       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [ea168e43c8f58b627333f8db1fcab727151d0170538dd365a0ff2c14a670bc63] <==
	I0617 11:12:14.009221       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0617 11:12:14.087260       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0617 11:12:14.087731       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0617 11:12:14.088176       1 shared_informer.go:320] Caches are synced for configmaps
	I0617 11:12:14.090302       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0617 11:12:14.090335       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0617 11:12:14.090514       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0617 11:12:14.097289       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0617 11:12:14.099219       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.104 192.168.39.168]
	I0617 11:12:14.105703       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0617 11:12:14.105823       1 aggregator.go:165] initial CRD sync complete...
	I0617 11:12:14.105945       1 autoregister_controller.go:141] Starting autoregister controller
	I0617 11:12:14.105978       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0617 11:12:14.106008       1 cache.go:39] Caches are synced for autoregister controller
	I0617 11:12:14.126955       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0617 11:12:14.133582       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0617 11:12:14.133636       1 policy_source.go:224] refreshing policies
	I0617 11:12:14.180377       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0617 11:12:14.202563       1 controller.go:615] quota admission added evaluator for: endpoints
	I0617 11:12:14.222752       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0617 11:12:14.228680       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0617 11:12:14.993917       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0617 11:12:15.447994       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.104 192.168.39.134 192.168.39.168]
	W0617 11:12:25.447760       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.104 192.168.39.134]
	W0617 11:14:05.452728       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.104 192.168.39.134]
	
	
	==> kube-controller-manager [9a32b0b77a472f149702c6af5025c8bce824feadd95de75493b9a7c7da94010a] <==
	I0617 11:11:31.314387       1 serving.go:380] Generated self-signed cert in-memory
	I0617 11:11:31.942563       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0617 11:11:31.942650       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0617 11:11:31.944246       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0617 11:11:31.944977       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0617 11:11:31.945068       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0617 11:11:31.945192       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0617 11:11:51.947407       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.134:8443/healthz\": dial tcp 192.168.39.134:8443: connect: connection refused"
	
	
	==> kube-controller-manager [9cca27b47119ee9b81f6755dc162135ff2de0238a503b8d7d8cd565cc8ddcaa9] <==
	I0617 11:14:40.938068       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.999313ms"
	I0617 11:14:40.938513       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="100.868µs"
	I0617 11:14:40.986975       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.666871ms"
	I0617 11:14:40.987081       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.821µs"
	E0617 11:14:46.926934       1 gc_controller.go:153] "Failed to get node" err="node \"ha-064080-m03\" not found" logger="pod-garbage-collector-controller" node="ha-064080-m03"
	E0617 11:14:46.926981       1 gc_controller.go:153] "Failed to get node" err="node \"ha-064080-m03\" not found" logger="pod-garbage-collector-controller" node="ha-064080-m03"
	E0617 11:14:46.926991       1 gc_controller.go:153] "Failed to get node" err="node \"ha-064080-m03\" not found" logger="pod-garbage-collector-controller" node="ha-064080-m03"
	E0617 11:14:46.926999       1 gc_controller.go:153] "Failed to get node" err="node \"ha-064080-m03\" not found" logger="pod-garbage-collector-controller" node="ha-064080-m03"
	E0617 11:14:46.927005       1 gc_controller.go:153] "Failed to get node" err="node \"ha-064080-m03\" not found" logger="pod-garbage-collector-controller" node="ha-064080-m03"
	I0617 11:14:46.941087       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-064080-m03"
	I0617 11:14:46.972568       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-064080-m03"
	I0617 11:14:46.972621       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-064080-m03"
	I0617 11:14:47.001020       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-064080-m03"
	I0617 11:14:47.001038       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-064080-m03"
	I0617 11:14:47.032171       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-064080-m03"
	I0617 11:14:47.032208       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-064080-m03"
	I0617 11:14:47.057725       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-064080-m03"
	I0617 11:14:47.057777       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-5mg7w"
	I0617 11:14:47.085651       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-5mg7w"
	I0617 11:14:47.085696       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-gsph4"
	I0617 11:14:47.118821       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-gsph4"
	I0617 11:14:47.118951       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-064080-m03"
	I0617 11:14:47.142536       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-064080-m03"
	I0617 11:15:07.576323       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.818913ms"
	I0617 11:15:07.577226       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.299µs"
	
	
	==> kube-proxy [8852bc2fd7b618e61e270006b27e8557aaf8230a9278a60245e25a23732a83eb] <==
	E0617 11:08:44.369915       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1708": dial tcp 192.168.39.254:8443: connect: no route to host
	W0617 11:08:47.441142       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1826": dial tcp 192.168.39.254:8443: connect: no route to host
	E0617 11:08:47.441253       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1826": dial tcp 192.168.39.254:8443: connect: no route to host
	W0617 11:08:47.441335       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-064080&resourceVersion=1804": dial tcp 192.168.39.254:8443: connect: no route to host
	E0617 11:08:47.441356       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-064080&resourceVersion=1804": dial tcp 192.168.39.254:8443: connect: no route to host
	W0617 11:08:47.441775       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1708": dial tcp 192.168.39.254:8443: connect: no route to host
	E0617 11:08:47.441900       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1708": dial tcp 192.168.39.254:8443: connect: no route to host
	W0617 11:08:53.585024       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1708": dial tcp 192.168.39.254:8443: connect: no route to host
	E0617 11:08:53.585220       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1708": dial tcp 192.168.39.254:8443: connect: no route to host
	W0617 11:08:53.585205       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-064080&resourceVersion=1804": dial tcp 192.168.39.254:8443: connect: no route to host
	E0617 11:08:53.585277       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-064080&resourceVersion=1804": dial tcp 192.168.39.254:8443: connect: no route to host
	W0617 11:08:53.585400       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1826": dial tcp 192.168.39.254:8443: connect: no route to host
	E0617 11:08:53.585515       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1826": dial tcp 192.168.39.254:8443: connect: no route to host
	W0617 11:09:05.872591       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-064080&resourceVersion=1804": dial tcp 192.168.39.254:8443: connect: no route to host
	E0617 11:09:05.872705       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-064080&resourceVersion=1804": dial tcp 192.168.39.254:8443: connect: no route to host
	W0617 11:09:05.872804       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1708": dial tcp 192.168.39.254:8443: connect: no route to host
	E0617 11:09:05.872889       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1708": dial tcp 192.168.39.254:8443: connect: no route to host
	W0617 11:09:05.872999       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1826": dial tcp 192.168.39.254:8443: connect: no route to host
	E0617 11:09:05.873040       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1826": dial tcp 192.168.39.254:8443: connect: no route to host
	W0617 11:09:27.376721       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1826": dial tcp 192.168.39.254:8443: connect: no route to host
	E0617 11:09:27.377048       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1826": dial tcp 192.168.39.254:8443: connect: no route to host
	W0617 11:09:30.448549       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-064080&resourceVersion=1804": dial tcp 192.168.39.254:8443: connect: no route to host
	E0617 11:09:30.448641       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-064080&resourceVersion=1804": dial tcp 192.168.39.254:8443: connect: no route to host
	W0617 11:09:30.448736       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1708": dial tcp 192.168.39.254:8443: connect: no route to host
	E0617 11:09:30.448779       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1708": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [acee1942b213b3a339a1c92af2a70571f5c7f4b96158320c3bb8f3f74d86a0b2] <==
	I0617 11:12:13.343459       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0617 11:12:13.343563       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0617 11:12:13.343581       1 server_linux.go:165] "Using iptables Proxier"
	I0617 11:12:13.352597       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0617 11:12:13.352903       1 server.go:872] "Version info" version="v1.30.1"
	I0617 11:12:13.352950       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0617 11:12:13.354659       1 config.go:192] "Starting service config controller"
	I0617 11:12:13.356632       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0617 11:12:13.355998       1 config.go:319] "Starting node config controller"
	I0617 11:12:13.358024       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0617 11:12:13.355436       1 config.go:101] "Starting endpoint slice config controller"
	I0617 11:12:13.358053       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	W0617 11:12:16.338056       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0617 11:12:16.340474       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0617 11:12:16.340797       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-064080&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0617 11:12:16.341042       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-064080&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0617 11:12:16.341274       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0617 11:12:16.341470       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0617 11:12:16.341786       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0617 11:12:17.258296       1 shared_informer.go:320] Caches are synced for node config
	I0617 11:12:17.458404       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0617 11:12:17.858661       1 shared_informer.go:320] Caches are synced for service config
	W0617 11:14:50.475469       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0617 11:14:50.475945       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0617 11:14:50.476008       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-scheduler [60cc5a9cf66217b34591b28809211824808cb7da50dd0c7971be5bd514e3b328] <==
	W0617 11:09:44.146274       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0617 11:09:44.146367       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0617 11:09:44.222545       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0617 11:09:44.222678       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0617 11:09:45.975271       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0617 11:09:45.975375       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0617 11:09:46.352357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0617 11:09:46.352453       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0617 11:09:46.505159       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0617 11:09:46.505361       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0617 11:09:46.813113       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0617 11:09:46.813204       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0617 11:09:47.525074       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0617 11:09:47.525162       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0617 11:09:47.582519       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0617 11:09:47.582614       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0617 11:09:47.634102       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0617 11:09:47.634242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0617 11:09:48.231623       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0617 11:09:48.231716       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0617 11:09:48.705640       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0617 11:09:48.705673       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0617 11:09:49.026601       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0617 11:09:49.026689       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0617 11:09:49.208441       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [88dbcac1da73105615cd555b19ec3b51e43dc6fd5ee233f83d19dcaa41a1b5ee] <==
	W0617 11:12:09.863627       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.134:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	E0617 11:12:09.863671       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.134:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	W0617 11:12:10.317381       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.134:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	E0617 11:12:10.317483       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.134:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	W0617 11:12:10.514624       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.134:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	E0617 11:12:10.514718       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.134:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	W0617 11:12:10.567621       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.134:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	E0617 11:12:10.567707       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.134:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	W0617 11:12:10.918375       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.134:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	E0617 11:12:10.918516       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.134:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	W0617 11:12:11.501263       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.134:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	E0617 11:12:11.501355       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.134:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	W0617 11:12:11.793682       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.134:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	E0617 11:12:11.793784       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.134:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	W0617 11:12:11.894618       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.134:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	E0617 11:12:11.894718       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.134:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	W0617 11:12:14.010753       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0617 11:12:14.010803       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0617 11:12:14.010935       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0617 11:12:14.010975       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0617 11:12:26.235916       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0617 11:13:48.445681       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-tgchz\": pod busybox-fc5497c4f-tgchz is already assigned to node \"ha-064080-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-tgchz" node="ha-064080-m04"
	E0617 11:13:48.445894       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 6082bebc-530f-4022-8b3d-47251af7a193(default/busybox-fc5497c4f-tgchz) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-tgchz"
	E0617 11:13:48.445939       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-tgchz\": pod busybox-fc5497c4f-tgchz is already assigned to node \"ha-064080-m04\"" pod="default/busybox-fc5497c4f-tgchz"
	I0617 11:13:48.445966       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-tgchz" node="ha-064080-m04"
	
	
	==> kubelet <==
	Jun 17 11:12:32 ha-064080 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 17 11:12:36 ha-064080 kubelet[1371]: I0617 11:12:36.098480    1371 scope.go:117] "RemoveContainer" containerID="5831ea6ee0c390e7ce915655860ab50d35ab3dd5fecf6fafbe17b03a4020ba0a"
	Jun 17 11:12:36 ha-064080 kubelet[1371]: E0617 11:12:36.099188    1371 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5646fca8-9ebc-47c1-b5ff-c87b0ed800d8)\"" pod="kube-system/storage-provisioner" podUID="5646fca8-9ebc-47c1-b5ff-c87b0ed800d8"
	Jun 17 11:12:39 ha-064080 kubelet[1371]: I0617 11:12:39.098298    1371 scope.go:117] "RemoveContainer" containerID="4af9cf344f6b524475b47fa29673012301a355ef88398883d01606aee8cc859c"
	Jun 17 11:12:39 ha-064080 kubelet[1371]: E0617 11:12:39.098619    1371 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-48mb7_kube-system(67422049-6637-4ca3-8bd1-2b47a265829d)\"" pod="kube-system/kindnet-48mb7" podUID="67422049-6637-4ca3-8bd1-2b47a265829d"
	Jun 17 11:12:48 ha-064080 kubelet[1371]: I0617 11:12:48.099136    1371 scope.go:117] "RemoveContainer" containerID="5831ea6ee0c390e7ce915655860ab50d35ab3dd5fecf6fafbe17b03a4020ba0a"
	Jun 17 11:12:51 ha-064080 kubelet[1371]: I0617 11:12:51.098806    1371 scope.go:117] "RemoveContainer" containerID="4af9cf344f6b524475b47fa29673012301a355ef88398883d01606aee8cc859c"
	Jun 17 11:12:52 ha-064080 kubelet[1371]: I0617 11:12:52.099313    1371 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-064080" podUID="6b9259b1-ee46-4493-ba10-dcb32da03f57"
	Jun 17 11:12:52 ha-064080 kubelet[1371]: I0617 11:12:52.139831    1371 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-064080"
	Jun 17 11:13:02 ha-064080 kubelet[1371]: I0617 11:13:02.126031    1371 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-064080" podStartSLOduration=10.12600127 podStartE2EDuration="10.12600127s" podCreationTimestamp="2024-06-17 11:12:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-17 11:13:02.125557036 +0000 UTC m=+750.189617414" watchObservedRunningTime="2024-06-17 11:13:02.12600127 +0000 UTC m=+750.190061644"
	Jun 17 11:13:32 ha-064080 kubelet[1371]: E0617 11:13:32.161528    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 17 11:13:32 ha-064080 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 17 11:13:32 ha-064080 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 17 11:13:32 ha-064080 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 11:13:32 ha-064080 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 17 11:14:32 ha-064080 kubelet[1371]: E0617 11:14:32.160602    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 17 11:14:32 ha-064080 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 17 11:14:32 ha-064080 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 17 11:14:32 ha-064080 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 11:14:32 ha-064080 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 17 11:15:32 ha-064080 kubelet[1371]: E0617 11:15:32.163557    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 17 11:15:32 ha-064080 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 17 11:15:32 ha-064080 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 17 11:15:32 ha-064080 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 11:15:32 ha-064080 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0617 11:16:25.083666  139146 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19084-112967/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
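The "bufio.Scanner: token too long" line in the stderr block above is Go's bufio.Scanner hitting its default 64 KiB token limit on a very long single-line entry in lastStart.txt (the cluster-config log lines later in this report show how long such lines get). A minimal sketch of the usual workaround, reading the file line by line with an enlarged scanner buffer; the path is copied from the error message above and the 4 MiB cap is an arbitrary illustrative choice:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	// Sketch: read a log file whose lines can exceed bufio.Scanner's default
	// 64 KiB token limit by giving the scanner a larger buffer up front.
	func main() {
		f, err := os.Open("/home/jenkins/minikube-integration/19084-112967/.minikube/logs/lastStart.txt")
		if err != nil {
			fmt.Println("open:", err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		sc.Buffer(make([]byte, 0, 64*1024), 4*1024*1024) // allow tokens up to 4 MiB
		for sc.Scan() {
			_ = sc.Text() // handle each line here
		}
		if err := sc.Err(); err != nil {
			fmt.Println("scan:", err)
		}
	}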
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-064080 -n ha-064080
helpers_test.go:261: (dbg) Run:  kubectl --context ha-064080 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.61s)
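For context on the repeated kubelet errors in the log above: the kubelet re-runs an iptables "canary" probe about once a minute, and on this guest every attempt to create the KUBE-KUBELET-CANARY chain in the ip6tables nat table fails with "Table does not exist". A rough stand-alone imitation of that probe, assuming root and an ip6tables binary; the chain and table names are taken from the log, everything else is illustrative:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Rough sketch of an ip6tables canary probe: try to create a throwaway
	// chain in the nat table, report the result, and clean up on success.
	func main() {
		out, err := exec.Command("ip6tables", "-t", "nat", "-N", "KUBE-KUBELET-CANARY").CombinedOutput()
		if err != nil {
			// On this guest the output matches the kubelet log: the nat table is unavailable.
			fmt.Printf("canary probe failed: %v\n%s", err, out)
			return
		}
		_ = exec.Command("ip6tables", "-t", "nat", "-X", "KUBE-KUBELET-CANARY").Run()
		fmt.Println("ip6tables nat table is available")
	}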

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (304.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-353869
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-353869
E0617 11:31:51.169697  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-353869: exit status 82 (2m1.942965461s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-353869-m03"  ...
	* Stopping node "multinode-353869-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-353869" : exit status 82
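The exit status 82 recorded here accompanies the GUEST_STOP_TIMEOUT message above: the stop command gave up while the m02/m03 nodes were still reported as "Running". A minimal sketch of driving the same command with an explicit deadline from Go, assuming the out/minikube-linux-amd64 binary and profile name shown in the log; the two-minute budget simply mirrors the roughly 2m1s the harness observed:

	package main

	import (
		"context"
		"errors"
		"fmt"
		"os/exec"
		"time"
	)

	// Sketch: run "minikube stop" for a given profile under an overall deadline
	// and surface the exit code, the way the test harness reports it.
	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()

		cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "stop", "-p", "multinode-353869")
		out, err := cmd.CombinedOutput()
		var ee *exec.ExitError
		switch {
		case errors.As(err, &ee):
			fmt.Printf("stop exited with status %d:\n%s", ee.ExitCode(), out)
		case err != nil:
			fmt.Println("stop did not run:", err)
		default:
			fmt.Printf("stopped cleanly:\n%s", out)
		}
	}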
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-353869 --wait=true -v=8 --alsologtostderr
E0617 11:33:57.397997  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-353869 --wait=true -v=8 --alsologtostderr: (3m0.576612252s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-353869
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-353869 -n multinode-353869
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-353869 logs -n 25: (1.51858421s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-353869 ssh -n                                                                 | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | multinode-353869-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-353869 cp multinode-353869-m02:/home/docker/cp-test.txt                       | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2681374672/001/cp-test_multinode-353869-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-353869 ssh -n                                                                 | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | multinode-353869-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-353869 cp multinode-353869-m02:/home/docker/cp-test.txt                       | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | multinode-353869:/home/docker/cp-test_multinode-353869-m02_multinode-353869.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-353869 ssh -n                                                                 | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | multinode-353869-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-353869 ssh -n multinode-353869 sudo cat                                       | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | /home/docker/cp-test_multinode-353869-m02_multinode-353869.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-353869 cp multinode-353869-m02:/home/docker/cp-test.txt                       | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | multinode-353869-m03:/home/docker/cp-test_multinode-353869-m02_multinode-353869-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-353869 ssh -n                                                                 | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | multinode-353869-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-353869 ssh -n multinode-353869-m03 sudo cat                                   | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | /home/docker/cp-test_multinode-353869-m02_multinode-353869-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-353869 cp testdata/cp-test.txt                                                | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | multinode-353869-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-353869 ssh -n                                                                 | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | multinode-353869-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-353869 cp multinode-353869-m03:/home/docker/cp-test.txt                       | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2681374672/001/cp-test_multinode-353869-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-353869 ssh -n                                                                 | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | multinode-353869-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-353869 cp multinode-353869-m03:/home/docker/cp-test.txt                       | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | multinode-353869:/home/docker/cp-test_multinode-353869-m03_multinode-353869.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-353869 ssh -n                                                                 | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | multinode-353869-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-353869 ssh -n multinode-353869 sudo cat                                       | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | /home/docker/cp-test_multinode-353869-m03_multinode-353869.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-353869 cp multinode-353869-m03:/home/docker/cp-test.txt                       | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | multinode-353869-m02:/home/docker/cp-test_multinode-353869-m03_multinode-353869-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-353869 ssh -n                                                                 | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | multinode-353869-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-353869 ssh -n multinode-353869-m02 sudo cat                                   | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | /home/docker/cp-test_multinode-353869-m03_multinode-353869-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-353869 node stop m03                                                          | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	| node    | multinode-353869 node start                                                             | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:30 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-353869                                                                | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:30 UTC |                     |
	| stop    | -p multinode-353869                                                                     | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:30 UTC |                     |
	| start   | -p multinode-353869                                                                     | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:32 UTC | 17 Jun 24 11:35 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-353869                                                                | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:35 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/17 11:32:25
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0617 11:32:25.961279  148753 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:32:25.961517  148753 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:32:25.961525  148753 out.go:304] Setting ErrFile to fd 2...
	I0617 11:32:25.961530  148753 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:32:25.961698  148753 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:32:25.962232  148753 out.go:298] Setting JSON to false
	I0617 11:32:25.963127  148753 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":4493,"bootTime":1718619453,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0617 11:32:25.963186  148753 start.go:139] virtualization: kvm guest
	I0617 11:32:25.965356  148753 out.go:177] * [multinode-353869] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0617 11:32:25.966518  148753 out.go:177]   - MINIKUBE_LOCATION=19084
	I0617 11:32:25.967669  148753 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 11:32:25.966545  148753 notify.go:220] Checking for updates...
	I0617 11:32:25.968928  148753 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 11:32:25.970099  148753 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 11:32:25.971414  148753 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0617 11:32:25.972692  148753 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 11:32:25.974319  148753 config.go:182] Loaded profile config "multinode-353869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:32:25.974432  148753 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 11:32:25.974870  148753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:32:25.974925  148753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:32:25.990454  148753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33745
	I0617 11:32:25.990892  148753 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:32:25.991408  148753 main.go:141] libmachine: Using API Version  1
	I0617 11:32:25.991430  148753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:32:25.991792  148753 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:32:25.992109  148753 main.go:141] libmachine: (multinode-353869) Calling .DriverName
	I0617 11:32:26.027385  148753 out.go:177] * Using the kvm2 driver based on existing profile
	I0617 11:32:26.028795  148753 start.go:297] selected driver: kvm2
	I0617 11:32:26.028821  148753 start.go:901] validating driver "kvm2" against &{Name:multinode-353869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-353869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.46 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.138 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:32:26.028975  148753 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 11:32:26.029329  148753 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:32:26.029409  148753 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19084-112967/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0617 11:32:26.045358  148753 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0617 11:32:26.046115  148753 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 11:32:26.046145  148753 cni.go:84] Creating CNI manager for ""
	I0617 11:32:26.046151  148753 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0617 11:32:26.046222  148753 start.go:340] cluster config:
	{Name:multinode-353869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-353869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.46 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.138 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:32:26.046362  148753 iso.go:125] acquiring lock: {Name:mk4a199ad46ed9ee04de7b54caf7cc64218fe80c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:32:26.048199  148753 out.go:177] * Starting "multinode-353869" primary control-plane node in "multinode-353869" cluster
	I0617 11:32:26.049406  148753 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 11:32:26.049439  148753 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0617 11:32:26.049455  148753 cache.go:56] Caching tarball of preloaded images
	I0617 11:32:26.049549  148753 preload.go:173] Found /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0617 11:32:26.049564  148753 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0617 11:32:26.049698  148753 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/multinode-353869/config.json ...
	I0617 11:32:26.049934  148753 start.go:360] acquireMachinesLock for multinode-353869: {Name:mk519b8956d160a9d2b042f25b899a5ee0efa72e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 11:32:26.049990  148753 start.go:364] duration metric: took 34.941µs to acquireMachinesLock for "multinode-353869"
	I0617 11:32:26.050010  148753 start.go:96] Skipping create...Using existing machine configuration
	I0617 11:32:26.050018  148753 fix.go:54] fixHost starting: 
	I0617 11:32:26.050346  148753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:32:26.050390  148753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:32:26.064992  148753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36797
	I0617 11:32:26.065419  148753 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:32:26.065915  148753 main.go:141] libmachine: Using API Version  1
	I0617 11:32:26.065938  148753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:32:26.066259  148753 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:32:26.066456  148753 main.go:141] libmachine: (multinode-353869) Calling .DriverName
	I0617 11:32:26.066593  148753 main.go:141] libmachine: (multinode-353869) Calling .GetState
	I0617 11:32:26.068150  148753 fix.go:112] recreateIfNeeded on multinode-353869: state=Running err=<nil>
	W0617 11:32:26.068168  148753 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 11:32:26.070223  148753 out.go:177] * Updating the running kvm2 "multinode-353869" VM ...
	I0617 11:32:26.071432  148753 machine.go:94] provisionDockerMachine start ...
	I0617 11:32:26.071470  148753 main.go:141] libmachine: (multinode-353869) Calling .DriverName
	I0617 11:32:26.071674  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHHostname
	I0617 11:32:26.074212  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:32:26.074637  148753 main.go:141] libmachine: (multinode-353869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:ed:f7", ip: ""} in network mk-multinode-353869: {Iface:virbr1 ExpiryTime:2024-06-17 12:27:41 +0000 UTC Type:0 Mac:52:54:00:ef:ed:f7 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-353869 Clientid:01:52:54:00:ef:ed:f7}
	I0617 11:32:26.074671  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined IP address 192.168.39.17 and MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:32:26.074816  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHPort
	I0617 11:32:26.075001  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHKeyPath
	I0617 11:32:26.075151  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHKeyPath
	I0617 11:32:26.075347  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHUsername
	I0617 11:32:26.075537  148753 main.go:141] libmachine: Using SSH client type: native
	I0617 11:32:26.075716  148753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0617 11:32:26.075725  148753 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 11:32:26.192623  148753 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-353869
	
	I0617 11:32:26.192663  148753 main.go:141] libmachine: (multinode-353869) Calling .GetMachineName
	I0617 11:32:26.192907  148753 buildroot.go:166] provisioning hostname "multinode-353869"
	I0617 11:32:26.192932  148753 main.go:141] libmachine: (multinode-353869) Calling .GetMachineName
	I0617 11:32:26.193235  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHHostname
	I0617 11:32:26.195603  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:32:26.196001  148753 main.go:141] libmachine: (multinode-353869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:ed:f7", ip: ""} in network mk-multinode-353869: {Iface:virbr1 ExpiryTime:2024-06-17 12:27:41 +0000 UTC Type:0 Mac:52:54:00:ef:ed:f7 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-353869 Clientid:01:52:54:00:ef:ed:f7}
	I0617 11:32:26.196040  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined IP address 192.168.39.17 and MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:32:26.196128  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHPort
	I0617 11:32:26.196294  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHKeyPath
	I0617 11:32:26.196484  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHKeyPath
	I0617 11:32:26.196637  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHUsername
	I0617 11:32:26.196805  148753 main.go:141] libmachine: Using SSH client type: native
	I0617 11:32:26.196992  148753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0617 11:32:26.197010  148753 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-353869 && echo "multinode-353869" | sudo tee /etc/hostname
	I0617 11:32:26.327300  148753 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-353869
	
	I0617 11:32:26.327337  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHHostname
	I0617 11:32:26.330135  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:32:26.330485  148753 main.go:141] libmachine: (multinode-353869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:ed:f7", ip: ""} in network mk-multinode-353869: {Iface:virbr1 ExpiryTime:2024-06-17 12:27:41 +0000 UTC Type:0 Mac:52:54:00:ef:ed:f7 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-353869 Clientid:01:52:54:00:ef:ed:f7}
	I0617 11:32:26.330515  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined IP address 192.168.39.17 and MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:32:26.330676  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHPort
	I0617 11:32:26.330870  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHKeyPath
	I0617 11:32:26.331010  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHKeyPath
	I0617 11:32:26.331149  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHUsername
	I0617 11:32:26.331336  148753 main.go:141] libmachine: Using SSH client type: native
	I0617 11:32:26.331550  148753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0617 11:32:26.331567  148753 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-353869' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-353869/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-353869' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 11:32:26.444506  148753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 11:32:26.444543  148753 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 11:32:26.444588  148753 buildroot.go:174] setting up certificates
	I0617 11:32:26.444597  148753 provision.go:84] configureAuth start
	I0617 11:32:26.444610  148753 main.go:141] libmachine: (multinode-353869) Calling .GetMachineName
	I0617 11:32:26.444922  148753 main.go:141] libmachine: (multinode-353869) Calling .GetIP
	I0617 11:32:26.447482  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:32:26.447841  148753 main.go:141] libmachine: (multinode-353869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:ed:f7", ip: ""} in network mk-multinode-353869: {Iface:virbr1 ExpiryTime:2024-06-17 12:27:41 +0000 UTC Type:0 Mac:52:54:00:ef:ed:f7 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-353869 Clientid:01:52:54:00:ef:ed:f7}
	I0617 11:32:26.447861  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined IP address 192.168.39.17 and MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:32:26.448025  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHHostname
	I0617 11:32:26.449996  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:32:26.450370  148753 main.go:141] libmachine: (multinode-353869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:ed:f7", ip: ""} in network mk-multinode-353869: {Iface:virbr1 ExpiryTime:2024-06-17 12:27:41 +0000 UTC Type:0 Mac:52:54:00:ef:ed:f7 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-353869 Clientid:01:52:54:00:ef:ed:f7}
	I0617 11:32:26.450397  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined IP address 192.168.39.17 and MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:32:26.450498  148753 provision.go:143] copyHostCerts
	I0617 11:32:26.450530  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 11:32:26.450583  148753 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 11:32:26.450595  148753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 11:32:26.450670  148753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 11:32:26.450763  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 11:32:26.450788  148753 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 11:32:26.450794  148753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 11:32:26.450834  148753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 11:32:26.450895  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 11:32:26.450933  148753 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 11:32:26.450942  148753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 11:32:26.450976  148753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 11:32:26.451055  148753 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.multinode-353869 san=[127.0.0.1 192.168.39.17 localhost minikube multinode-353869]
	I0617 11:32:26.887475  148753 provision.go:177] copyRemoteCerts
	I0617 11:32:26.887550  148753 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 11:32:26.887582  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHHostname
	I0617 11:32:26.890524  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:32:26.890883  148753 main.go:141] libmachine: (multinode-353869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:ed:f7", ip: ""} in network mk-multinode-353869: {Iface:virbr1 ExpiryTime:2024-06-17 12:27:41 +0000 UTC Type:0 Mac:52:54:00:ef:ed:f7 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-353869 Clientid:01:52:54:00:ef:ed:f7}
	I0617 11:32:26.890918  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined IP address 192.168.39.17 and MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:32:26.891108  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHPort
	I0617 11:32:26.891326  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHKeyPath
	I0617 11:32:26.891513  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHUsername
	I0617 11:32:26.891677  148753 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/multinode-353869/id_rsa Username:docker}
	I0617 11:32:26.977897  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0617 11:32:26.977974  148753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 11:32:27.002624  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0617 11:32:27.002692  148753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0617 11:32:27.027368  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0617 11:32:27.027435  148753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0617 11:32:27.051585  148753 provision.go:87] duration metric: took 606.973497ms to configureAuth
	I0617 11:32:27.051610  148753 buildroot.go:189] setting minikube options for container-runtime
	I0617 11:32:27.051850  148753 config.go:182] Loaded profile config "multinode-353869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:32:27.051959  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHHostname
	I0617 11:32:27.055006  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:32:27.055367  148753 main.go:141] libmachine: (multinode-353869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:ed:f7", ip: ""} in network mk-multinode-353869: {Iface:virbr1 ExpiryTime:2024-06-17 12:27:41 +0000 UTC Type:0 Mac:52:54:00:ef:ed:f7 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-353869 Clientid:01:52:54:00:ef:ed:f7}
	I0617 11:32:27.055400  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined IP address 192.168.39.17 and MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:32:27.055622  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHPort
	I0617 11:32:27.055836  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHKeyPath
	I0617 11:32:27.056062  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHKeyPath
	I0617 11:32:27.056184  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHUsername
	I0617 11:32:27.056389  148753 main.go:141] libmachine: Using SSH client type: native
	I0617 11:32:27.056567  148753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0617 11:32:27.056588  148753 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 11:33:57.781443  148753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 11:33:57.781476  148753 machine.go:97] duration metric: took 1m31.710026688s to provisionDockerMachine
	I0617 11:33:57.781497  148753 start.go:293] postStartSetup for "multinode-353869" (driver="kvm2")
	I0617 11:33:57.781509  148753 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 11:33:57.781532  148753 main.go:141] libmachine: (multinode-353869) Calling .DriverName
	I0617 11:33:57.781891  148753 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 11:33:57.781930  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHHostname
	I0617 11:33:57.785057  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:33:57.785539  148753 main.go:141] libmachine: (multinode-353869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:ed:f7", ip: ""} in network mk-multinode-353869: {Iface:virbr1 ExpiryTime:2024-06-17 12:27:41 +0000 UTC Type:0 Mac:52:54:00:ef:ed:f7 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-353869 Clientid:01:52:54:00:ef:ed:f7}
	I0617 11:33:57.785568  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined IP address 192.168.39.17 and MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:33:57.785720  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHPort
	I0617 11:33:57.785974  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHKeyPath
	I0617 11:33:57.786154  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHUsername
	I0617 11:33:57.786293  148753 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/multinode-353869/id_rsa Username:docker}
	I0617 11:33:57.875173  148753 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 11:33:57.879441  148753 command_runner.go:130] > NAME=Buildroot
	I0617 11:33:57.879477  148753 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0617 11:33:57.879483  148753 command_runner.go:130] > ID=buildroot
	I0617 11:33:57.879490  148753 command_runner.go:130] > VERSION_ID=2023.02.9
	I0617 11:33:57.879497  148753 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0617 11:33:57.879541  148753 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 11:33:57.879557  148753 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 11:33:57.879624  148753 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 11:33:57.879718  148753 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 11:33:57.879729  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> /etc/ssl/certs/1201742.pem
	I0617 11:33:57.879857  148753 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 11:33:57.888732  148753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 11:33:57.912636  148753 start.go:296] duration metric: took 131.122511ms for postStartSetup
	I0617 11:33:57.912677  148753 fix.go:56] duration metric: took 1m31.862660345s for fixHost
	I0617 11:33:57.912704  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHHostname
	I0617 11:33:57.915219  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:33:57.915610  148753 main.go:141] libmachine: (multinode-353869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:ed:f7", ip: ""} in network mk-multinode-353869: {Iface:virbr1 ExpiryTime:2024-06-17 12:27:41 +0000 UTC Type:0 Mac:52:54:00:ef:ed:f7 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-353869 Clientid:01:52:54:00:ef:ed:f7}
	I0617 11:33:57.915655  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined IP address 192.168.39.17 and MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:33:57.915829  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHPort
	I0617 11:33:57.916155  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHKeyPath
	I0617 11:33:57.916330  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHKeyPath
	I0617 11:33:57.916464  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHUsername
	I0617 11:33:57.916625  148753 main.go:141] libmachine: Using SSH client type: native
	I0617 11:33:57.916819  148753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0617 11:33:57.916833  148753 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0617 11:33:58.032688  148753 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718624038.012948646
	
	I0617 11:33:58.032713  148753 fix.go:216] guest clock: 1718624038.012948646
	I0617 11:33:58.032720  148753 fix.go:229] Guest: 2024-06-17 11:33:58.012948646 +0000 UTC Remote: 2024-06-17 11:33:57.912682426 +0000 UTC m=+91.988111464 (delta=100.26622ms)
	I0617 11:33:58.032741  148753 fix.go:200] guest clock delta is within tolerance: 100.26622ms
	I0617 11:33:58.032745  148753 start.go:83] releasing machines lock for "multinode-353869", held for 1m31.982742914s
	I0617 11:33:58.032766  148753 main.go:141] libmachine: (multinode-353869) Calling .DriverName
	I0617 11:33:58.033031  148753 main.go:141] libmachine: (multinode-353869) Calling .GetIP
	I0617 11:33:58.035523  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:33:58.035877  148753 main.go:141] libmachine: (multinode-353869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:ed:f7", ip: ""} in network mk-multinode-353869: {Iface:virbr1 ExpiryTime:2024-06-17 12:27:41 +0000 UTC Type:0 Mac:52:54:00:ef:ed:f7 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-353869 Clientid:01:52:54:00:ef:ed:f7}
	I0617 11:33:58.035906  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined IP address 192.168.39.17 and MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:33:58.036066  148753 main.go:141] libmachine: (multinode-353869) Calling .DriverName
	I0617 11:33:58.036549  148753 main.go:141] libmachine: (multinode-353869) Calling .DriverName
	I0617 11:33:58.036725  148753 main.go:141] libmachine: (multinode-353869) Calling .DriverName
	I0617 11:33:58.036789  148753 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 11:33:58.036849  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHHostname
	I0617 11:33:58.036958  148753 ssh_runner.go:195] Run: cat /version.json
	I0617 11:33:58.036984  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHHostname
	I0617 11:33:58.039633  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:33:58.039662  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:33:58.040009  148753 main.go:141] libmachine: (multinode-353869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:ed:f7", ip: ""} in network mk-multinode-353869: {Iface:virbr1 ExpiryTime:2024-06-17 12:27:41 +0000 UTC Type:0 Mac:52:54:00:ef:ed:f7 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-353869 Clientid:01:52:54:00:ef:ed:f7}
	I0617 11:33:58.040042  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined IP address 192.168.39.17 and MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:33:58.040069  148753 main.go:141] libmachine: (multinode-353869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:ed:f7", ip: ""} in network mk-multinode-353869: {Iface:virbr1 ExpiryTime:2024-06-17 12:27:41 +0000 UTC Type:0 Mac:52:54:00:ef:ed:f7 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-353869 Clientid:01:52:54:00:ef:ed:f7}
	I0617 11:33:58.040087  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined IP address 192.168.39.17 and MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:33:58.040127  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHPort
	I0617 11:33:58.040323  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHPort
	I0617 11:33:58.040324  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHKeyPath
	I0617 11:33:58.040524  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHKeyPath
	I0617 11:33:58.040527  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHUsername
	I0617 11:33:58.040706  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHUsername
	I0617 11:33:58.040698  148753 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/multinode-353869/id_rsa Username:docker}
	I0617 11:33:58.040804  148753 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/multinode-353869/id_rsa Username:docker}
	I0617 11:33:58.145679  148753 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0617 11:33:58.146469  148753 command_runner.go:130] > {"iso_version": "v1.33.1-1718047936-19044", "kicbase_version": "v0.0.44-1718016726-19044", "minikube_version": "v1.33.1", "commit": "8a07c05cb41cba41fd6bf6981cdae9c899c82330"}
	I0617 11:33:58.146637  148753 ssh_runner.go:195] Run: systemctl --version
	I0617 11:33:58.152680  148753 command_runner.go:130] > systemd 252 (252)
	I0617 11:33:58.152712  148753 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0617 11:33:58.153058  148753 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 11:33:58.322242  148753 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0617 11:33:58.328696  148753 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0617 11:33:58.328745  148753 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 11:33:58.328792  148753 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 11:33:58.338502  148753 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0617 11:33:58.338522  148753 start.go:494] detecting cgroup driver to use...
	I0617 11:33:58.338580  148753 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 11:33:58.356862  148753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 11:33:58.372505  148753 docker.go:217] disabling cri-docker service (if available) ...
	I0617 11:33:58.372556  148753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 11:33:58.386275  148753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 11:33:58.399935  148753 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 11:33:58.540508  148753 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 11:33:58.672719  148753 docker.go:233] disabling docker service ...
	I0617 11:33:58.672808  148753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 11:33:58.688012  148753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 11:33:58.701122  148753 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 11:33:58.833952  148753 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 11:33:58.975681  148753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 11:33:58.990913  148753 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 11:33:59.010824  148753 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0617 11:33:59.010907  148753 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0617 11:33:59.010965  148753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:33:59.022161  148753 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 11:33:59.022228  148753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:33:59.033915  148753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:33:59.045436  148753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:33:59.056522  148753 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 11:33:59.068798  148753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:33:59.079937  148753 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:33:59.091408  148753 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:33:59.102243  148753 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 11:33:59.111775  148753 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0617 11:33:59.111860  148753 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 11:33:59.121433  148753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:33:59.271819  148753 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0617 11:34:05.518896  148753 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.247037845s)
	I0617 11:34:05.518926  148753 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 11:34:05.518981  148753 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 11:34:05.523890  148753 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0617 11:34:05.523921  148753 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0617 11:34:05.523931  148753 command_runner.go:130] > Device: 0,22	Inode: 1329        Links: 1
	I0617 11:34:05.523942  148753 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0617 11:34:05.523952  148753 command_runner.go:130] > Access: 2024-06-17 11:34:05.398911187 +0000
	I0617 11:34:05.523963  148753 command_runner.go:130] > Modify: 2024-06-17 11:34:05.398911187 +0000
	I0617 11:34:05.523975  148753 command_runner.go:130] > Change: 2024-06-17 11:34:05.398911187 +0000
	I0617 11:34:05.523981  148753 command_runner.go:130] >  Birth: -
	I0617 11:34:05.524003  148753 start.go:562] Will wait 60s for crictl version
	I0617 11:34:05.524051  148753 ssh_runner.go:195] Run: which crictl
	I0617 11:34:05.527736  148753 command_runner.go:130] > /usr/bin/crictl
	I0617 11:34:05.527797  148753 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 11:34:05.567279  148753 command_runner.go:130] > Version:  0.1.0
	I0617 11:34:05.567305  148753 command_runner.go:130] > RuntimeName:  cri-o
	I0617 11:34:05.567335  148753 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0617 11:34:05.567346  148753 command_runner.go:130] > RuntimeApiVersion:  v1
	I0617 11:34:05.567367  148753 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 11:34:05.567436  148753 ssh_runner.go:195] Run: crio --version
	I0617 11:34:05.595921  148753 command_runner.go:130] > crio version 1.29.1
	I0617 11:34:05.595948  148753 command_runner.go:130] > Version:        1.29.1
	I0617 11:34:05.595956  148753 command_runner.go:130] > GitCommit:      unknown
	I0617 11:34:05.595963  148753 command_runner.go:130] > GitCommitDate:  unknown
	I0617 11:34:05.595975  148753 command_runner.go:130] > GitTreeState:   clean
	I0617 11:34:05.595984  148753 command_runner.go:130] > BuildDate:      2024-06-11T00:56:20Z
	I0617 11:34:05.595993  148753 command_runner.go:130] > GoVersion:      go1.21.6
	I0617 11:34:05.596008  148753 command_runner.go:130] > Compiler:       gc
	I0617 11:34:05.596018  148753 command_runner.go:130] > Platform:       linux/amd64
	I0617 11:34:05.596027  148753 command_runner.go:130] > Linkmode:       dynamic
	I0617 11:34:05.596037  148753 command_runner.go:130] > BuildTags:      
	I0617 11:34:05.596046  148753 command_runner.go:130] >   containers_image_ostree_stub
	I0617 11:34:05.596053  148753 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0617 11:34:05.596057  148753 command_runner.go:130] >   btrfs_noversion
	I0617 11:34:05.596062  148753 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0617 11:34:05.596066  148753 command_runner.go:130] >   libdm_no_deferred_remove
	I0617 11:34:05.596070  148753 command_runner.go:130] >   seccomp
	I0617 11:34:05.596074  148753 command_runner.go:130] > LDFlags:          unknown
	I0617 11:34:05.596080  148753 command_runner.go:130] > SeccompEnabled:   true
	I0617 11:34:05.596085  148753 command_runner.go:130] > AppArmorEnabled:  false
	I0617 11:34:05.597093  148753 ssh_runner.go:195] Run: crio --version
	I0617 11:34:05.630174  148753 command_runner.go:130] > crio version 1.29.1
	I0617 11:34:05.630196  148753 command_runner.go:130] > Version:        1.29.1
	I0617 11:34:05.630204  148753 command_runner.go:130] > GitCommit:      unknown
	I0617 11:34:05.630209  148753 command_runner.go:130] > GitCommitDate:  unknown
	I0617 11:34:05.630215  148753 command_runner.go:130] > GitTreeState:   clean
	I0617 11:34:05.630224  148753 command_runner.go:130] > BuildDate:      2024-06-11T00:56:20Z
	I0617 11:34:05.630231  148753 command_runner.go:130] > GoVersion:      go1.21.6
	I0617 11:34:05.630239  148753 command_runner.go:130] > Compiler:       gc
	I0617 11:34:05.630247  148753 command_runner.go:130] > Platform:       linux/amd64
	I0617 11:34:05.630254  148753 command_runner.go:130] > Linkmode:       dynamic
	I0617 11:34:05.630262  148753 command_runner.go:130] > BuildTags:      
	I0617 11:34:05.630273  148753 command_runner.go:130] >   containers_image_ostree_stub
	I0617 11:34:05.630281  148753 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0617 11:34:05.630288  148753 command_runner.go:130] >   btrfs_noversion
	I0617 11:34:05.630296  148753 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0617 11:34:05.630307  148753 command_runner.go:130] >   libdm_no_deferred_remove
	I0617 11:34:05.630314  148753 command_runner.go:130] >   seccomp
	I0617 11:34:05.630321  148753 command_runner.go:130] > LDFlags:          unknown
	I0617 11:34:05.630328  148753 command_runner.go:130] > SeccompEnabled:   true
	I0617 11:34:05.630336  148753 command_runner.go:130] > AppArmorEnabled:  false
	I0617 11:34:05.633489  148753 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0617 11:34:05.634834  148753 main.go:141] libmachine: (multinode-353869) Calling .GetIP
	I0617 11:34:05.637476  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:34:05.637783  148753 main.go:141] libmachine: (multinode-353869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:ed:f7", ip: ""} in network mk-multinode-353869: {Iface:virbr1 ExpiryTime:2024-06-17 12:27:41 +0000 UTC Type:0 Mac:52:54:00:ef:ed:f7 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-353869 Clientid:01:52:54:00:ef:ed:f7}
	I0617 11:34:05.637802  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined IP address 192.168.39.17 and MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:34:05.638041  148753 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0617 11:34:05.642196  148753 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0617 11:34:05.642337  148753 kubeadm.go:877] updating cluster {Name:multinode-353869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-353869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.46 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.138 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 11:34:05.642486  148753 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 11:34:05.642532  148753 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 11:34:05.693554  148753 command_runner.go:130] > {
	I0617 11:34:05.693584  148753 command_runner.go:130] >   "images": [
	I0617 11:34:05.693591  148753 command_runner.go:130] >     {
	I0617 11:34:05.693600  148753 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0617 11:34:05.693605  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.693611  148753 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0617 11:34:05.693615  148753 command_runner.go:130] >       ],
	I0617 11:34:05.693619  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.693626  148753 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0617 11:34:05.693633  148753 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0617 11:34:05.693637  148753 command_runner.go:130] >       ],
	I0617 11:34:05.693642  148753 command_runner.go:130] >       "size": "65291810",
	I0617 11:34:05.693652  148753 command_runner.go:130] >       "uid": null,
	I0617 11:34:05.693660  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.693675  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.693682  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.693690  148753 command_runner.go:130] >     },
	I0617 11:34:05.693693  148753 command_runner.go:130] >     {
	I0617 11:34:05.693700  148753 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0617 11:34:05.693704  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.693710  148753 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0617 11:34:05.693714  148753 command_runner.go:130] >       ],
	I0617 11:34:05.693718  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.693726  148753 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0617 11:34:05.693740  148753 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0617 11:34:05.693750  148753 command_runner.go:130] >       ],
	I0617 11:34:05.693757  148753 command_runner.go:130] >       "size": "65908273",
	I0617 11:34:05.693762  148753 command_runner.go:130] >       "uid": null,
	I0617 11:34:05.693776  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.693786  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.693792  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.693799  148753 command_runner.go:130] >     },
	I0617 11:34:05.693803  148753 command_runner.go:130] >     {
	I0617 11:34:05.693811  148753 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0617 11:34:05.693817  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.693829  148753 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0617 11:34:05.693841  148753 command_runner.go:130] >       ],
	I0617 11:34:05.693850  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.693864  148753 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0617 11:34:05.693879  148753 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0617 11:34:05.693887  148753 command_runner.go:130] >       ],
	I0617 11:34:05.693893  148753 command_runner.go:130] >       "size": "1363676",
	I0617 11:34:05.693899  148753 command_runner.go:130] >       "uid": null,
	I0617 11:34:05.693907  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.693916  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.693926  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.693932  148753 command_runner.go:130] >     },
	I0617 11:34:05.693941  148753 command_runner.go:130] >     {
	I0617 11:34:05.693954  148753 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0617 11:34:05.693963  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.693973  148753 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0617 11:34:05.693980  148753 command_runner.go:130] >       ],
	I0617 11:34:05.693985  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.694002  148753 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0617 11:34:05.694029  148753 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0617 11:34:05.694039  148753 command_runner.go:130] >       ],
	I0617 11:34:05.694046  148753 command_runner.go:130] >       "size": "31470524",
	I0617 11:34:05.694060  148753 command_runner.go:130] >       "uid": null,
	I0617 11:34:05.694067  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.694072  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.694083  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.694091  148753 command_runner.go:130] >     },
	I0617 11:34:05.694098  148753 command_runner.go:130] >     {
	I0617 11:34:05.694111  148753 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0617 11:34:05.694120  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.694132  148753 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0617 11:34:05.694141  148753 command_runner.go:130] >       ],
	I0617 11:34:05.694148  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.694156  148753 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0617 11:34:05.694171  148753 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0617 11:34:05.694181  148753 command_runner.go:130] >       ],
	I0617 11:34:05.694191  148753 command_runner.go:130] >       "size": "61245718",
	I0617 11:34:05.694199  148753 command_runner.go:130] >       "uid": null,
	I0617 11:34:05.694209  148753 command_runner.go:130] >       "username": "nonroot",
	I0617 11:34:05.694218  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.694227  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.694234  148753 command_runner.go:130] >     },
	I0617 11:34:05.694237  148753 command_runner.go:130] >     {
	I0617 11:34:05.694250  148753 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0617 11:34:05.694260  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.694271  148753 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0617 11:34:05.694280  148753 command_runner.go:130] >       ],
	I0617 11:34:05.694289  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.694303  148753 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0617 11:34:05.694316  148753 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0617 11:34:05.694322  148753 command_runner.go:130] >       ],
	I0617 11:34:05.694328  148753 command_runner.go:130] >       "size": "150779692",
	I0617 11:34:05.694337  148753 command_runner.go:130] >       "uid": {
	I0617 11:34:05.694348  148753 command_runner.go:130] >         "value": "0"
	I0617 11:34:05.694358  148753 command_runner.go:130] >       },
	I0617 11:34:05.694367  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.694377  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.694386  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.694394  148753 command_runner.go:130] >     },
	I0617 11:34:05.694400  148753 command_runner.go:130] >     {
	I0617 11:34:05.694409  148753 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0617 11:34:05.694418  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.694431  148753 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0617 11:34:05.694439  148753 command_runner.go:130] >       ],
	I0617 11:34:05.694446  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.694461  148753 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0617 11:34:05.694476  148753 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0617 11:34:05.694485  148753 command_runner.go:130] >       ],
	I0617 11:34:05.694490  148753 command_runner.go:130] >       "size": "117601759",
	I0617 11:34:05.694494  148753 command_runner.go:130] >       "uid": {
	I0617 11:34:05.694499  148753 command_runner.go:130] >         "value": "0"
	I0617 11:34:05.694508  148753 command_runner.go:130] >       },
	I0617 11:34:05.694518  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.694527  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.694536  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.694546  148753 command_runner.go:130] >     },
	I0617 11:34:05.694554  148753 command_runner.go:130] >     {
	I0617 11:34:05.694564  148753 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0617 11:34:05.694572  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.694580  148753 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0617 11:34:05.694585  148753 command_runner.go:130] >       ],
	I0617 11:34:05.694593  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.694618  148753 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0617 11:34:05.694633  148753 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0617 11:34:05.694641  148753 command_runner.go:130] >       ],
	I0617 11:34:05.694648  148753 command_runner.go:130] >       "size": "112170310",
	I0617 11:34:05.694656  148753 command_runner.go:130] >       "uid": {
	I0617 11:34:05.694667  148753 command_runner.go:130] >         "value": "0"
	I0617 11:34:05.694676  148753 command_runner.go:130] >       },
	I0617 11:34:05.694684  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.694691  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.694698  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.694702  148753 command_runner.go:130] >     },
	I0617 11:34:05.694708  148753 command_runner.go:130] >     {
	I0617 11:34:05.694718  148753 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0617 11:34:05.694725  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.694733  148753 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0617 11:34:05.694742  148753 command_runner.go:130] >       ],
	I0617 11:34:05.694748  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.694765  148753 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0617 11:34:05.694780  148753 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0617 11:34:05.694789  148753 command_runner.go:130] >       ],
	I0617 11:34:05.694796  148753 command_runner.go:130] >       "size": "85933465",
	I0617 11:34:05.694805  148753 command_runner.go:130] >       "uid": null,
	I0617 11:34:05.694814  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.694824  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.694834  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.694845  148753 command_runner.go:130] >     },
	I0617 11:34:05.694850  148753 command_runner.go:130] >     {
	I0617 11:34:05.694860  148753 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0617 11:34:05.694867  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.694875  148753 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0617 11:34:05.694881  148753 command_runner.go:130] >       ],
	I0617 11:34:05.694887  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.694902  148753 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0617 11:34:05.694917  148753 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0617 11:34:05.694928  148753 command_runner.go:130] >       ],
	I0617 11:34:05.694936  148753 command_runner.go:130] >       "size": "63026504",
	I0617 11:34:05.694945  148753 command_runner.go:130] >       "uid": {
	I0617 11:34:05.694952  148753 command_runner.go:130] >         "value": "0"
	I0617 11:34:05.694961  148753 command_runner.go:130] >       },
	I0617 11:34:05.694968  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.694975  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.694981  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.694990  148753 command_runner.go:130] >     },
	I0617 11:34:05.694996  148753 command_runner.go:130] >     {
	I0617 11:34:05.695006  148753 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0617 11:34:05.695020  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.695031  148753 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0617 11:34:05.695038  148753 command_runner.go:130] >       ],
	I0617 11:34:05.695049  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.695063  148753 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0617 11:34:05.695077  148753 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0617 11:34:05.695086  148753 command_runner.go:130] >       ],
	I0617 11:34:05.695095  148753 command_runner.go:130] >       "size": "750414",
	I0617 11:34:05.695103  148753 command_runner.go:130] >       "uid": {
	I0617 11:34:05.695107  148753 command_runner.go:130] >         "value": "65535"
	I0617 11:34:05.695114  148753 command_runner.go:130] >       },
	I0617 11:34:05.695125  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.695135  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.695141  148753 command_runner.go:130] >       "pinned": true
	I0617 11:34:05.695150  148753 command_runner.go:130] >     }
	I0617 11:34:05.695158  148753 command_runner.go:130] >   ]
	I0617 11:34:05.695163  148753 command_runner.go:130] > }
	I0617 11:34:05.695371  148753 crio.go:514] all images are preloaded for cri-o runtime.
	I0617 11:34:05.695384  148753 crio.go:433] Images already preloaded, skipping extraction
	I0617 11:34:05.695436  148753 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 11:34:05.727339  148753 command_runner.go:130] > {
	I0617 11:34:05.727358  148753 command_runner.go:130] >   "images": [
	I0617 11:34:05.727362  148753 command_runner.go:130] >     {
	I0617 11:34:05.727372  148753 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0617 11:34:05.727377  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.727382  148753 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0617 11:34:05.727386  148753 command_runner.go:130] >       ],
	I0617 11:34:05.727390  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.727399  148753 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0617 11:34:05.727406  148753 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0617 11:34:05.727410  148753 command_runner.go:130] >       ],
	I0617 11:34:05.727416  148753 command_runner.go:130] >       "size": "65291810",
	I0617 11:34:05.727429  148753 command_runner.go:130] >       "uid": null,
	I0617 11:34:05.727436  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.727441  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.727445  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.727448  148753 command_runner.go:130] >     },
	I0617 11:34:05.727452  148753 command_runner.go:130] >     {
	I0617 11:34:05.727476  148753 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0617 11:34:05.727483  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.727495  148753 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0617 11:34:05.727501  148753 command_runner.go:130] >       ],
	I0617 11:34:05.727511  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.727521  148753 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0617 11:34:05.727529  148753 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0617 11:34:05.727533  148753 command_runner.go:130] >       ],
	I0617 11:34:05.727539  148753 command_runner.go:130] >       "size": "65908273",
	I0617 11:34:05.727542  148753 command_runner.go:130] >       "uid": null,
	I0617 11:34:05.727549  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.727553  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.727557  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.727560  148753 command_runner.go:130] >     },
	I0617 11:34:05.727565  148753 command_runner.go:130] >     {
	I0617 11:34:05.727575  148753 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0617 11:34:05.727585  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.727592  148753 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0617 11:34:05.727602  148753 command_runner.go:130] >       ],
	I0617 11:34:05.727610  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.727621  148753 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0617 11:34:05.727632  148753 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0617 11:34:05.727638  148753 command_runner.go:130] >       ],
	I0617 11:34:05.727643  148753 command_runner.go:130] >       "size": "1363676",
	I0617 11:34:05.727649  148753 command_runner.go:130] >       "uid": null,
	I0617 11:34:05.727653  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.727660  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.727664  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.727667  148753 command_runner.go:130] >     },
	I0617 11:34:05.727673  148753 command_runner.go:130] >     {
	I0617 11:34:05.727679  148753 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0617 11:34:05.727685  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.727690  148753 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0617 11:34:05.727696  148753 command_runner.go:130] >       ],
	I0617 11:34:05.727703  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.727713  148753 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0617 11:34:05.727727  148753 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0617 11:34:05.727733  148753 command_runner.go:130] >       ],
	I0617 11:34:05.727737  148753 command_runner.go:130] >       "size": "31470524",
	I0617 11:34:05.727743  148753 command_runner.go:130] >       "uid": null,
	I0617 11:34:05.727748  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.727754  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.727758  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.727764  148753 command_runner.go:130] >     },
	I0617 11:34:05.727772  148753 command_runner.go:130] >     {
	I0617 11:34:05.727781  148753 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0617 11:34:05.727785  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.727790  148753 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0617 11:34:05.727796  148753 command_runner.go:130] >       ],
	I0617 11:34:05.727800  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.727810  148753 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0617 11:34:05.727819  148753 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0617 11:34:05.727825  148753 command_runner.go:130] >       ],
	I0617 11:34:05.727829  148753 command_runner.go:130] >       "size": "61245718",
	I0617 11:34:05.727835  148753 command_runner.go:130] >       "uid": null,
	I0617 11:34:05.727840  148753 command_runner.go:130] >       "username": "nonroot",
	I0617 11:34:05.727846  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.727857  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.727862  148753 command_runner.go:130] >     },
	I0617 11:34:05.727865  148753 command_runner.go:130] >     {
	I0617 11:34:05.727872  148753 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0617 11:34:05.727878  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.727882  148753 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0617 11:34:05.727888  148753 command_runner.go:130] >       ],
	I0617 11:34:05.727892  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.727901  148753 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0617 11:34:05.727910  148753 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0617 11:34:05.727916  148753 command_runner.go:130] >       ],
	I0617 11:34:05.727920  148753 command_runner.go:130] >       "size": "150779692",
	I0617 11:34:05.727926  148753 command_runner.go:130] >       "uid": {
	I0617 11:34:05.727930  148753 command_runner.go:130] >         "value": "0"
	I0617 11:34:05.727936  148753 command_runner.go:130] >       },
	I0617 11:34:05.727940  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.727946  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.727951  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.727957  148753 command_runner.go:130] >     },
	I0617 11:34:05.727960  148753 command_runner.go:130] >     {
	I0617 11:34:05.727966  148753 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0617 11:34:05.727972  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.727977  148753 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0617 11:34:05.727983  148753 command_runner.go:130] >       ],
	I0617 11:34:05.727987  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.727996  148753 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0617 11:34:05.728005  148753 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0617 11:34:05.728011  148753 command_runner.go:130] >       ],
	I0617 11:34:05.728016  148753 command_runner.go:130] >       "size": "117601759",
	I0617 11:34:05.728021  148753 command_runner.go:130] >       "uid": {
	I0617 11:34:05.728025  148753 command_runner.go:130] >         "value": "0"
	I0617 11:34:05.728031  148753 command_runner.go:130] >       },
	I0617 11:34:05.728035  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.728041  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.728045  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.728051  148753 command_runner.go:130] >     },
	I0617 11:34:05.728055  148753 command_runner.go:130] >     {
	I0617 11:34:05.728063  148753 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0617 11:34:05.728069  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.728075  148753 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0617 11:34:05.728081  148753 command_runner.go:130] >       ],
	I0617 11:34:05.728085  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.728100  148753 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0617 11:34:05.728110  148753 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0617 11:34:05.728116  148753 command_runner.go:130] >       ],
	I0617 11:34:05.728120  148753 command_runner.go:130] >       "size": "112170310",
	I0617 11:34:05.728126  148753 command_runner.go:130] >       "uid": {
	I0617 11:34:05.728130  148753 command_runner.go:130] >         "value": "0"
	I0617 11:34:05.728133  148753 command_runner.go:130] >       },
	I0617 11:34:05.728139  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.728144  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.728150  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.728153  148753 command_runner.go:130] >     },
	I0617 11:34:05.728159  148753 command_runner.go:130] >     {
	I0617 11:34:05.728164  148753 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0617 11:34:05.728170  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.728175  148753 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0617 11:34:05.728181  148753 command_runner.go:130] >       ],
	I0617 11:34:05.728185  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.728194  148753 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0617 11:34:05.728204  148753 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0617 11:34:05.728209  148753 command_runner.go:130] >       ],
	I0617 11:34:05.728213  148753 command_runner.go:130] >       "size": "85933465",
	I0617 11:34:05.728219  148753 command_runner.go:130] >       "uid": null,
	I0617 11:34:05.728223  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.728230  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.728233  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.728239  148753 command_runner.go:130] >     },
	I0617 11:34:05.728242  148753 command_runner.go:130] >     {
	I0617 11:34:05.728250  148753 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0617 11:34:05.728256  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.728261  148753 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0617 11:34:05.728268  148753 command_runner.go:130] >       ],
	I0617 11:34:05.728272  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.728280  148753 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0617 11:34:05.728289  148753 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0617 11:34:05.728295  148753 command_runner.go:130] >       ],
	I0617 11:34:05.728299  148753 command_runner.go:130] >       "size": "63026504",
	I0617 11:34:05.728306  148753 command_runner.go:130] >       "uid": {
	I0617 11:34:05.728309  148753 command_runner.go:130] >         "value": "0"
	I0617 11:34:05.728315  148753 command_runner.go:130] >       },
	I0617 11:34:05.728318  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.728325  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.728330  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.728336  148753 command_runner.go:130] >     },
	I0617 11:34:05.728339  148753 command_runner.go:130] >     {
	I0617 11:34:05.728350  148753 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0617 11:34:05.728356  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.728360  148753 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0617 11:34:05.728366  148753 command_runner.go:130] >       ],
	I0617 11:34:05.728370  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.728379  148753 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0617 11:34:05.728388  148753 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0617 11:34:05.728394  148753 command_runner.go:130] >       ],
	I0617 11:34:05.728398  148753 command_runner.go:130] >       "size": "750414",
	I0617 11:34:05.728402  148753 command_runner.go:130] >       "uid": {
	I0617 11:34:05.728408  148753 command_runner.go:130] >         "value": "65535"
	I0617 11:34:05.728412  148753 command_runner.go:130] >       },
	I0617 11:34:05.728418  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.728422  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.728428  148753 command_runner.go:130] >       "pinned": true
	I0617 11:34:05.728431  148753 command_runner.go:130] >     }
	I0617 11:34:05.728436  148753 command_runner.go:130] >   ]
	I0617 11:34:05.728439  148753 command_runner.go:130] > }
	I0617 11:34:05.728569  148753 crio.go:514] all images are preloaded for cri-o runtime.
	I0617 11:34:05.728583  148753 cache_images.go:84] Images are preloaded, skipping loading
	I0617 11:34:05.728590  148753 kubeadm.go:928] updating node { 192.168.39.17 8443 v1.30.1 crio true true} ...
	I0617 11:34:05.728696  148753 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-353869 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-353869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
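The [Unit]/[Service]/[Install] fragment logged above is the kubelet systemd override that minikube renders for this node. A minimal sketch of the resulting drop-in is shown below; the path follows the kubeadm convention and is an assumption, since the log does not show where the minikube guest actually writes it.
	# Assumed path (kubeadm convention); not shown in this log.
	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	[Unit]
	Wants=crio.service

	[Service]
	# The empty ExecStart= clears the command line inherited from kubelet.service
	# before the node-specific one is set.
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-353869 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.17

	[Install]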
	I0617 11:34:05.728761  148753 ssh_runner.go:195] Run: crio config
	I0617 11:34:05.760592  148753 command_runner.go:130] ! time="2024-06-17 11:34:05.740729828Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0617 11:34:05.766338  148753 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0617 11:34:05.773866  148753 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0617 11:34:05.773890  148753 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0617 11:34:05.773901  148753 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0617 11:34:05.773904  148753 command_runner.go:130] > #
	I0617 11:34:05.773910  148753 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0617 11:34:05.773919  148753 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0617 11:34:05.773925  148753 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0617 11:34:05.773932  148753 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0617 11:34:05.773937  148753 command_runner.go:130] > # reload'.
	I0617 11:34:05.773944  148753 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0617 11:34:05.773950  148753 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0617 11:34:05.773958  148753 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0617 11:34:05.773966  148753 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0617 11:34:05.773970  148753 command_runner.go:130] > [crio]
	I0617 11:34:05.773976  148753 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0617 11:34:05.773983  148753 command_runner.go:130] > # containers images, in this directory.
	I0617 11:34:05.773987  148753 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0617 11:34:05.773995  148753 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0617 11:34:05.774003  148753 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0617 11:34:05.774010  148753 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from Root.
	I0617 11:34:05.774018  148753 command_runner.go:130] > # imagestore = ""
	I0617 11:34:05.774025  148753 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0617 11:34:05.774034  148753 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0617 11:34:05.774040  148753 command_runner.go:130] > storage_driver = "overlay"
	I0617 11:34:05.774048  148753 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0617 11:34:05.774054  148753 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0617 11:34:05.774061  148753 command_runner.go:130] > storage_option = [
	I0617 11:34:05.774065  148753 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0617 11:34:05.774071  148753 command_runner.go:130] > ]
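For reference, the root/runroot/storage_driver/storage_option values above map onto the containers-storage.conf(5) file the comments point to. A sketch of an equivalent stanza, purely for orientation (the node's actual /etc/containers/storage.conf is not captured in this log):
	# Equivalent containers-storage.conf stanza (sketch only; not captured in this log).
	[storage]
	driver = "overlay"
	runroot = "/var/run/containers/storage"
	graphroot = "/var/lib/containers/storage"

	[storage.options.overlay]
	mountopt = "nodev,metacopy=on"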
	I0617 11:34:05.774077  148753 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0617 11:34:05.774085  148753 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0617 11:34:05.774092  148753 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0617 11:34:05.774098  148753 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0617 11:34:05.774106  148753 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0617 11:34:05.774113  148753 command_runner.go:130] > # always happen on a node reboot
	I0617 11:34:05.774117  148753 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0617 11:34:05.774127  148753 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0617 11:34:05.774135  148753 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0617 11:34:05.774140  148753 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0617 11:34:05.774147  148753 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0617 11:34:05.774154  148753 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0617 11:34:05.774164  148753 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0617 11:34:05.774170  148753 command_runner.go:130] > # internal_wipe = true
	I0617 11:34:05.774178  148753 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0617 11:34:05.774185  148753 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0617 11:34:05.774191  148753 command_runner.go:130] > # internal_repair = false
	I0617 11:34:05.774196  148753 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0617 11:34:05.774206  148753 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0617 11:34:05.774214  148753 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0617 11:34:05.774222  148753 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0617 11:34:05.774232  148753 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0617 11:34:05.774239  148753 command_runner.go:130] > [crio.api]
	I0617 11:34:05.774244  148753 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0617 11:34:05.774251  148753 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0617 11:34:05.774256  148753 command_runner.go:130] > # IP address on which the stream server will listen.
	I0617 11:34:05.774264  148753 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0617 11:34:05.774273  148753 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0617 11:34:05.774281  148753 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0617 11:34:05.774285  148753 command_runner.go:130] > # stream_port = "0"
	I0617 11:34:05.774293  148753 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0617 11:34:05.774297  148753 command_runner.go:130] > # stream_enable_tls = false
	I0617 11:34:05.774305  148753 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0617 11:34:05.774312  148753 command_runner.go:130] > # stream_idle_timeout = ""
	I0617 11:34:05.774318  148753 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0617 11:34:05.774327  148753 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0617 11:34:05.774332  148753 command_runner.go:130] > # minutes.
	I0617 11:34:05.774338  148753 command_runner.go:130] > # stream_tls_cert = ""
	I0617 11:34:05.774344  148753 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0617 11:34:05.774352  148753 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0617 11:34:05.774358  148753 command_runner.go:130] > # stream_tls_key = ""
	I0617 11:34:05.774365  148753 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0617 11:34:05.774373  148753 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0617 11:34:05.774388  148753 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0617 11:34:05.774394  148753 command_runner.go:130] > # stream_tls_ca = ""
	I0617 11:34:05.774402  148753 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0617 11:34:05.774409  148753 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0617 11:34:05.774416  148753 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0617 11:34:05.774423  148753 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0617 11:34:05.774429  148753 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0617 11:34:05.774436  148753 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0617 11:34:05.774442  148753 command_runner.go:130] > [crio.runtime]
	I0617 11:34:05.774449  148753 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0617 11:34:05.774456  148753 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0617 11:34:05.774484  148753 command_runner.go:130] > # "nofile=1024:2048"
	I0617 11:34:05.774497  148753 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0617 11:34:05.774501  148753 command_runner.go:130] > # default_ulimits = [
	I0617 11:34:05.774507  148753 command_runner.go:130] > # ]
	I0617 11:34:05.774513  148753 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0617 11:34:05.774518  148753 command_runner.go:130] > # no_pivot = false
	I0617 11:34:05.774524  148753 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0617 11:34:05.774532  148753 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0617 11:34:05.774540  148753 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0617 11:34:05.774545  148753 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0617 11:34:05.774552  148753 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0617 11:34:05.774559  148753 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0617 11:34:05.774566  148753 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0617 11:34:05.774571  148753 command_runner.go:130] > # Cgroup setting for conmon
	I0617 11:34:05.774579  148753 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0617 11:34:05.774586  148753 command_runner.go:130] > conmon_cgroup = "pod"
	I0617 11:34:05.774592  148753 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0617 11:34:05.774599  148753 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0617 11:34:05.774606  148753 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0617 11:34:05.774611  148753 command_runner.go:130] > conmon_env = [
	I0617 11:34:05.774616  148753 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0617 11:34:05.774622  148753 command_runner.go:130] > ]
	I0617 11:34:05.774627  148753 command_runner.go:130] > # Additional environment variables to set for all the
	I0617 11:34:05.774634  148753 command_runner.go:130] > # containers. These are overridden if set in the
	I0617 11:34:05.774640  148753 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0617 11:34:05.774647  148753 command_runner.go:130] > # default_env = [
	I0617 11:34:05.774650  148753 command_runner.go:130] > # ]
	I0617 11:34:05.774657  148753 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0617 11:34:05.774667  148753 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0617 11:34:05.774673  148753 command_runner.go:130] > # selinux = false
	I0617 11:34:05.774679  148753 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0617 11:34:05.774687  148753 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0617 11:34:05.774693  148753 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0617 11:34:05.774699  148753 command_runner.go:130] > # seccomp_profile = ""
	I0617 11:34:05.774705  148753 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0617 11:34:05.774711  148753 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0617 11:34:05.774720  148753 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0617 11:34:05.774726  148753 command_runner.go:130] > # which might increase security.
	I0617 11:34:05.774730  148753 command_runner.go:130] > # This option is currently deprecated,
	I0617 11:34:05.774738  148753 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0617 11:34:05.774745  148753 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0617 11:34:05.774751  148753 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0617 11:34:05.774760  148753 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0617 11:34:05.774775  148753 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0617 11:34:05.774783  148753 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0617 11:34:05.774787  148753 command_runner.go:130] > # This option supports live configuration reload.
	I0617 11:34:05.774794  148753 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0617 11:34:05.774799  148753 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0617 11:34:05.774806  148753 command_runner.go:130] > # the cgroup blockio controller.
	I0617 11:34:05.774810  148753 command_runner.go:130] > # blockio_config_file = ""
	I0617 11:34:05.774819  148753 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0617 11:34:05.774825  148753 command_runner.go:130] > # blockio parameters.
	I0617 11:34:05.774829  148753 command_runner.go:130] > # blockio_reload = false
	I0617 11:34:05.774838  148753 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0617 11:34:05.774843  148753 command_runner.go:130] > # irqbalance daemon.
	I0617 11:34:05.774848  148753 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0617 11:34:05.774856  148753 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0617 11:34:05.774865  148753 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0617 11:34:05.774874  148753 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0617 11:34:05.774881  148753 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0617 11:34:05.774887  148753 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0617 11:34:05.774894  148753 command_runner.go:130] > # This option supports live configuration reload.
	I0617 11:34:05.774898  148753 command_runner.go:130] > # rdt_config_file = ""
	I0617 11:34:05.774904  148753 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0617 11:34:05.774910  148753 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0617 11:34:05.774928  148753 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0617 11:34:05.774935  148753 command_runner.go:130] > # separate_pull_cgroup = ""
	I0617 11:34:05.774941  148753 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0617 11:34:05.774950  148753 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0617 11:34:05.774956  148753 command_runner.go:130] > # will be added.
	I0617 11:34:05.774961  148753 command_runner.go:130] > # default_capabilities = [
	I0617 11:34:05.774967  148753 command_runner.go:130] > # 	"CHOWN",
	I0617 11:34:05.774971  148753 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0617 11:34:05.774977  148753 command_runner.go:130] > # 	"FSETID",
	I0617 11:34:05.774981  148753 command_runner.go:130] > # 	"FOWNER",
	I0617 11:34:05.774987  148753 command_runner.go:130] > # 	"SETGID",
	I0617 11:34:05.774990  148753 command_runner.go:130] > # 	"SETUID",
	I0617 11:34:05.774996  148753 command_runner.go:130] > # 	"SETPCAP",
	I0617 11:34:05.775000  148753 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0617 11:34:05.775006  148753 command_runner.go:130] > # 	"KILL",
	I0617 11:34:05.775009  148753 command_runner.go:130] > # ]
	I0617 11:34:05.775016  148753 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0617 11:34:05.775025  148753 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0617 11:34:05.775032  148753 command_runner.go:130] > # add_inheritable_capabilities = false
	I0617 11:34:05.775039  148753 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0617 11:34:05.775047  148753 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0617 11:34:05.775054  148753 command_runner.go:130] > default_sysctls = [
	I0617 11:34:05.775058  148753 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0617 11:34:05.775064  148753 command_runner.go:130] > ]
	I0617 11:34:05.775069  148753 command_runner.go:130] > # List of devices on the host that a
	I0617 11:34:05.775077  148753 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0617 11:34:05.775082  148753 command_runner.go:130] > # allowed_devices = [
	I0617 11:34:05.775085  148753 command_runner.go:130] > # 	"/dev/fuse",
	I0617 11:34:05.775091  148753 command_runner.go:130] > # ]
	I0617 11:34:05.775096  148753 command_runner.go:130] > # List of additional devices, specified as
	I0617 11:34:05.775105  148753 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0617 11:34:05.775113  148753 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0617 11:34:05.775121  148753 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0617 11:34:05.775125  148753 command_runner.go:130] > # additional_devices = [
	I0617 11:34:05.775130  148753 command_runner.go:130] > # ]
	I0617 11:34:05.775135  148753 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0617 11:34:05.775141  148753 command_runner.go:130] > # cdi_spec_dirs = [
	I0617 11:34:05.775145  148753 command_runner.go:130] > # 	"/etc/cdi",
	I0617 11:34:05.775151  148753 command_runner.go:130] > # 	"/var/run/cdi",
	I0617 11:34:05.775154  148753 command_runner.go:130] > # ]
	I0617 11:34:05.775161  148753 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0617 11:34:05.775169  148753 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0617 11:34:05.775173  148753 command_runner.go:130] > # Defaults to false.
	I0617 11:34:05.775178  148753 command_runner.go:130] > # device_ownership_from_security_context = false
	I0617 11:34:05.775187  148753 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0617 11:34:05.775195  148753 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0617 11:34:05.775199  148753 command_runner.go:130] > # hooks_dir = [
	I0617 11:34:05.775204  148753 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0617 11:34:05.775209  148753 command_runner.go:130] > # ]
	I0617 11:34:05.775215  148753 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0617 11:34:05.775223  148753 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0617 11:34:05.775228  148753 command_runner.go:130] > # its default mounts from the following two files:
	I0617 11:34:05.775233  148753 command_runner.go:130] > #
	I0617 11:34:05.775239  148753 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0617 11:34:05.775247  148753 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0617 11:34:05.775253  148753 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0617 11:34:05.775258  148753 command_runner.go:130] > #
	I0617 11:34:05.775264  148753 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0617 11:34:05.775272  148753 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0617 11:34:05.775280  148753 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0617 11:34:05.775287  148753 command_runner.go:130] > #      only add mounts it finds in this file.
	I0617 11:34:05.775291  148753 command_runner.go:130] > #
	I0617 11:34:05.775295  148753 command_runner.go:130] > # default_mounts_file = ""
	I0617 11:34:05.775302  148753 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0617 11:34:05.775308  148753 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0617 11:34:05.775314  148753 command_runner.go:130] > pids_limit = 1024
	I0617 11:34:05.775320  148753 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0617 11:34:05.775328  148753 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0617 11:34:05.775337  148753 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0617 11:34:05.775347  148753 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0617 11:34:05.775353  148753 command_runner.go:130] > # log_size_max = -1
	I0617 11:34:05.775359  148753 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0617 11:34:05.775366  148753 command_runner.go:130] > # log_to_journald = false
	I0617 11:34:05.775372  148753 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0617 11:34:05.775379  148753 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0617 11:34:05.775384  148753 command_runner.go:130] > # Path to directory for container attach sockets.
	I0617 11:34:05.775391  148753 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0617 11:34:05.775396  148753 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0617 11:34:05.775402  148753 command_runner.go:130] > # bind_mount_prefix = ""
	I0617 11:34:05.775410  148753 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0617 11:34:05.775416  148753 command_runner.go:130] > # read_only = false
	I0617 11:34:05.775422  148753 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0617 11:34:05.775430  148753 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0617 11:34:05.775434  148753 command_runner.go:130] > # live configuration reload.
	I0617 11:34:05.775438  148753 command_runner.go:130] > # log_level = "info"
	I0617 11:34:05.775444  148753 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0617 11:34:05.775471  148753 command_runner.go:130] > # This option supports live configuration reload.
	I0617 11:34:05.775481  148753 command_runner.go:130] > # log_filter = ""
	I0617 11:34:05.775487  148753 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0617 11:34:05.775496  148753 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0617 11:34:05.775502  148753 command_runner.go:130] > # separated by comma.
	I0617 11:34:05.775509  148753 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0617 11:34:05.775515  148753 command_runner.go:130] > # uid_mappings = ""
	I0617 11:34:05.775521  148753 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0617 11:34:05.775528  148753 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0617 11:34:05.775535  148753 command_runner.go:130] > # separated by comma.
	I0617 11:34:05.775543  148753 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0617 11:34:05.775549  148753 command_runner.go:130] > # gid_mappings = ""
	I0617 11:34:05.775555  148753 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0617 11:34:05.775563  148753 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0617 11:34:05.775569  148753 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0617 11:34:05.775579  148753 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0617 11:34:05.775584  148753 command_runner.go:130] > # minimum_mappable_uid = -1
	I0617 11:34:05.775590  148753 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0617 11:34:05.775598  148753 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0617 11:34:05.775606  148753 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0617 11:34:05.775613  148753 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0617 11:34:05.775628  148753 command_runner.go:130] > # minimum_mappable_gid = -1
	I0617 11:34:05.775634  148753 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0617 11:34:05.775641  148753 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0617 11:34:05.775649  148753 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0617 11:34:05.775655  148753 command_runner.go:130] > # ctr_stop_timeout = 30
	I0617 11:34:05.775661  148753 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0617 11:34:05.775669  148753 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0617 11:34:05.775673  148753 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0617 11:34:05.775681  148753 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0617 11:34:05.775687  148753 command_runner.go:130] > drop_infra_ctr = false
	I0617 11:34:05.775693  148753 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0617 11:34:05.775701  148753 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0617 11:34:05.775710  148753 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0617 11:34:05.775716  148753 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0617 11:34:05.775723  148753 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0617 11:34:05.775730  148753 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0617 11:34:05.775736  148753 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0617 11:34:05.775743  148753 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0617 11:34:05.775747  148753 command_runner.go:130] > # shared_cpuset = ""
	I0617 11:34:05.775755  148753 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0617 11:34:05.775762  148753 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0617 11:34:05.775770  148753 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0617 11:34:05.775778  148753 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0617 11:34:05.775783  148753 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0617 11:34:05.775790  148753 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0617 11:34:05.775796  148753 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0617 11:34:05.775802  148753 command_runner.go:130] > # enable_criu_support = false
	I0617 11:34:05.775807  148753 command_runner.go:130] > # Enable/disable the generation of the container,
	I0617 11:34:05.775815  148753 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0617 11:34:05.775819  148753 command_runner.go:130] > # enable_pod_events = false
	I0617 11:34:05.775827  148753 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0617 11:34:05.775842  148753 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0617 11:34:05.775848  148753 command_runner.go:130] > # default_runtime = "runc"
	I0617 11:34:05.775853  148753 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0617 11:34:05.775862  148753 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating it as a directory).
	I0617 11:34:05.775873  148753 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0617 11:34:05.775880  148753 command_runner.go:130] > # creation as a file is not desired either.
	I0617 11:34:05.775888  148753 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0617 11:34:05.775896  148753 command_runner.go:130] > # the hostname is being managed dynamically.
	I0617 11:34:05.775902  148753 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0617 11:34:05.775905  148753 command_runner.go:130] > # ]
	I0617 11:34:05.775911  148753 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0617 11:34:05.775919  148753 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0617 11:34:05.775927  148753 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0617 11:34:05.775934  148753 command_runner.go:130] > # Each entry in the table should follow the format:
	I0617 11:34:05.775937  148753 command_runner.go:130] > #
	I0617 11:34:05.775942  148753 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0617 11:34:05.775949  148753 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0617 11:34:05.775969  148753 command_runner.go:130] > # runtime_type = "oci"
	I0617 11:34:05.775976  148753 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0617 11:34:05.775981  148753 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0617 11:34:05.775987  148753 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0617 11:34:05.775992  148753 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0617 11:34:05.775998  148753 command_runner.go:130] > # monitor_env = []
	I0617 11:34:05.776003  148753 command_runner.go:130] > # privileged_without_host_devices = false
	I0617 11:34:05.776009  148753 command_runner.go:130] > # allowed_annotations = []
	I0617 11:34:05.776014  148753 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0617 11:34:05.776020  148753 command_runner.go:130] > # Where:
	I0617 11:34:05.776025  148753 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0617 11:34:05.776033  148753 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0617 11:34:05.776042  148753 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0617 11:34:05.776050  148753 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0617 11:34:05.776056  148753 command_runner.go:130] > #   in $PATH.
	I0617 11:34:05.776062  148753 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0617 11:34:05.776070  148753 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0617 11:34:05.776078  148753 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0617 11:34:05.776084  148753 command_runner.go:130] > #   state.
	I0617 11:34:05.776090  148753 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0617 11:34:05.776099  148753 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0617 11:34:05.776108  148753 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0617 11:34:05.776115  148753 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0617 11:34:05.776123  148753 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0617 11:34:05.776129  148753 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0617 11:34:05.776137  148753 command_runner.go:130] > #   The currently recognized values are:
	I0617 11:34:05.776143  148753 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0617 11:34:05.776152  148753 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0617 11:34:05.776160  148753 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0617 11:34:05.776169  148753 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0617 11:34:05.776178  148753 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0617 11:34:05.776187  148753 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0617 11:34:05.776196  148753 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0617 11:34:05.776201  148753 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0617 11:34:05.776209  148753 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0617 11:34:05.776217  148753 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0617 11:34:05.776223  148753 command_runner.go:130] > #   deprecated option "conmon".
	I0617 11:34:05.776230  148753 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0617 11:34:05.776237  148753 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0617 11:34:05.776243  148753 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0617 11:34:05.776250  148753 command_runner.go:130] > #   should be moved to the container's cgroup
	I0617 11:34:05.776256  148753 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0617 11:34:05.776264  148753 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0617 11:34:05.776270  148753 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0617 11:34:05.776277  148753 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0617 11:34:05.776280  148753 command_runner.go:130] > #
	I0617 11:34:05.776287  148753 command_runner.go:130] > # Using the seccomp notifier feature:
	I0617 11:34:05.776290  148753 command_runner.go:130] > #
	I0617 11:34:05.776296  148753 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0617 11:34:05.776304  148753 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0617 11:34:05.776310  148753 command_runner.go:130] > #
	I0617 11:34:05.776316  148753 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0617 11:34:05.776324  148753 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0617 11:34:05.776328  148753 command_runner.go:130] > #
	I0617 11:34:05.776334  148753 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0617 11:34:05.776339  148753 command_runner.go:130] > # feature.
	I0617 11:34:05.776342  148753 command_runner.go:130] > #
	I0617 11:34:05.776350  148753 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0617 11:34:05.776356  148753 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0617 11:34:05.776364  148753 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0617 11:34:05.776373  148753 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0617 11:34:05.776379  148753 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0617 11:34:05.776382  148753 command_runner.go:130] > #
	I0617 11:34:05.776390  148753 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0617 11:34:05.776396  148753 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0617 11:34:05.776401  148753 command_runner.go:130] > #
	I0617 11:34:05.776407  148753 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0617 11:34:05.776417  148753 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0617 11:34:05.776422  148753 command_runner.go:130] > #
	I0617 11:34:05.776428  148753 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0617 11:34:05.776436  148753 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0617 11:34:05.776440  148753 command_runner.go:130] > # limitation.
	I0617 11:34:05.776445  148753 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0617 11:34:05.776451  148753 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0617 11:34:05.776455  148753 command_runner.go:130] > runtime_type = "oci"
	I0617 11:34:05.776460  148753 command_runner.go:130] > runtime_root = "/run/runc"
	I0617 11:34:05.776464  148753 command_runner.go:130] > runtime_config_path = ""
	I0617 11:34:05.776468  148753 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0617 11:34:05.776475  148753 command_runner.go:130] > monitor_cgroup = "pod"
	I0617 11:34:05.776479  148753 command_runner.go:130] > monitor_exec_cgroup = ""
	I0617 11:34:05.776485  148753 command_runner.go:130] > monitor_env = [
	I0617 11:34:05.776491  148753 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0617 11:34:05.776496  148753 command_runner.go:130] > ]
	I0617 11:34:05.776501  148753 command_runner.go:130] > privileged_without_host_devices = false
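The comment block above documents the [crio.runtime.runtimes.<handler>] format, and runc is the only handler defined here. Purely as an illustration of that format, an additional handler entry (crun is hypothetical and not part of this configuration) would take the same shape:
	# Hypothetical second runtime handler; only runc is configured in this cluster.
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"   # assumed install path; resolved via $PATH if omitted
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	monitor_exec_cgroup = ""
	monitor_env = [
		"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	]
	privileged_without_host_devices = false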
	I0617 11:34:05.776509  148753 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0617 11:34:05.776517  148753 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0617 11:34:05.776525  148753 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0617 11:34:05.776533  148753 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0617 11:34:05.776542  148753 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0617 11:34:05.776550  148753 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0617 11:34:05.776558  148753 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0617 11:34:05.776567  148753 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0617 11:34:05.776573  148753 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0617 11:34:05.776580  148753 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0617 11:34:05.776583  148753 command_runner.go:130] > # Example:
	I0617 11:34:05.776588  148753 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0617 11:34:05.776592  148753 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0617 11:34:05.776596  148753 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0617 11:34:05.776601  148753 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0617 11:34:05.776604  148753 command_runner.go:130] > # cpuset = 0
	I0617 11:34:05.776607  148753 command_runner.go:130] > # cpushares = "0-1"
	I0617 11:34:05.776610  148753 command_runner.go:130] > # Where:
	I0617 11:34:05.776615  148753 command_runner.go:130] > # The workload name is workload-type.
	I0617 11:34:05.776621  148753 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0617 11:34:05.776627  148753 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0617 11:34:05.776632  148753 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0617 11:34:05.776639  148753 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0617 11:34:05.776646  148753 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0617 11:34:05.776651  148753 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0617 11:34:05.776660  148753 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0617 11:34:05.776666  148753 command_runner.go:130] > # Default value is set to true
	I0617 11:34:05.776670  148753 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0617 11:34:05.776676  148753 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0617 11:34:05.776683  148753 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0617 11:34:05.776687  148753 command_runner.go:130] > # Default value is set to 'false'
	I0617 11:34:05.776695  148753 command_runner.go:130] > # disable_hostport_mapping = false
	I0617 11:34:05.776701  148753 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0617 11:34:05.776706  148753 command_runner.go:130] > #
	I0617 11:34:05.776712  148753 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0617 11:34:05.776720  148753 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0617 11:34:05.776729  148753 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0617 11:34:05.776737  148753 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0617 11:34:05.776745  148753 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0617 11:34:05.776751  148753 command_runner.go:130] > [crio.image]
	I0617 11:34:05.776756  148753 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0617 11:34:05.776762  148753 command_runner.go:130] > # default_transport = "docker://"
	I0617 11:34:05.776772  148753 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0617 11:34:05.776780  148753 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0617 11:34:05.776784  148753 command_runner.go:130] > # global_auth_file = ""
	I0617 11:34:05.776791  148753 command_runner.go:130] > # The image used to instantiate infra containers.
	I0617 11:34:05.776796  148753 command_runner.go:130] > # This option supports live configuration reload.
	I0617 11:34:05.776803  148753 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0617 11:34:05.776825  148753 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0617 11:34:05.776836  148753 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0617 11:34:05.776843  148753 command_runner.go:130] > # This option supports live configuration reload.
	I0617 11:34:05.776848  148753 command_runner.go:130] > # pause_image_auth_file = ""
	I0617 11:34:05.776856  148753 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0617 11:34:05.776864  148753 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0617 11:34:05.776872  148753 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0617 11:34:05.776882  148753 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0617 11:34:05.776886  148753 command_runner.go:130] > # pause_command = "/pause"
	I0617 11:34:05.776894  148753 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0617 11:34:05.776902  148753 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0617 11:34:05.776910  148753 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0617 11:34:05.776919  148753 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0617 11:34:05.776927  148753 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0617 11:34:05.776935  148753 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0617 11:34:05.776942  148753 command_runner.go:130] > # pinned_images = [
	I0617 11:34:05.776945  148753 command_runner.go:130] > # ]
	I0617 11:34:05.776953  148753 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0617 11:34:05.776961  148753 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0617 11:34:05.776969  148753 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0617 11:34:05.776977  148753 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0617 11:34:05.776984  148753 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0617 11:34:05.776990  148753 command_runner.go:130] > # signature_policy = ""
	I0617 11:34:05.776995  148753 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0617 11:34:05.777003  148753 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0617 11:34:05.777011  148753 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0617 11:34:05.777020  148753 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0617 11:34:05.777026  148753 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0617 11:34:05.777033  148753 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0617 11:34:05.777039  148753 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0617 11:34:05.777047  148753 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0617 11:34:05.777050  148753 command_runner.go:130] > # changing them here.
	I0617 11:34:05.777057  148753 command_runner.go:130] > # insecure_registries = [
	I0617 11:34:05.777060  148753 command_runner.go:130] > # ]
	I0617 11:34:05.777069  148753 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0617 11:34:05.777076  148753 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0617 11:34:05.777080  148753 command_runner.go:130] > # image_volumes = "mkdir"
	I0617 11:34:05.777087  148753 command_runner.go:130] > # Temporary directory to use for storing big files
	I0617 11:34:05.777092  148753 command_runner.go:130] > # big_files_temporary_dir = ""
	I0617 11:34:05.777100  148753 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0617 11:34:05.777106  148753 command_runner.go:130] > # CNI plugins.
	I0617 11:34:05.777110  148753 command_runner.go:130] > [crio.network]
	I0617 11:34:05.777119  148753 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0617 11:34:05.777127  148753 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0617 11:34:05.777133  148753 command_runner.go:130] > # cni_default_network = ""
	I0617 11:34:05.777139  148753 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0617 11:34:05.777146  148753 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0617 11:34:05.777151  148753 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0617 11:34:05.777157  148753 command_runner.go:130] > # plugin_dirs = [
	I0617 11:34:05.777161  148753 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0617 11:34:05.777166  148753 command_runner.go:130] > # ]
	I0617 11:34:05.777172  148753 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0617 11:34:05.777177  148753 command_runner.go:130] > [crio.metrics]
	I0617 11:34:05.777182  148753 command_runner.go:130] > # Globally enable or disable metrics support.
	I0617 11:34:05.777188  148753 command_runner.go:130] > enable_metrics = true
	I0617 11:34:05.777193  148753 command_runner.go:130] > # Specify enabled metrics collectors.
	I0617 11:34:05.777199  148753 command_runner.go:130] > # Per default all metrics are enabled.
	I0617 11:34:05.777205  148753 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0617 11:34:05.777213  148753 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0617 11:34:05.777222  148753 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0617 11:34:05.777228  148753 command_runner.go:130] > # metrics_collectors = [
	I0617 11:34:05.777232  148753 command_runner.go:130] > # 	"operations",
	I0617 11:34:05.777238  148753 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0617 11:34:05.777243  148753 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0617 11:34:05.777249  148753 command_runner.go:130] > # 	"operations_errors",
	I0617 11:34:05.777253  148753 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0617 11:34:05.777260  148753 command_runner.go:130] > # 	"image_pulls_by_name",
	I0617 11:34:05.777264  148753 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0617 11:34:05.777270  148753 command_runner.go:130] > # 	"image_pulls_failures",
	I0617 11:34:05.777275  148753 command_runner.go:130] > # 	"image_pulls_successes",
	I0617 11:34:05.777281  148753 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0617 11:34:05.777286  148753 command_runner.go:130] > # 	"image_layer_reuse",
	I0617 11:34:05.777292  148753 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0617 11:34:05.777296  148753 command_runner.go:130] > # 	"containers_oom_total",
	I0617 11:34:05.777303  148753 command_runner.go:130] > # 	"containers_oom",
	I0617 11:34:05.777307  148753 command_runner.go:130] > # 	"processes_defunct",
	I0617 11:34:05.777313  148753 command_runner.go:130] > # 	"operations_total",
	I0617 11:34:05.777317  148753 command_runner.go:130] > # 	"operations_latency_seconds",
	I0617 11:34:05.777324  148753 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0617 11:34:05.777329  148753 command_runner.go:130] > # 	"operations_errors_total",
	I0617 11:34:05.777336  148753 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0617 11:34:05.777340  148753 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0617 11:34:05.777347  148753 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0617 11:34:05.777352  148753 command_runner.go:130] > # 	"image_pulls_success_total",
	I0617 11:34:05.777358  148753 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0617 11:34:05.777362  148753 command_runner.go:130] > # 	"containers_oom_count_total",
	I0617 11:34:05.777369  148753 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0617 11:34:05.777373  148753 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0617 11:34:05.777377  148753 command_runner.go:130] > # ]
	I0617 11:34:05.777383  148753 command_runner.go:130] > # The port on which the metrics server will listen.
	I0617 11:34:05.777388  148753 command_runner.go:130] > # metrics_port = 9090
	I0617 11:34:05.777392  148753 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0617 11:34:05.777399  148753 command_runner.go:130] > # metrics_socket = ""
	I0617 11:34:05.777404  148753 command_runner.go:130] > # The certificate for the secure metrics server.
	I0617 11:34:05.777412  148753 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0617 11:34:05.777420  148753 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0617 11:34:05.777427  148753 command_runner.go:130] > # certificate on any modification event.
	I0617 11:34:05.777431  148753 command_runner.go:130] > # metrics_cert = ""
	I0617 11:34:05.777438  148753 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0617 11:34:05.777443  148753 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0617 11:34:05.777449  148753 command_runner.go:130] > # metrics_key = ""
	I0617 11:34:05.777454  148753 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0617 11:34:05.777460  148753 command_runner.go:130] > [crio.tracing]
	I0617 11:34:05.777465  148753 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0617 11:34:05.777469  148753 command_runner.go:130] > # enable_tracing = false
	I0617 11:34:05.777474  148753 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0617 11:34:05.777481  148753 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0617 11:34:05.777487  148753 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0617 11:34:05.777494  148753 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0617 11:34:05.777498  148753 command_runner.go:130] > # CRI-O NRI configuration.
	I0617 11:34:05.777504  148753 command_runner.go:130] > [crio.nri]
	I0617 11:34:05.777508  148753 command_runner.go:130] > # Globally enable or disable NRI.
	I0617 11:34:05.777514  148753 command_runner.go:130] > # enable_nri = false
	I0617 11:34:05.777518  148753 command_runner.go:130] > # NRI socket to listen on.
	I0617 11:34:05.777525  148753 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0617 11:34:05.777530  148753 command_runner.go:130] > # NRI plugin directory to use.
	I0617 11:34:05.777537  148753 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0617 11:34:05.777542  148753 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0617 11:34:05.777548  148753 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0617 11:34:05.777554  148753 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0617 11:34:05.777560  148753 command_runner.go:130] > # nri_disable_connections = false
	I0617 11:34:05.777565  148753 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0617 11:34:05.777572  148753 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0617 11:34:05.777577  148753 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0617 11:34:05.777584  148753 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0617 11:34:05.777589  148753 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0617 11:34:05.777595  148753 command_runner.go:130] > [crio.stats]
	I0617 11:34:05.777601  148753 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0617 11:34:05.777609  148753 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0617 11:34:05.777615  148753 command_runner.go:130] > # stats_collection_period = 0
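A minimal sketch of what making these commented defaults explicit could look like, e.g. in a drop-in such as /etc/crio/crio.conf.d/99-example.conf (a hypothetical path, not something minikube writes); every value below is restated from the dump above:

	[crio.network]
	# Defaults from the dump: the first network found in network_dir is used when cni_default_network is unset.
	network_dir = "/etc/cni/net.d/"
	plugin_dirs = ["/opt/cni/bin/"]

	[crio.metrics]
	# Metrics are enabled in this profile; 9090 is the documented default port.
	enable_metrics = true
	metrics_port = 9090

	[crio.stats]
	# 0 means pod/container stats are collected on demand rather than periodically.
	stats_collection_period = 0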
	I0617 11:34:05.777755  148753 cni.go:84] Creating CNI manager for ""
	I0617 11:34:05.777772  148753 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0617 11:34:05.777784  148753 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 11:34:05.777810  148753 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.17 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-353869 NodeName:multinode-353869 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.17"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.17 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0617 11:34:05.777939  148753 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.17
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-353869"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.17
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.17"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
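For comparison with the generated manifest above, the stock defaults for the same API objects can be printed with the kubeadm binary that minikube stages on the node (a sketch; the flag set assumes kubeadm v1.30.x):

	# Print upstream defaults for InitConfiguration/ClusterConfiguration.
	sudo /var/lib/minikube/binaries/v1.30.1/kubeadm config print init-defaults
	# Include the kubelet and kube-proxy component configs as well.
	sudo /var/lib/minikube/binaries/v1.30.1/kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration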
	I0617 11:34:05.778001  148753 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0617 11:34:05.788386  148753 command_runner.go:130] > kubeadm
	I0617 11:34:05.788403  148753 command_runner.go:130] > kubectl
	I0617 11:34:05.788408  148753 command_runner.go:130] > kubelet
	I0617 11:34:05.788765  148753 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 11:34:05.788814  148753 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 11:34:05.798341  148753 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0617 11:34:05.814559  148753 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 11:34:05.830635  148753 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0617 11:34:05.846833  148753 ssh_runner.go:195] Run: grep 192.168.39.17	control-plane.minikube.internal$ /etc/hosts
	I0617 11:34:05.850680  148753 command_runner.go:130] > 192.168.39.17	control-plane.minikube.internal
	I0617 11:34:05.850755  148753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:34:05.987496  148753 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 11:34:06.002427  148753 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/multinode-353869 for IP: 192.168.39.17
	I0617 11:34:06.002448  148753 certs.go:194] generating shared ca certs ...
	I0617 11:34:06.002474  148753 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:34:06.002644  148753 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 11:34:06.002680  148753 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 11:34:06.002689  148753 certs.go:256] generating profile certs ...
	I0617 11:34:06.002765  148753 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/multinode-353869/client.key
	I0617 11:34:06.002821  148753 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/multinode-353869/apiserver.key.ffe5146b
	I0617 11:34:06.002853  148753 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/multinode-353869/proxy-client.key
	I0617 11:34:06.002865  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0617 11:34:06.002876  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0617 11:34:06.002889  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0617 11:34:06.002899  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0617 11:34:06.002910  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/multinode-353869/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0617 11:34:06.002923  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/multinode-353869/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0617 11:34:06.002935  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/multinode-353869/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0617 11:34:06.002945  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/multinode-353869/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0617 11:34:06.002993  148753 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 11:34:06.003018  148753 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 11:34:06.003028  148753 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 11:34:06.003055  148753 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 11:34:06.003077  148753 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 11:34:06.003097  148753 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 11:34:06.003136  148753 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 11:34:06.003160  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> /usr/share/ca-certificates/1201742.pem
	I0617 11:34:06.003173  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:34:06.003184  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem -> /usr/share/ca-certificates/120174.pem
	I0617 11:34:06.003801  148753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 11:34:06.029586  148753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 11:34:06.053181  148753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 11:34:06.077965  148753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 11:34:06.101778  148753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/multinode-353869/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0617 11:34:06.125114  148753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/multinode-353869/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0617 11:34:06.150937  148753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/multinode-353869/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 11:34:06.174235  148753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/multinode-353869/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0617 11:34:06.197748  148753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 11:34:06.221325  148753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 11:34:06.244293  148753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 11:34:06.267533  148753 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 11:34:06.283878  148753 ssh_runner.go:195] Run: openssl version
	I0617 11:34:06.289469  148753 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0617 11:34:06.289649  148753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 11:34:06.300994  148753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 11:34:06.305282  148753 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 11:34:06.305485  148753 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 11:34:06.305521  148753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 11:34:06.310965  148753 command_runner.go:130] > 3ec20f2e
	I0617 11:34:06.311028  148753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 11:34:06.320734  148753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 11:34:06.330902  148753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:34:06.335130  148753 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:34:06.335234  148753 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:34:06.335276  148753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:34:06.340720  148753 command_runner.go:130] > b5213941
	I0617 11:34:06.340798  148753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 11:34:06.349388  148753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 11:34:06.359297  148753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 11:34:06.363605  148753 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 11:34:06.363673  148753 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 11:34:06.363717  148753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 11:34:06.393186  148753 command_runner.go:130] > 51391683
	I0617 11:34:06.393567  148753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
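The hash-and-link steps above follow OpenSSL's hashed CA directory convention: openssl x509 -hash prints the certificate's subject hash, and linking the PEM as /etc/ssl/certs/<hash>.0 lets TLS clients that scan that directory find it (the .0 suffix disambiguates hash collisions). The same step done by hand, reusing the minikubeCA path from the log:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"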
	I0617 11:34:06.403148  148753 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 11:34:06.407533  148753 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 11:34:06.407551  148753 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0617 11:34:06.407557  148753 command_runner.go:130] > Device: 253,1	Inode: 6292502     Links: 1
	I0617 11:34:06.407563  148753 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0617 11:34:06.407569  148753 command_runner.go:130] > Access: 2024-06-17 11:27:56.110986677 +0000
	I0617 11:34:06.407573  148753 command_runner.go:130] > Modify: 2024-06-17 11:27:56.110986677 +0000
	I0617 11:34:06.407578  148753 command_runner.go:130] > Change: 2024-06-17 11:27:56.110986677 +0000
	I0617 11:34:06.407583  148753 command_runner.go:130] >  Birth: 2024-06-17 11:27:56.110986677 +0000
	I0617 11:34:06.407623  148753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 11:34:06.413188  148753 command_runner.go:130] > Certificate will not expire
	I0617 11:34:06.413244  148753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 11:34:06.419035  148753 command_runner.go:130] > Certificate will not expire
	I0617 11:34:06.419074  148753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 11:34:06.424464  148753 command_runner.go:130] > Certificate will not expire
	I0617 11:34:06.424512  148753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 11:34:06.429903  148753 command_runner.go:130] > Certificate will not expire
	I0617 11:34:06.429977  148753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 11:34:06.435144  148753 command_runner.go:130] > Certificate will not expire
	I0617 11:34:06.435346  148753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0617 11:34:06.440691  148753 command_runner.go:130] > Certificate will not expire
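The -checkend 86400 probes above return exit code 0 and print "Certificate will not expire" only if a certificate stays valid for at least another 86400 seconds (24 hours). The same check run manually against one of the certs listed above:

	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"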
	I0617 11:34:06.440750  148753 kubeadm.go:391] StartCluster: {Name:multinode-353869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
1 ClusterName:multinode-353869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.46 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.138 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:34:06.440891  148753 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 11:34:06.440957  148753 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 11:34:06.478235  148753 command_runner.go:130] > bb5cdb2e77c18dfb4033f073b3fcc0409800a764db18e2e93eac517885f5dbe4
	I0617 11:34:06.478261  148753 command_runner.go:130] > c1209b62c2e74e6bdf46e660a5944ebf4572603fe1e3f6125bd6533f824858fb
	I0617 11:34:06.478266  148753 command_runner.go:130] > f01b6f8d67c6a06c273316e91a016f1dda9bccd08a3b9f130e3fa18000e3f918
	I0617 11:34:06.478273  148753 command_runner.go:130] > 788f3e95f1389861634b7c167ecc4ed0481a5b23af544e031699d17b73670fc8
	I0617 11:34:06.478278  148753 command_runner.go:130] > e2daedb04756afc271789d6e861aa2906d06a65ced85f3593810d3b7c83242b7
	I0617 11:34:06.478284  148753 command_runner.go:130] > cf374fea65b02f5ed17deacbbfaa890808652f70898fb22613a2aada2d9d182d
	I0617 11:34:06.478289  148753 command_runner.go:130] > 5ab681386325c039d54197059416078c59182aa87b148cd254a9ab95e67be20e
	I0617 11:34:06.478297  148753 command_runner.go:130] > 920ea6bfb6321ca417761a4aacfc34eca33f282901baef10e5ab4e211b318908
	I0617 11:34:06.479768  148753 cri.go:89] found id: "bb5cdb2e77c18dfb4033f073b3fcc0409800a764db18e2e93eac517885f5dbe4"
	I0617 11:34:06.479793  148753 cri.go:89] found id: "c1209b62c2e74e6bdf46e660a5944ebf4572603fe1e3f6125bd6533f824858fb"
	I0617 11:34:06.479799  148753 cri.go:89] found id: "f01b6f8d67c6a06c273316e91a016f1dda9bccd08a3b9f130e3fa18000e3f918"
	I0617 11:34:06.479826  148753 cri.go:89] found id: "788f3e95f1389861634b7c167ecc4ed0481a5b23af544e031699d17b73670fc8"
	I0617 11:34:06.479835  148753 cri.go:89] found id: "e2daedb04756afc271789d6e861aa2906d06a65ced85f3593810d3b7c83242b7"
	I0617 11:34:06.479840  148753 cri.go:89] found id: "cf374fea65b02f5ed17deacbbfaa890808652f70898fb22613a2aada2d9d182d"
	I0617 11:34:06.479844  148753 cri.go:89] found id: "5ab681386325c039d54197059416078c59182aa87b148cd254a9ab95e67be20e"
	I0617 11:34:06.479848  148753 cri.go:89] found id: "920ea6bfb6321ca417761a4aacfc34eca33f282901baef10e5ab4e211b318908"
	I0617 11:34:06.479868  148753 cri.go:89] found id: ""
	I0617 11:34:06.479923  148753 ssh_runner.go:195] Run: sudo runc list -f json
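The container discovery step above can be reproduced on the node with the same crictl invocation, pointing crictl at the CRI-O socket used throughout this run (a sketch):

	# List kube-system containers in any state, IDs only, exactly as minikube does above.
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a --quiet --label io.kubernetes.pod.namespace=kube-system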
	
	
	==> CRI-O <==
	Jun 17 11:35:27 multinode-353869 crio[2886]: time="2024-06-17 11:35:27.159468180Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718624127159449515,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9f24db67-b24b-4445-9edd-e141a352e7fc name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:35:27 multinode-353869 crio[2886]: time="2024-06-17 11:35:27.160160052Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dac1836f-d66b-4f2d-81ed-a3bae4c38071 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:35:27 multinode-353869 crio[2886]: time="2024-06-17 11:35:27.160210757Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dac1836f-d66b-4f2d-81ed-a3bae4c38071 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:35:27 multinode-353869 crio[2886]: time="2024-06-17 11:35:27.160536067Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4bf62a6b9c5460fc7170bfcabc3b8873429afdc9358e70ed2c0cfc8e13b2909a,PodSandboxId:c286b6ab5ba224c15b8257108630c758e3d39676f06578be2a885368f5e5ce11,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718624087083315856,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9q9xp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3b3438b1-3078-4c3d-918d-7ca302c631df,},Annotations:map[string]string{io.kubernetes.container.hash: 1f994b5a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c99311a5f2af018094cefffa1d06ab60bd7f9c78720ef0903446410b62777ab1,PodSandboxId:46330c87989ecc2043660078c62b2d9e6d3e75daaa8bbbd1a068fc5abb4a36cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718624053565286943,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8b72m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0e82fc8-8881-4fdd-9f8e-5677e69b8c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 2ede9293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9296d7496cc7bd08e1aae16d5835d95d67a137b8155cc6ba963ea9ecee410394,PodSandboxId:09449e67a955075eb2ae95ddfe689cf65d001ce3e0d00d9910b566163db37637,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718624053461613631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v7jgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7ab078-568f-4d93-a744-f6abffe8e025,},Annotations:map[string]string{io.kubernetes.container.hash: 6fd799b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e4df51e0870da34508fa6131d228646bb0e4b6f39ea875e4cfc0bab53523821,PodSandboxId:79baefacd0e097350fc4e7414620cacf414d39e32cd1f6262192dca1802dec04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718624053376189920,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lh4bq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad51975b-c6bc-4708-8988-004224379e4e,},Annotations:map[string]
string{io.kubernetes.container.hash: da5fef91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8b63dfaed3cb7550393f5b31b562d7204d27fe2679022292b85d9af81fe12da,PodSandboxId:0be7c33d595ba5e06bec4b3dbc4e40b8d1e5971da79c0d0c74096391b17cf465,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718624053413045791,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dea9a1-1f60-4a87-b8c1-9b0ecc3742c7,},Annotations:map[string]string{io.ku
bernetes.container.hash: 647d8c6e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49d345565617207048c355eb2fd02d84dc1e79374c65582908fc5c31efb6ace2,PodSandboxId:c61391d3fb7623c31e783d7faa3c91d1379b9d6bdbd9ce72dcee6600422fb8ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718624048549499708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e7e071f006daa8b88c2389f822775e,},Annotations:map[string]string{io.kubernetes.container.hash: 376600c6,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d96338c1781a120bd164d0cf1ee12bf47c1e4614d990ecef15f2019ec1d01a74,PodSandboxId:0b42b8b7168ef0d5bd5cf5c756f8139d59eb896f84e1efefda6582eefc09b322,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718624048560235961,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03547e44edb5abf39796dbbc604ea57d,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5521b788f9e29eacd3cdb54d74dda1ede012f42edf74635592934f0b5fd94be,PodSandboxId:629147fd888d740082b57e5ebe69088e9078bfd764cf7c73bf14b94a8f0f1667,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718624048555574662,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4c9fbf3605584b11404e4c74d684666,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa049dc2107d59ff0e82cf0a7a6b0a809afe251d9199dc55b0ba7a182e31ea78,PodSandboxId:307f2884cb6d3ac7e70a5f2dc45ce3540923e4152f2d914a18827464c407ecf4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718624048457260321,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccdb7a72133ec1402678e9ea7bf51f8d,},Annotations:map[string]string{io.kubernetes.container.hash: 64cb03a4,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db87229d8ad6756e3a7db9952290dffb752b7dbe5563ef38ce3ec63e639e87b8,PodSandboxId:2879eb5662c5dc9c90ae5eb1b5e4280cb1f5af7eec47cc04f68c4b60065364fd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718623748160187560,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9q9xp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3b3438b1-3078-4c3d-918d-7ca302c631df,},Annotations:map[string]string{io.kubernetes.container.hash: 1f994b5a,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1209b62c2e74e6bdf46e660a5944ebf4572603fe1e3f6125bd6533f824858fb,PodSandboxId:e36a7741cda4f83965573401b89d4d9c35ae2374a327ab7c64df3c89765bc519,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718623704020718026,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dea9a1-1f60-4a87-b8c1-9b0ecc3742c7,},Annotations:map[string]string{io.kubernetes.container.hash: 647d8c6e,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5cdb2e77c18dfb4033f073b3fcc0409800a764db18e2e93eac517885f5dbe4,PodSandboxId:fd7b9c01c57a4c7e59d710d65496d12404568692cb323860531374a8f9576c2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718623704047491037,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v7jgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7ab078-568f-4d93-a744-f6abffe8e025,},Annotations:map[string]string{io.kubernetes.container.hash: 6fd799b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f01b6f8d67c6a06c273316e91a016f1dda9bccd08a3b9f130e3fa18000e3f918,PodSandboxId:42e410b99e9861aaf45f17d31f858cb722e68467cb0591c4a84f8b1158560219,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718623702755119502,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8b72m,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: f0e82fc8-8881-4fdd-9f8e-5677e69b8c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 2ede9293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:788f3e95f1389861634b7c167ecc4ed0481a5b23af544e031699d17b73670fc8,PodSandboxId:c634101d4334a5e4e7060cc4fa47040a6b8fdbc4a9f0317d22511ddd39517dc7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718623700878267082,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lh4bq,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ad51975b-c6bc-4708-8988-004224379e4e,},Annotations:map[string]string{io.kubernetes.container.hash: da5fef91,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2daedb04756afc271789d6e861aa2906d06a65ced85f3593810d3b7c83242b7,PodSandboxId:3e6c167dc3d5a7ec1a743469227888109903a3f7c6396ffbb469dd96425fc126,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718623679799019295,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e7e071f006daa8b88c2389f82277
5e,},Annotations:map[string]string{io.kubernetes.container.hash: 376600c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ab681386325c039d54197059416078c59182aa87b148cd254a9ab95e67be20e,PodSandboxId:060b3c7b4cc6c8a64d6da3ecf010956b4746bc8d16b153949312db9fa7aa845b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718623679768119203,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03547e44edb5abf39796dbbc604ea57d,},Annotation
s:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf374fea65b02f5ed17deacbbfaa890808652f70898fb22613a2aada2d9d182d,PodSandboxId:02ebcc677be0b84fdb13e9bee3c75491f5a9fce544fd304b98324e7555ca4a39,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718623679769989035,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4c9fbf3605584b11404e4c74d684666,
},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920ea6bfb6321ca417761a4aacfc34eca33f282901baef10e5ab4e211b318908,PodSandboxId:4a878aa3e3733782505f0a73f31dc066754029efb7cfd43f8dee63152afaed15,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718623679688156743,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccdb7a72133ec1402678e9ea7bf51f8d,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 64cb03a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dac1836f-d66b-4f2d-81ed-a3bae4c38071 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:35:27 multinode-353869 crio[2886]: time="2024-06-17 11:35:27.207697955Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c2e3d457-55bc-4991-90f4-a8f0eb56ce8f name=/runtime.v1.RuntimeService/Version
	Jun 17 11:35:27 multinode-353869 crio[2886]: time="2024-06-17 11:35:27.207781083Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c2e3d457-55bc-4991-90f4-a8f0eb56ce8f name=/runtime.v1.RuntimeService/Version
	Jun 17 11:35:27 multinode-353869 crio[2886]: time="2024-06-17 11:35:27.208703318Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4536c440-377d-4bd6-8212-568f187e02cc name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:35:27 multinode-353869 crio[2886]: time="2024-06-17 11:35:27.209109993Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718624127209087881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4536c440-377d-4bd6-8212-568f187e02cc name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:35:27 multinode-353869 crio[2886]: time="2024-06-17 11:35:27.209562561Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=589980a0-2cdb-4a1c-be86-4a27c5759e60 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:35:27 multinode-353869 crio[2886]: time="2024-06-17 11:35:27.209613206Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=589980a0-2cdb-4a1c-be86-4a27c5759e60 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:35:27 multinode-353869 crio[2886]: time="2024-06-17 11:35:27.210350685Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4bf62a6b9c5460fc7170bfcabc3b8873429afdc9358e70ed2c0cfc8e13b2909a,PodSandboxId:c286b6ab5ba224c15b8257108630c758e3d39676f06578be2a885368f5e5ce11,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718624087083315856,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9q9xp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3b3438b1-3078-4c3d-918d-7ca302c631df,},Annotations:map[string]string{io.kubernetes.container.hash: 1f994b5a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c99311a5f2af018094cefffa1d06ab60bd7f9c78720ef0903446410b62777ab1,PodSandboxId:46330c87989ecc2043660078c62b2d9e6d3e75daaa8bbbd1a068fc5abb4a36cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718624053565286943,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8b72m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0e82fc8-8881-4fdd-9f8e-5677e69b8c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 2ede9293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9296d7496cc7bd08e1aae16d5835d95d67a137b8155cc6ba963ea9ecee410394,PodSandboxId:09449e67a955075eb2ae95ddfe689cf65d001ce3e0d00d9910b566163db37637,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718624053461613631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v7jgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7ab078-568f-4d93-a744-f6abffe8e025,},Annotations:map[string]string{io.kubernetes.container.hash: 6fd799b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e4df51e0870da34508fa6131d228646bb0e4b6f39ea875e4cfc0bab53523821,PodSandboxId:79baefacd0e097350fc4e7414620cacf414d39e32cd1f6262192dca1802dec04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718624053376189920,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lh4bq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad51975b-c6bc-4708-8988-004224379e4e,},Annotations:map[string]
string{io.kubernetes.container.hash: da5fef91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8b63dfaed3cb7550393f5b31b562d7204d27fe2679022292b85d9af81fe12da,PodSandboxId:0be7c33d595ba5e06bec4b3dbc4e40b8d1e5971da79c0d0c74096391b17cf465,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718624053413045791,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dea9a1-1f60-4a87-b8c1-9b0ecc3742c7,},Annotations:map[string]string{io.ku
bernetes.container.hash: 647d8c6e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49d345565617207048c355eb2fd02d84dc1e79374c65582908fc5c31efb6ace2,PodSandboxId:c61391d3fb7623c31e783d7faa3c91d1379b9d6bdbd9ce72dcee6600422fb8ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718624048549499708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e7e071f006daa8b88c2389f822775e,},Annotations:map[string]string{io.kubernetes.container.hash: 376600c6,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d96338c1781a120bd164d0cf1ee12bf47c1e4614d990ecef15f2019ec1d01a74,PodSandboxId:0b42b8b7168ef0d5bd5cf5c756f8139d59eb896f84e1efefda6582eefc09b322,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718624048560235961,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03547e44edb5abf39796dbbc604ea57d,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5521b788f9e29eacd3cdb54d74dda1ede012f42edf74635592934f0b5fd94be,PodSandboxId:629147fd888d740082b57e5ebe69088e9078bfd764cf7c73bf14b94a8f0f1667,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718624048555574662,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4c9fbf3605584b11404e4c74d684666,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa049dc2107d59ff0e82cf0a7a6b0a809afe251d9199dc55b0ba7a182e31ea78,PodSandboxId:307f2884cb6d3ac7e70a5f2dc45ce3540923e4152f2d914a18827464c407ecf4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718624048457260321,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccdb7a72133ec1402678e9ea7bf51f8d,},Annotations:map[string]string{io.kubernetes.container.hash: 64cb03a4,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db87229d8ad6756e3a7db9952290dffb752b7dbe5563ef38ce3ec63e639e87b8,PodSandboxId:2879eb5662c5dc9c90ae5eb1b5e4280cb1f5af7eec47cc04f68c4b60065364fd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718623748160187560,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9q9xp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3b3438b1-3078-4c3d-918d-7ca302c631df,},Annotations:map[string]string{io.kubernetes.container.hash: 1f994b5a,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1209b62c2e74e6bdf46e660a5944ebf4572603fe1e3f6125bd6533f824858fb,PodSandboxId:e36a7741cda4f83965573401b89d4d9c35ae2374a327ab7c64df3c89765bc519,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718623704020718026,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dea9a1-1f60-4a87-b8c1-9b0ecc3742c7,},Annotations:map[string]string{io.kubernetes.container.hash: 647d8c6e,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5cdb2e77c18dfb4033f073b3fcc0409800a764db18e2e93eac517885f5dbe4,PodSandboxId:fd7b9c01c57a4c7e59d710d65496d12404568692cb323860531374a8f9576c2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718623704047491037,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v7jgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7ab078-568f-4d93-a744-f6abffe8e025,},Annotations:map[string]string{io.kubernetes.container.hash: 6fd799b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f01b6f8d67c6a06c273316e91a016f1dda9bccd08a3b9f130e3fa18000e3f918,PodSandboxId:42e410b99e9861aaf45f17d31f858cb722e68467cb0591c4a84f8b1158560219,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718623702755119502,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8b72m,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: f0e82fc8-8881-4fdd-9f8e-5677e69b8c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 2ede9293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:788f3e95f1389861634b7c167ecc4ed0481a5b23af544e031699d17b73670fc8,PodSandboxId:c634101d4334a5e4e7060cc4fa47040a6b8fdbc4a9f0317d22511ddd39517dc7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718623700878267082,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lh4bq,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ad51975b-c6bc-4708-8988-004224379e4e,},Annotations:map[string]string{io.kubernetes.container.hash: da5fef91,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2daedb04756afc271789d6e861aa2906d06a65ced85f3593810d3b7c83242b7,PodSandboxId:3e6c167dc3d5a7ec1a743469227888109903a3f7c6396ffbb469dd96425fc126,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718623679799019295,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e7e071f006daa8b88c2389f82277
5e,},Annotations:map[string]string{io.kubernetes.container.hash: 376600c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ab681386325c039d54197059416078c59182aa87b148cd254a9ab95e67be20e,PodSandboxId:060b3c7b4cc6c8a64d6da3ecf010956b4746bc8d16b153949312db9fa7aa845b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718623679768119203,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03547e44edb5abf39796dbbc604ea57d,},Annotation
s:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf374fea65b02f5ed17deacbbfaa890808652f70898fb22613a2aada2d9d182d,PodSandboxId:02ebcc677be0b84fdb13e9bee3c75491f5a9fce544fd304b98324e7555ca4a39,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718623679769989035,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4c9fbf3605584b11404e4c74d684666,
},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920ea6bfb6321ca417761a4aacfc34eca33f282901baef10e5ab4e211b318908,PodSandboxId:4a878aa3e3733782505f0a73f31dc066754029efb7cfd43f8dee63152afaed15,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718623679688156743,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccdb7a72133ec1402678e9ea7bf51f8d,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 64cb03a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=589980a0-2cdb-4a1c-be86-4a27c5759e60 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:35:27 multinode-353869 crio[2886]: time="2024-06-17 11:35:27.253403434Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4f45a4e7-2c4c-4bc9-853e-e17266876035 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:35:27 multinode-353869 crio[2886]: time="2024-06-17 11:35:27.253516412Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4f45a4e7-2c4c-4bc9-853e-e17266876035 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:35:27 multinode-353869 crio[2886]: time="2024-06-17 11:35:27.255134850Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8ec1c984-319d-4d52-a133-bc3e033a8563 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:35:27 multinode-353869 crio[2886]: time="2024-06-17 11:35:27.255546898Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718624127255523707,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8ec1c984-319d-4d52-a133-bc3e033a8563 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:35:27 multinode-353869 crio[2886]: time="2024-06-17 11:35:27.256271794Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=518d811a-d7ec-429b-a0a4-94de8000c22d name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:35:27 multinode-353869 crio[2886]: time="2024-06-17 11:35:27.256352046Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=518d811a-d7ec-429b-a0a4-94de8000c22d name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:35:27 multinode-353869 crio[2886]: time="2024-06-17 11:35:27.256802275Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4bf62a6b9c5460fc7170bfcabc3b8873429afdc9358e70ed2c0cfc8e13b2909a,PodSandboxId:c286b6ab5ba224c15b8257108630c758e3d39676f06578be2a885368f5e5ce11,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718624087083315856,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9q9xp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3b3438b1-3078-4c3d-918d-7ca302c631df,},Annotations:map[string]string{io.kubernetes.container.hash: 1f994b5a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c99311a5f2af018094cefffa1d06ab60bd7f9c78720ef0903446410b62777ab1,PodSandboxId:46330c87989ecc2043660078c62b2d9e6d3e75daaa8bbbd1a068fc5abb4a36cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718624053565286943,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8b72m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0e82fc8-8881-4fdd-9f8e-5677e69b8c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 2ede9293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9296d7496cc7bd08e1aae16d5835d95d67a137b8155cc6ba963ea9ecee410394,PodSandboxId:09449e67a955075eb2ae95ddfe689cf65d001ce3e0d00d9910b566163db37637,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718624053461613631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v7jgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7ab078-568f-4d93-a744-f6abffe8e025,},Annotations:map[string]string{io.kubernetes.container.hash: 6fd799b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e4df51e0870da34508fa6131d228646bb0e4b6f39ea875e4cfc0bab53523821,PodSandboxId:79baefacd0e097350fc4e7414620cacf414d39e32cd1f6262192dca1802dec04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718624053376189920,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lh4bq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad51975b-c6bc-4708-8988-004224379e4e,},Annotations:map[string]
string{io.kubernetes.container.hash: da5fef91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8b63dfaed3cb7550393f5b31b562d7204d27fe2679022292b85d9af81fe12da,PodSandboxId:0be7c33d595ba5e06bec4b3dbc4e40b8d1e5971da79c0d0c74096391b17cf465,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718624053413045791,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dea9a1-1f60-4a87-b8c1-9b0ecc3742c7,},Annotations:map[string]string{io.ku
bernetes.container.hash: 647d8c6e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49d345565617207048c355eb2fd02d84dc1e79374c65582908fc5c31efb6ace2,PodSandboxId:c61391d3fb7623c31e783d7faa3c91d1379b9d6bdbd9ce72dcee6600422fb8ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718624048549499708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e7e071f006daa8b88c2389f822775e,},Annotations:map[string]string{io.kubernetes.container.hash: 376600c6,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d96338c1781a120bd164d0cf1ee12bf47c1e4614d990ecef15f2019ec1d01a74,PodSandboxId:0b42b8b7168ef0d5bd5cf5c756f8139d59eb896f84e1efefda6582eefc09b322,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718624048560235961,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03547e44edb5abf39796dbbc604ea57d,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5521b788f9e29eacd3cdb54d74dda1ede012f42edf74635592934f0b5fd94be,PodSandboxId:629147fd888d740082b57e5ebe69088e9078bfd764cf7c73bf14b94a8f0f1667,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718624048555574662,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4c9fbf3605584b11404e4c74d684666,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa049dc2107d59ff0e82cf0a7a6b0a809afe251d9199dc55b0ba7a182e31ea78,PodSandboxId:307f2884cb6d3ac7e70a5f2dc45ce3540923e4152f2d914a18827464c407ecf4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718624048457260321,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccdb7a72133ec1402678e9ea7bf51f8d,},Annotations:map[string]string{io.kubernetes.container.hash: 64cb03a4,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db87229d8ad6756e3a7db9952290dffb752b7dbe5563ef38ce3ec63e639e87b8,PodSandboxId:2879eb5662c5dc9c90ae5eb1b5e4280cb1f5af7eec47cc04f68c4b60065364fd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718623748160187560,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9q9xp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3b3438b1-3078-4c3d-918d-7ca302c631df,},Annotations:map[string]string{io.kubernetes.container.hash: 1f994b5a,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1209b62c2e74e6bdf46e660a5944ebf4572603fe1e3f6125bd6533f824858fb,PodSandboxId:e36a7741cda4f83965573401b89d4d9c35ae2374a327ab7c64df3c89765bc519,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718623704020718026,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dea9a1-1f60-4a87-b8c1-9b0ecc3742c7,},Annotations:map[string]string{io.kubernetes.container.hash: 647d8c6e,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5cdb2e77c18dfb4033f073b3fcc0409800a764db18e2e93eac517885f5dbe4,PodSandboxId:fd7b9c01c57a4c7e59d710d65496d12404568692cb323860531374a8f9576c2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718623704047491037,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v7jgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7ab078-568f-4d93-a744-f6abffe8e025,},Annotations:map[string]string{io.kubernetes.container.hash: 6fd799b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f01b6f8d67c6a06c273316e91a016f1dda9bccd08a3b9f130e3fa18000e3f918,PodSandboxId:42e410b99e9861aaf45f17d31f858cb722e68467cb0591c4a84f8b1158560219,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718623702755119502,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8b72m,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: f0e82fc8-8881-4fdd-9f8e-5677e69b8c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 2ede9293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:788f3e95f1389861634b7c167ecc4ed0481a5b23af544e031699d17b73670fc8,PodSandboxId:c634101d4334a5e4e7060cc4fa47040a6b8fdbc4a9f0317d22511ddd39517dc7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718623700878267082,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lh4bq,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ad51975b-c6bc-4708-8988-004224379e4e,},Annotations:map[string]string{io.kubernetes.container.hash: da5fef91,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2daedb04756afc271789d6e861aa2906d06a65ced85f3593810d3b7c83242b7,PodSandboxId:3e6c167dc3d5a7ec1a743469227888109903a3f7c6396ffbb469dd96425fc126,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718623679799019295,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e7e071f006daa8b88c2389f82277
5e,},Annotations:map[string]string{io.kubernetes.container.hash: 376600c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ab681386325c039d54197059416078c59182aa87b148cd254a9ab95e67be20e,PodSandboxId:060b3c7b4cc6c8a64d6da3ecf010956b4746bc8d16b153949312db9fa7aa845b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718623679768119203,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03547e44edb5abf39796dbbc604ea57d,},Annotation
s:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf374fea65b02f5ed17deacbbfaa890808652f70898fb22613a2aada2d9d182d,PodSandboxId:02ebcc677be0b84fdb13e9bee3c75491f5a9fce544fd304b98324e7555ca4a39,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718623679769989035,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4c9fbf3605584b11404e4c74d684666,
},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920ea6bfb6321ca417761a4aacfc34eca33f282901baef10e5ab4e211b318908,PodSandboxId:4a878aa3e3733782505f0a73f31dc066754029efb7cfd43f8dee63152afaed15,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718623679688156743,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccdb7a72133ec1402678e9ea7bf51f8d,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 64cb03a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=518d811a-d7ec-429b-a0a4-94de8000c22d name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:35:27 multinode-353869 crio[2886]: time="2024-06-17 11:35:27.301142479Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=51b55971-4ef6-4c87-9354-422646c4e365 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:35:27 multinode-353869 crio[2886]: time="2024-06-17 11:35:27.301249780Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=51b55971-4ef6-4c87-9354-422646c4e365 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:35:27 multinode-353869 crio[2886]: time="2024-06-17 11:35:27.302732309Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fb72a022-de71-4525-a9bb-ce64f86564b6 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:35:27 multinode-353869 crio[2886]: time="2024-06-17 11:35:27.303190294Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718624127303154554,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fb72a022-de71-4525-a9bb-ce64f86564b6 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:35:27 multinode-353869 crio[2886]: time="2024-06-17 11:35:27.303716792Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f61671ee-5818-411d-a838-12f38d766fef name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:35:27 multinode-353869 crio[2886]: time="2024-06-17 11:35:27.303868805Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f61671ee-5818-411d-a838-12f38d766fef name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:35:27 multinode-353869 crio[2886]: time="2024-06-17 11:35:27.304272970Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4bf62a6b9c5460fc7170bfcabc3b8873429afdc9358e70ed2c0cfc8e13b2909a,PodSandboxId:c286b6ab5ba224c15b8257108630c758e3d39676f06578be2a885368f5e5ce11,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718624087083315856,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9q9xp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3b3438b1-3078-4c3d-918d-7ca302c631df,},Annotations:map[string]string{io.kubernetes.container.hash: 1f994b5a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c99311a5f2af018094cefffa1d06ab60bd7f9c78720ef0903446410b62777ab1,PodSandboxId:46330c87989ecc2043660078c62b2d9e6d3e75daaa8bbbd1a068fc5abb4a36cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718624053565286943,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8b72m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0e82fc8-8881-4fdd-9f8e-5677e69b8c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 2ede9293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9296d7496cc7bd08e1aae16d5835d95d67a137b8155cc6ba963ea9ecee410394,PodSandboxId:09449e67a955075eb2ae95ddfe689cf65d001ce3e0d00d9910b566163db37637,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718624053461613631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v7jgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7ab078-568f-4d93-a744-f6abffe8e025,},Annotations:map[string]string{io.kubernetes.container.hash: 6fd799b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e4df51e0870da34508fa6131d228646bb0e4b6f39ea875e4cfc0bab53523821,PodSandboxId:79baefacd0e097350fc4e7414620cacf414d39e32cd1f6262192dca1802dec04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718624053376189920,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lh4bq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad51975b-c6bc-4708-8988-004224379e4e,},Annotations:map[string]
string{io.kubernetes.container.hash: da5fef91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8b63dfaed3cb7550393f5b31b562d7204d27fe2679022292b85d9af81fe12da,PodSandboxId:0be7c33d595ba5e06bec4b3dbc4e40b8d1e5971da79c0d0c74096391b17cf465,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718624053413045791,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dea9a1-1f60-4a87-b8c1-9b0ecc3742c7,},Annotations:map[string]string{io.ku
bernetes.container.hash: 647d8c6e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49d345565617207048c355eb2fd02d84dc1e79374c65582908fc5c31efb6ace2,PodSandboxId:c61391d3fb7623c31e783d7faa3c91d1379b9d6bdbd9ce72dcee6600422fb8ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718624048549499708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e7e071f006daa8b88c2389f822775e,},Annotations:map[string]string{io.kubernetes.container.hash: 376600c6,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d96338c1781a120bd164d0cf1ee12bf47c1e4614d990ecef15f2019ec1d01a74,PodSandboxId:0b42b8b7168ef0d5bd5cf5c756f8139d59eb896f84e1efefda6582eefc09b322,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718624048560235961,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03547e44edb5abf39796dbbc604ea57d,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5521b788f9e29eacd3cdb54d74dda1ede012f42edf74635592934f0b5fd94be,PodSandboxId:629147fd888d740082b57e5ebe69088e9078bfd764cf7c73bf14b94a8f0f1667,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718624048555574662,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4c9fbf3605584b11404e4c74d684666,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa049dc2107d59ff0e82cf0a7a6b0a809afe251d9199dc55b0ba7a182e31ea78,PodSandboxId:307f2884cb6d3ac7e70a5f2dc45ce3540923e4152f2d914a18827464c407ecf4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718624048457260321,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccdb7a72133ec1402678e9ea7bf51f8d,},Annotations:map[string]string{io.kubernetes.container.hash: 64cb03a4,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db87229d8ad6756e3a7db9952290dffb752b7dbe5563ef38ce3ec63e639e87b8,PodSandboxId:2879eb5662c5dc9c90ae5eb1b5e4280cb1f5af7eec47cc04f68c4b60065364fd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718623748160187560,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9q9xp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3b3438b1-3078-4c3d-918d-7ca302c631df,},Annotations:map[string]string{io.kubernetes.container.hash: 1f994b5a,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1209b62c2e74e6bdf46e660a5944ebf4572603fe1e3f6125bd6533f824858fb,PodSandboxId:e36a7741cda4f83965573401b89d4d9c35ae2374a327ab7c64df3c89765bc519,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718623704020718026,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dea9a1-1f60-4a87-b8c1-9b0ecc3742c7,},Annotations:map[string]string{io.kubernetes.container.hash: 647d8c6e,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5cdb2e77c18dfb4033f073b3fcc0409800a764db18e2e93eac517885f5dbe4,PodSandboxId:fd7b9c01c57a4c7e59d710d65496d12404568692cb323860531374a8f9576c2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718623704047491037,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v7jgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7ab078-568f-4d93-a744-f6abffe8e025,},Annotations:map[string]string{io.kubernetes.container.hash: 6fd799b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f01b6f8d67c6a06c273316e91a016f1dda9bccd08a3b9f130e3fa18000e3f918,PodSandboxId:42e410b99e9861aaf45f17d31f858cb722e68467cb0591c4a84f8b1158560219,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718623702755119502,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8b72m,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: f0e82fc8-8881-4fdd-9f8e-5677e69b8c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 2ede9293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:788f3e95f1389861634b7c167ecc4ed0481a5b23af544e031699d17b73670fc8,PodSandboxId:c634101d4334a5e4e7060cc4fa47040a6b8fdbc4a9f0317d22511ddd39517dc7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718623700878267082,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lh4bq,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ad51975b-c6bc-4708-8988-004224379e4e,},Annotations:map[string]string{io.kubernetes.container.hash: da5fef91,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2daedb04756afc271789d6e861aa2906d06a65ced85f3593810d3b7c83242b7,PodSandboxId:3e6c167dc3d5a7ec1a743469227888109903a3f7c6396ffbb469dd96425fc126,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718623679799019295,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e7e071f006daa8b88c2389f82277
5e,},Annotations:map[string]string{io.kubernetes.container.hash: 376600c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ab681386325c039d54197059416078c59182aa87b148cd254a9ab95e67be20e,PodSandboxId:060b3c7b4cc6c8a64d6da3ecf010956b4746bc8d16b153949312db9fa7aa845b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718623679768119203,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03547e44edb5abf39796dbbc604ea57d,},Annotation
s:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf374fea65b02f5ed17deacbbfaa890808652f70898fb22613a2aada2d9d182d,PodSandboxId:02ebcc677be0b84fdb13e9bee3c75491f5a9fce544fd304b98324e7555ca4a39,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718623679769989035,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4c9fbf3605584b11404e4c74d684666,
},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920ea6bfb6321ca417761a4aacfc34eca33f282901baef10e5ab4e211b318908,PodSandboxId:4a878aa3e3733782505f0a73f31dc066754029efb7cfd43f8dee63152afaed15,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718623679688156743,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccdb7a72133ec1402678e9ea7bf51f8d,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 64cb03a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f61671ee-5818-411d-a838-12f38d766fef name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	4bf62a6b9c546       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      40 seconds ago       Running             busybox                   1                   c286b6ab5ba22       busybox-fc5497c4f-9q9xp
	c99311a5f2af0       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      About a minute ago   Running             kindnet-cni               1                   46330c87989ec       kindnet-8b72m
	9296d7496cc7b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   09449e67a9550       coredns-7db6d8ff4d-v7jgc
	f8b63dfaed3cb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   0be7c33d595ba       storage-provisioner
	8e4df51e0870d       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      About a minute ago   Running             kube-proxy                1                   79baefacd0e09       kube-proxy-lh4bq
	d96338c1781a1       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      About a minute ago   Running             kube-scheduler            1                   0b42b8b7168ef       kube-scheduler-multinode-353869
	b5521b788f9e2       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      About a minute ago   Running             kube-controller-manager   1                   629147fd888d7       kube-controller-manager-multinode-353869
	49d3455656172       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   c61391d3fb762       etcd-multinode-353869
	aa049dc2107d5       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      About a minute ago   Running             kube-apiserver            1                   307f2884cb6d3       kube-apiserver-multinode-353869
	db87229d8ad67       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   2879eb5662c5d       busybox-fc5497c4f-9q9xp
	bb5cdb2e77c18       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   fd7b9c01c57a4       coredns-7db6d8ff4d-v7jgc
	c1209b62c2e74       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   e36a7741cda4f       storage-provisioner
	f01b6f8d67c6a       docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266    7 minutes ago        Exited              kindnet-cni               0                   42e410b99e986       kindnet-8b72m
	788f3e95f1389       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      7 minutes ago        Exited              kube-proxy                0                   c634101d4334a       kube-proxy-lh4bq
	e2daedb04756a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago        Exited              etcd                      0                   3e6c167dc3d5a       etcd-multinode-353869
	cf374fea65b02       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      7 minutes ago        Exited              kube-controller-manager   0                   02ebcc677be0b       kube-controller-manager-multinode-353869
	5ab681386325c       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      7 minutes ago        Exited              kube-scheduler            0                   060b3c7b4cc6c       kube-scheduler-multinode-353869
	920ea6bfb6321       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      7 minutes ago        Exited              kube-apiserver            0                   4a878aa3e3733       kube-apiserver-multinode-353869
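	
	A listing like the one above can typically be reproduced with crictl inside the node; a minimal sketch, assuming the multinode-353869 profile from this run is still up:
	
	  # list all CRI-O containers on the node, including exited ones
	  minikube ssh -p multinode-353869 -- sudo crictl ps -a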
	
	
	==> coredns [9296d7496cc7bd08e1aae16d5835d95d67a137b8155cc6ba963ea9ecee410394] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:49578 - 34953 "HINFO IN 6714190310131197315.2879280047215912249. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0124875s
	
	
	==> coredns [bb5cdb2e77c18dfb4033f073b3fcc0409800a764db18e2e93eac517885f5dbe4] <==
	[INFO] 10.244.1.2:54915 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001672642s
	[INFO] 10.244.1.2:54152 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00007146s
	[INFO] 10.244.1.2:51062 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103759s
	[INFO] 10.244.1.2:41957 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001250764s
	[INFO] 10.244.1.2:37232 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000058333s
	[INFO] 10.244.1.2:48361 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000054854s
	[INFO] 10.244.1.2:39552 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057381s
	[INFO] 10.244.0.3:35057 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000074603s
	[INFO] 10.244.0.3:56142 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000052163s
	[INFO] 10.244.0.3:34638 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000034473s
	[INFO] 10.244.0.3:49973 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000403s
	[INFO] 10.244.1.2:48527 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153637s
	[INFO] 10.244.1.2:55209 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106422s
	[INFO] 10.244.1.2:51699 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096153s
	[INFO] 10.244.1.2:37049 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067269s
	[INFO] 10.244.0.3:40716 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139894s
	[INFO] 10.244.0.3:60151 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000228445s
	[INFO] 10.244.0.3:38509 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000062757s
	[INFO] 10.244.0.3:34140 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000058042s
	[INFO] 10.244.1.2:48549 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124055s
	[INFO] 10.244.1.2:50145 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000085629s
	[INFO] 10.244.1.2:33962 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000070477s
	[INFO] 10.244.1.2:44280 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000197604s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
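	
	The block above comes from the CoreDNS container that exited during the restart; as a sketch (context name assumed to match the multinode-353869 profile), logs of a replaced container can usually be pulled with kubectl's --previous flag:
	
	  kubectl --context multinode-353869 -n kube-system logs coredns-7db6d8ff4d-v7jgc --previous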
	
	
	==> describe nodes <==
	Name:               multinode-353869
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-353869
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6
	                    minikube.k8s.io/name=multinode-353869
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_17T11_28_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jun 2024 11:28:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-353869
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jun 2024 11:35:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jun 2024 11:34:11 +0000   Mon, 17 Jun 2024 11:28:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jun 2024 11:34:11 +0000   Mon, 17 Jun 2024 11:28:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jun 2024 11:34:11 +0000   Mon, 17 Jun 2024 11:28:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jun 2024 11:34:11 +0000   Mon, 17 Jun 2024 11:28:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    multinode-353869
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c1260748e9f44f50b943a7c29ebbe615
	  System UUID:                c1260748-e9f4-4f50-b943-a7c29ebbe615
	  Boot ID:                    02106cd4-ca66-467d-b16d-bcee11d84f85
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9q9xp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 coredns-7db6d8ff4d-v7jgc                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m7s
	  kube-system                 etcd-multinode-353869                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m22s
	  kube-system                 kindnet-8b72m                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m8s
	  kube-system                 kube-apiserver-multinode-353869             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 kube-controller-manager-multinode-353869    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m22s
	  kube-system                 kube-proxy-lh4bq                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 kube-scheduler-multinode-353869             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m6s                   kube-proxy       
	  Normal  Starting                 73s                    kube-proxy       
	  Normal  Starting                 7m29s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m28s (x8 over 7m29s)  kubelet          Node multinode-353869 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m28s (x8 over 7m29s)  kubelet          Node multinode-353869 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m28s (x7 over 7m29s)  kubelet          Node multinode-353869 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    7m22s                  kubelet          Node multinode-353869 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  7m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m22s                  kubelet          Node multinode-353869 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     7m22s                  kubelet          Node multinode-353869 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m22s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m8s                   node-controller  Node multinode-353869 event: Registered Node multinode-353869 in Controller
	  Normal  NodeReady                7m4s                   kubelet          Node multinode-353869 status is now: NodeReady
	  Normal  Starting                 80s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  80s (x8 over 80s)      kubelet          Node multinode-353869 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    80s (x8 over 80s)      kubelet          Node multinode-353869 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     80s (x7 over 80s)      kubelet          Node multinode-353869 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  80s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           63s                    node-controller  Node multinode-353869 event: Registered Node multinode-353869 in Controller
	
	
	Name:               multinode-353869-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-353869-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6
	                    minikube.k8s.io/name=multinode-353869
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_17T11_34_53_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jun 2024 11:34:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-353869-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jun 2024 11:35:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jun 2024 11:35:23 +0000   Mon, 17 Jun 2024 11:34:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jun 2024 11:35:23 +0000   Mon, 17 Jun 2024 11:34:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jun 2024 11:35:23 +0000   Mon, 17 Jun 2024 11:34:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jun 2024 11:35:23 +0000   Mon, 17 Jun 2024 11:34:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.46
	  Hostname:    multinode-353869-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 31b0aa4474a1474bb215c773448e1c71
	  System UUID:                31b0aa44-74a1-474b-b215-c773448e1c71
	  Boot ID:                    efd768df-1f7d-4013-9de3-3660cdbd4baf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-gpwz7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kindnet-stgvs              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m32s
	  kube-system                 kube-proxy-sxh4c           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 31s                    kube-proxy  
	  Normal  Starting                 6m26s                  kube-proxy  
	  Normal  NodeHasNoDiskPressure    6m32s (x2 over 6m32s)  kubelet     Node multinode-353869-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m32s (x2 over 6m32s)  kubelet     Node multinode-353869-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m32s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m32s (x2 over 6m32s)  kubelet     Node multinode-353869-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                6m23s                  kubelet     Node multinode-353869-m02 status is now: NodeReady
	  Normal  Starting                 35s                    kubelet     Starting kubelet.
	  Normal  NodeHasNoDiskPressure    35s (x2 over 35s)      kubelet     Node multinode-353869-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s (x2 over 35s)      kubelet     Node multinode-353869-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  35s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  35s (x2 over 35s)      kubelet     Node multinode-353869-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                29s                    kubelet     Node multinode-353869-m02 status is now: NodeReady
	
	
	Name:               multinode-353869-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-353869-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6
	                    minikube.k8s.io/name=multinode-353869
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_17T11_35_18_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jun 2024 11:35:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-353869-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jun 2024 11:35:24 +0000   Mon, 17 Jun 2024 11:35:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jun 2024 11:35:24 +0000   Mon, 17 Jun 2024 11:35:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jun 2024 11:35:24 +0000   Mon, 17 Jun 2024 11:35:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jun 2024 11:35:24 +0000   Mon, 17 Jun 2024 11:35:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.138
	  Hostname:    multinode-353869-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 22c6dafd15c54db993af40c4504ded7c
	  System UUID:                22c6dafd-15c5-4db9-93af-40c4504ded7c
	  Boot ID:                    a2e9b4d0-5303-4200-8ced-4dbdf6ba8c04
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-wjcx6       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m51s
	  kube-system                 kube-proxy-h9qzc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 5m46s                  kube-proxy  
	  Normal  Starting                 6s                     kube-proxy  
	  Normal  Starting                 5m8s                   kube-proxy  
	  Normal  NodeHasSufficientMemory  5m51s (x2 over 5m51s)  kubelet     Node multinode-353869-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m51s (x2 over 5m51s)  kubelet     Node multinode-353869-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m51s (x2 over 5m51s)  kubelet     Node multinode-353869-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m51s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m43s                  kubelet     Node multinode-353869-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m13s (x2 over 5m13s)  kubelet     Node multinode-353869-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m13s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m13s (x2 over 5m13s)  kubelet     Node multinode-353869-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m13s (x2 over 5m13s)  kubelet     Node multinode-353869-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m6s                   kubelet     Node multinode-353869-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  9s (x2 over 9s)        kubelet     Node multinode-353869-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x2 over 9s)        kubelet     Node multinode-353869-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x2 over 9s)        kubelet     Node multinode-353869-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                     kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-353869-m03 status is now: NodeReady
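	
	The three node descriptions above are the state captured at failure time; a sketch of how to regenerate the same view against a live cluster (context name assumed to match the profile):
	
	  kubectl --context multinode-353869 describe nodes multinode-353869 multinode-353869-m02 multinode-353869-m03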
	
	
	==> dmesg <==
	[  +0.056587] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061400] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.171263] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.142452] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.269321] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.125245] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +4.050883] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +0.062824] kauditd_printk_skb: 158 callbacks suppressed
	[Jun17 11:28] systemd-fstab-generator[1283]: Ignoring "noauto" option for root device
	[  +0.078042] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.764929] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.811463] systemd-fstab-generator[1483]: Ignoring "noauto" option for root device
	[Jun17 11:29] kauditd_printk_skb: 82 callbacks suppressed
	[Jun17 11:33] systemd-fstab-generator[2799]: Ignoring "noauto" option for root device
	[  +0.139352] systemd-fstab-generator[2811]: Ignoring "noauto" option for root device
	[  +0.157664] systemd-fstab-generator[2825]: Ignoring "noauto" option for root device
	[  +0.141511] systemd-fstab-generator[2837]: Ignoring "noauto" option for root device
	[  +0.289575] systemd-fstab-generator[2865]: Ignoring "noauto" option for root device
	[Jun17 11:34] systemd-fstab-generator[2969]: Ignoring "noauto" option for root device
	[  +0.080243] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.616860] systemd-fstab-generator[3095]: Ignoring "noauto" option for root device
	[  +5.686652] kauditd_printk_skb: 74 callbacks suppressed
	[ +11.833679] kauditd_printk_skb: 32 callbacks suppressed
	[  +2.873293] systemd-fstab-generator[3906]: Ignoring "noauto" option for root device
	[ +19.055507] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [49d345565617207048c355eb2fd02d84dc1e79374c65582908fc5c31efb6ace2] <==
	{"level":"info","ts":"2024-06-17T11:34:09.034106Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-17T11:34:09.034115Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-17T11:34:09.034358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2212c0bfe49c9415 switched to configuration voters=(2455236677277094933)"}
	{"level":"info","ts":"2024-06-17T11:34:09.034435Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3ecd98d5111bce24","local-member-id":"2212c0bfe49c9415","added-peer-id":"2212c0bfe49c9415","added-peer-peer-urls":["https://192.168.39.17:2380"]}
	{"level":"info","ts":"2024-06-17T11:34:09.034577Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3ecd98d5111bce24","local-member-id":"2212c0bfe49c9415","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-17T11:34:09.034618Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-17T11:34:09.047493Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-17T11:34:09.047841Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"2212c0bfe49c9415","initial-advertise-peer-urls":["https://192.168.39.17:2380"],"listen-peer-urls":["https://192.168.39.17:2380"],"advertise-client-urls":["https://192.168.39.17:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.17:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-17T11:34:09.047895Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-17T11:34:09.048047Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.17:2380"}
	{"level":"info","ts":"2024-06-17T11:34:09.048074Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.17:2380"}
	{"level":"info","ts":"2024-06-17T11:34:10.598584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2212c0bfe49c9415 is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-17T11:34:10.598762Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2212c0bfe49c9415 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-17T11:34:10.598832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2212c0bfe49c9415 received MsgPreVoteResp from 2212c0bfe49c9415 at term 2"}
	{"level":"info","ts":"2024-06-17T11:34:10.598887Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2212c0bfe49c9415 became candidate at term 3"}
	{"level":"info","ts":"2024-06-17T11:34:10.598913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2212c0bfe49c9415 received MsgVoteResp from 2212c0bfe49c9415 at term 3"}
	{"level":"info","ts":"2024-06-17T11:34:10.598939Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2212c0bfe49c9415 became leader at term 3"}
	{"level":"info","ts":"2024-06-17T11:34:10.598968Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2212c0bfe49c9415 elected leader 2212c0bfe49c9415 at term 3"}
	{"level":"info","ts":"2024-06-17T11:34:10.607201Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"2212c0bfe49c9415","local-member-attributes":"{Name:multinode-353869 ClientURLs:[https://192.168.39.17:2379]}","request-path":"/0/members/2212c0bfe49c9415/attributes","cluster-id":"3ecd98d5111bce24","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-17T11:34:10.60722Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-17T11:34:10.607465Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-17T11:34:10.607499Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-17T11:34:10.607249Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-17T11:34:10.609797Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.17:2379"}
	{"level":"info","ts":"2024-06-17T11:34:10.609806Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [e2daedb04756afc271789d6e861aa2906d06a65ced85f3593810d3b7c83242b7] <==
	{"level":"info","ts":"2024-06-17T11:28:20.008574Z","caller":"traceutil/trace.go:171","msg":"trace[372394324] transaction","detail":"{read_only:false; response_revision:362; number_of_response:1; }","duration":"161.31718ms","start":"2024-06-17T11:28:19.847252Z","end":"2024-06-17T11:28:20.008569Z","steps":["trace[372394324] 'process raft request'  (duration: 161.021674ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T11:28:20.008681Z","caller":"traceutil/trace.go:171","msg":"trace[497822731] transaction","detail":"{read_only:false; response_revision:363; number_of_response:1; }","duration":"161.361752ms","start":"2024-06-17T11:28:19.847313Z","end":"2024-06-17T11:28:20.008675Z","steps":["trace[497822731] 'process raft request'  (duration: 160.97444ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-17T11:28:20.008754Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.591538ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/root-ca-cert-publisher\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2024-06-17T11:28:20.008871Z","caller":"traceutil/trace.go:171","msg":"trace[1714599716] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/root-ca-cert-publisher; range_end:; response_count:1; response_revision:365; }","duration":"161.752282ms","start":"2024-06-17T11:28:19.84711Z","end":"2024-06-17T11:28:20.008862Z","steps":["trace[1714599716] 'agreement among raft nodes before linearized reading'  (duration: 161.58823ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T11:28:20.009082Z","caller":"traceutil/trace.go:171","msg":"trace[884657230] transaction","detail":"{read_only:false; response_revision:364; number_of_response:1; }","duration":"136.112857ms","start":"2024-06-17T11:28:19.872961Z","end":"2024-06-17T11:28:20.009074Z","steps":["trace[884657230] 'process raft request'  (duration: 135.355776ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-17T11:28:20.00917Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.144241ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:612"}
	{"level":"info","ts":"2024-06-17T11:28:20.00919Z","caller":"traceutil/trace.go:171","msg":"trace[1061461294] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:365; }","duration":"125.179538ms","start":"2024-06-17T11:28:19.884004Z","end":"2024-06-17T11:28:20.009184Z","steps":["trace[1061461294] 'agreement among raft nodes before linearized reading'  (duration: 125.14839ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T11:28:20.0092Z","caller":"traceutil/trace.go:171","msg":"trace[1865179336] transaction","detail":"{read_only:false; response_revision:365; number_of_response:1; }","duration":"125.978459ms","start":"2024-06-17T11:28:19.883217Z","end":"2024-06-17T11:28:20.009195Z","steps":["trace[1865179336] 'process raft request'  (duration: 125.128546ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-17T11:28:20.008845Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.656752ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2024-06-17T11:28:20.009257Z","caller":"traceutil/trace.go:171","msg":"trace[1558316807] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:365; }","duration":"126.164242ms","start":"2024-06-17T11:28:19.883088Z","end":"2024-06-17T11:28:20.009253Z","steps":["trace[1558316807] 'agreement among raft nodes before linearized reading'  (duration: 125.659041ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T11:28:55.973733Z","caller":"traceutil/trace.go:171","msg":"trace[1039546806] transaction","detail":"{read_only:false; response_revision:489; number_of_response:1; }","duration":"187.932764ms","start":"2024-06-17T11:28:55.785787Z","end":"2024-06-17T11:28:55.97372Z","steps":["trace[1039546806] 'process raft request'  (duration: 187.891744ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T11:28:55.973951Z","caller":"traceutil/trace.go:171","msg":"trace[917262151] transaction","detail":"{read_only:false; response_revision:488; number_of_response:1; }","duration":"226.101513ms","start":"2024-06-17T11:28:55.747835Z","end":"2024-06-17T11:28:55.973937Z","steps":["trace[917262151] 'process raft request'  (duration: 186.785982ms)","trace[917262151] 'compare'  (duration: 38.758778ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-17T11:29:36.363285Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.799133ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10670593384809681442 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-353869-m03.17d9c737fbd6718c\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-353869-m03.17d9c737fbd6718c\" value_size:642 lease:1447221347954905411 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-06-17T11:29:36.363984Z","caller":"traceutil/trace.go:171","msg":"trace[1148999579] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"200.243669ms","start":"2024-06-17T11:29:36.163709Z","end":"2024-06-17T11:29:36.363952Z","steps":["trace[1148999579] 'process raft request'  (duration: 200.019907ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T11:29:36.364009Z","caller":"traceutil/trace.go:171","msg":"trace[1395434200] transaction","detail":"{read_only:false; response_revision:609; number_of_response:1; }","duration":"261.258051ms","start":"2024-06-17T11:29:36.102727Z","end":"2024-06-17T11:29:36.363985Z","steps":["trace[1395434200] 'process raft request'  (duration: 77.865863ms)","trace[1395434200] 'compare'  (duration: 181.662985ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-17T11:32:27.192466Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-06-17T11:32:27.192565Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-353869","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.17:2380"],"advertise-client-urls":["https://192.168.39.17:2379"]}
	{"level":"warn","ts":"2024-06-17T11:32:27.192787Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-17T11:32:27.192958Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-17T11:32:27.271586Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.17:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-17T11:32:27.271766Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.17:2379: use of closed network connection"}
	{"level":"info","ts":"2024-06-17T11:32:27.271854Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"2212c0bfe49c9415","current-leader-member-id":"2212c0bfe49c9415"}
	{"level":"info","ts":"2024-06-17T11:32:27.274447Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.17:2380"}
	{"level":"info","ts":"2024-06-17T11:32:27.274607Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.17:2380"}
	{"level":"info","ts":"2024-06-17T11:32:27.274684Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-353869","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.17:2380"],"advertise-client-urls":["https://192.168.39.17:2379"]}
	
	
	==> kernel <==
	 11:35:27 up 7 min,  0 users,  load average: 0.17, 0.21, 0.11
	Linux multinode-353869 5.10.207 #1 SMP Tue Jun 11 00:16:05 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [c99311a5f2af018094cefffa1d06ab60bd7f9c78720ef0903446410b62777ab1] <==
	I0617 11:34:44.379294       1 main.go:250] Node multinode-353869-m03 has CIDR [10.244.3.0/24] 
	I0617 11:34:54.392794       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0617 11:34:54.392921       1 main.go:227] handling current node
	I0617 11:34:54.392956       1 main.go:223] Handling node with IPs: map[192.168.39.46:{}]
	I0617 11:34:54.392995       1 main.go:250] Node multinode-353869-m02 has CIDR [10.244.1.0/24] 
	I0617 11:34:54.393172       1 main.go:223] Handling node with IPs: map[192.168.39.138:{}]
	I0617 11:34:54.393216       1 main.go:250] Node multinode-353869-m03 has CIDR [10.244.3.0/24] 
	I0617 11:35:04.408239       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0617 11:35:04.408332       1 main.go:227] handling current node
	I0617 11:35:04.408342       1 main.go:223] Handling node with IPs: map[192.168.39.46:{}]
	I0617 11:35:04.408347       1 main.go:250] Node multinode-353869-m02 has CIDR [10.244.1.0/24] 
	I0617 11:35:04.408504       1 main.go:223] Handling node with IPs: map[192.168.39.138:{}]
	I0617 11:35:04.408582       1 main.go:250] Node multinode-353869-m03 has CIDR [10.244.3.0/24] 
	I0617 11:35:14.445940       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0617 11:35:14.446033       1 main.go:227] handling current node
	I0617 11:35:14.446057       1 main.go:223] Handling node with IPs: map[192.168.39.46:{}]
	I0617 11:35:14.446073       1 main.go:250] Node multinode-353869-m02 has CIDR [10.244.1.0/24] 
	I0617 11:35:14.446195       1 main.go:223] Handling node with IPs: map[192.168.39.138:{}]
	I0617 11:35:14.446214       1 main.go:250] Node multinode-353869-m03 has CIDR [10.244.3.0/24] 
	I0617 11:35:24.459665       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0617 11:35:24.459782       1 main.go:227] handling current node
	I0617 11:35:24.459818       1 main.go:223] Handling node with IPs: map[192.168.39.46:{}]
	I0617 11:35:24.459836       1 main.go:250] Node multinode-353869-m02 has CIDR [10.244.1.0/24] 
	I0617 11:35:24.459979       1 main.go:223] Handling node with IPs: map[192.168.39.138:{}]
	I0617 11:35:24.460000       1 main.go:250] Node multinode-353869-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [f01b6f8d67c6a06c273316e91a016f1dda9bccd08a3b9f130e3fa18000e3f918] <==
	I0617 11:31:43.709534       1 main.go:250] Node multinode-353869-m03 has CIDR [10.244.3.0/24] 
	I0617 11:31:53.720068       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0617 11:31:53.720182       1 main.go:227] handling current node
	I0617 11:31:53.720207       1 main.go:223] Handling node with IPs: map[192.168.39.46:{}]
	I0617 11:31:53.720224       1 main.go:250] Node multinode-353869-m02 has CIDR [10.244.1.0/24] 
	I0617 11:31:53.720403       1 main.go:223] Handling node with IPs: map[192.168.39.138:{}]
	I0617 11:31:53.720428       1 main.go:250] Node multinode-353869-m03 has CIDR [10.244.3.0/24] 
	I0617 11:32:03.735273       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0617 11:32:03.735459       1 main.go:227] handling current node
	I0617 11:32:03.735492       1 main.go:223] Handling node with IPs: map[192.168.39.46:{}]
	I0617 11:32:03.735574       1 main.go:250] Node multinode-353869-m02 has CIDR [10.244.1.0/24] 
	I0617 11:32:03.735934       1 main.go:223] Handling node with IPs: map[192.168.39.138:{}]
	I0617 11:32:03.736026       1 main.go:250] Node multinode-353869-m03 has CIDR [10.244.3.0/24] 
	I0617 11:32:13.747500       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0617 11:32:13.747584       1 main.go:227] handling current node
	I0617 11:32:13.747609       1 main.go:223] Handling node with IPs: map[192.168.39.46:{}]
	I0617 11:32:13.747680       1 main.go:250] Node multinode-353869-m02 has CIDR [10.244.1.0/24] 
	I0617 11:32:13.747833       1 main.go:223] Handling node with IPs: map[192.168.39.138:{}]
	I0617 11:32:13.747857       1 main.go:250] Node multinode-353869-m03 has CIDR [10.244.3.0/24] 
	I0617 11:32:23.753222       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0617 11:32:23.753518       1 main.go:227] handling current node
	I0617 11:32:23.753681       1 main.go:223] Handling node with IPs: map[192.168.39.46:{}]
	I0617 11:32:23.753723       1 main.go:250] Node multinode-353869-m02 has CIDR [10.244.1.0/24] 
	I0617 11:32:23.753868       1 main.go:223] Handling node with IPs: map[192.168.39.138:{}]
	I0617 11:32:23.753899       1 main.go:250] Node multinode-353869-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [920ea6bfb6321ca417761a4aacfc34eca33f282901baef10e5ab4e211b318908] <==
	I0617 11:32:27.201895       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0617 11:32:27.201903       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0617 11:32:27.201908       1 apf_controller.go:386] Shutting down API Priority and Fairness config worker
	I0617 11:32:27.201914       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0617 11:32:27.201922       1 controller.go:129] Ending legacy_token_tracking_controller
	I0617 11:32:27.220138       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0617 11:32:27.201927       1 available_controller.go:439] Shutting down AvailableConditionController
	I0617 11:32:27.201937       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I0617 11:32:27.201943       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	W0617 11:32:27.220518       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0617 11:32:27.220580       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0617 11:32:27.220674       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0617 11:32:27.220705       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0617 11:32:27.220822       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0617 11:32:27.221095       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0617 11:32:27.223065       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0617 11:32:27.223180       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0617 11:32:27.223253       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0617 11:32:27.223304       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0617 11:32:27.223357       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0617 11:32:27.223403       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0617 11:32:27.223437       1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0617 11:32:27.223484       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0617 11:32:27.223538       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0617 11:32:27.224121       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [aa049dc2107d59ff0e82cf0a7a6b0a809afe251d9199dc55b0ba7a182e31ea78] <==
	I0617 11:34:11.872595       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0617 11:34:11.872830       1 aggregator.go:165] initial CRD sync complete...
	I0617 11:34:11.872861       1 autoregister_controller.go:141] Starting autoregister controller
	I0617 11:34:11.872883       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0617 11:34:11.824232       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0617 11:34:11.932024       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0617 11:34:11.932951       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0617 11:34:11.933426       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0617 11:34:11.933933       1 shared_informer.go:320] Caches are synced for configmaps
	I0617 11:34:11.934269       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0617 11:34:11.934299       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0617 11:34:11.939086       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0617 11:34:11.939238       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0617 11:34:11.943611       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0617 11:34:11.943688       1 policy_source.go:224] refreshing policies
	I0617 11:34:11.969500       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0617 11:34:11.974355       1 cache.go:39] Caches are synced for autoregister controller
	I0617 11:34:12.838339       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0617 11:34:14.236565       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0617 11:34:14.421016       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0617 11:34:14.444105       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0617 11:34:14.545552       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0617 11:34:14.558889       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0617 11:34:24.934527       1 controller.go:615] quota admission added evaluator for: endpoints
	I0617 11:34:25.084791       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [b5521b788f9e29eacd3cdb54d74dda1ede012f42edf74635592934f0b5fd94be] <==
	I0617 11:34:25.416423       1 shared_informer.go:320] Caches are synced for garbage collector
	I0617 11:34:25.416530       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0617 11:34:25.418754       1 shared_informer.go:320] Caches are synced for garbage collector
	I0617 11:34:48.189790       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.448604ms"
	I0617 11:34:48.200577       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.366306ms"
	I0617 11:34:48.200713       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="92.193µs"
	I0617 11:34:50.944821       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="100.94µs"
	I0617 11:34:52.646030       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-353869-m02\" does not exist"
	I0617 11:34:52.657688       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-353869-m02" podCIDRs=["10.244.1.0/24"]
	I0617 11:34:53.528418       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.81µs"
	I0617 11:34:53.576429       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.293µs"
	I0617 11:34:53.588407       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.968µs"
	I0617 11:34:53.614905       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.269µs"
	I0617 11:34:53.622506       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.392µs"
	I0617 11:34:53.626359       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.032µs"
	I0617 11:34:58.678258       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-353869-m02"
	I0617 11:34:58.696009       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.918µs"
	I0617 11:34:58.707945       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.893µs"
	I0617 11:35:00.078145       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.347617ms"
	I0617 11:35:00.078318       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.749µs"
	I0617 11:35:17.046994       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-353869-m02"
	I0617 11:35:18.111060       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-353869-m02"
	I0617 11:35:18.112069       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-353869-m03\" does not exist"
	I0617 11:35:18.123136       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-353869-m03" podCIDRs=["10.244.2.0/24"]
	I0617 11:35:24.335519       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-353869-m02"
	
	
	==> kube-controller-manager [cf374fea65b02f5ed17deacbbfaa890808652f70898fb22613a2aada2d9d182d] <==
	I0617 11:28:24.448239       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.998µs"
	I0617 11:28:55.977716       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-353869-m02\" does not exist"
	I0617 11:28:55.999192       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-353869-m02" podCIDRs=["10.244.1.0/24"]
	I0617 11:28:59.189863       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-353869-m02"
	I0617 11:29:04.468855       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-353869-m02"
	I0617 11:29:06.687428       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.498756ms"
	I0617 11:29:06.714708       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.206472ms"
	I0617 11:29:06.714809       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.598µs"
	I0617 11:29:08.575226       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.784737ms"
	I0617 11:29:08.575319       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.576µs"
	I0617 11:29:08.782594       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.624527ms"
	I0617 11:29:08.783251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.134µs"
	I0617 11:29:36.368737       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-353869-m03\" does not exist"
	I0617 11:29:36.369410       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-353869-m02"
	I0617 11:29:36.381361       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-353869-m03" podCIDRs=["10.244.2.0/24"]
	I0617 11:29:39.208245       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-353869-m03"
	I0617 11:29:44.348893       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-353869-m03"
	I0617 11:30:12.916288       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-353869-m02"
	I0617 11:30:14.166187       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-353869-m03\" does not exist"
	I0617 11:30:14.167441       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-353869-m02"
	I0617 11:30:14.189594       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-353869-m03" podCIDRs=["10.244.3.0/24"]
	I0617 11:30:21.232407       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-353869-m02"
	I0617 11:30:59.260621       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-353869-m03"
	I0617 11:30:59.276725       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.629703ms"
	I0617 11:30:59.276950       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.624µs"
	
	
	==> kube-proxy [788f3e95f1389861634b7c167ecc4ed0481a5b23af544e031699d17b73670fc8] <==
	I0617 11:28:21.015327       1 server_linux.go:69] "Using iptables proxy"
	I0617 11:28:21.023537       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.17"]
	I0617 11:28:21.083716       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0617 11:28:21.083802       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0617 11:28:21.083821       1 server_linux.go:165] "Using iptables Proxier"
	I0617 11:28:21.088571       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0617 11:28:21.089068       1 server.go:872] "Version info" version="v1.30.1"
	I0617 11:28:21.089113       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0617 11:28:21.091156       1 config.go:192] "Starting service config controller"
	I0617 11:28:21.091205       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0617 11:28:21.091468       1 config.go:101] "Starting endpoint slice config controller"
	I0617 11:28:21.091497       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0617 11:28:21.092816       1 config.go:319] "Starting node config controller"
	I0617 11:28:21.092845       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0617 11:28:21.192154       1 shared_informer.go:320] Caches are synced for service config
	I0617 11:28:21.192154       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0617 11:28:21.193491       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [8e4df51e0870da34508fa6131d228646bb0e4b6f39ea875e4cfc0bab53523821] <==
	I0617 11:34:13.762808       1 server_linux.go:69] "Using iptables proxy"
	I0617 11:34:13.784880       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.17"]
	I0617 11:34:13.926267       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0617 11:34:13.927828       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0617 11:34:13.927956       1 server_linux.go:165] "Using iptables Proxier"
	I0617 11:34:13.933520       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0617 11:34:13.933782       1 server.go:872] "Version info" version="v1.30.1"
	I0617 11:34:13.933835       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0617 11:34:13.935165       1 config.go:192] "Starting service config controller"
	I0617 11:34:13.935203       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0617 11:34:13.935231       1 config.go:101] "Starting endpoint slice config controller"
	I0617 11:34:13.935252       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0617 11:34:13.935967       1 config.go:319] "Starting node config controller"
	I0617 11:34:13.935995       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0617 11:34:14.035724       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0617 11:34:14.035874       1 shared_informer.go:320] Caches are synced for service config
	I0617 11:34:14.036131       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5ab681386325c039d54197059416078c59182aa87b148cd254a9ab95e67be20e] <==
	W0617 11:28:02.623948       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0617 11:28:02.626216       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0617 11:28:03.442122       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0617 11:28:03.442173       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0617 11:28:03.453335       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0617 11:28:03.453378       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0617 11:28:03.526691       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0617 11:28:03.526744       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0617 11:28:03.594831       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0617 11:28:03.595826       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0617 11:28:03.617863       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0617 11:28:03.618598       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0617 11:28:03.630195       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0617 11:28:03.630308       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0617 11:28:03.675187       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0617 11:28:03.675357       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0617 11:28:03.696083       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0617 11:28:03.697089       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0617 11:28:03.819274       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0617 11:28:03.819361       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0617 11:28:03.874483       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0617 11:28:03.874529       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0617 11:28:06.013449       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0617 11:32:27.186961       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0617 11:32:27.187598       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d96338c1781a120bd164d0cf1ee12bf47c1e4614d990ecef15f2019ec1d01a74] <==
	I0617 11:34:09.188327       1 serving.go:380] Generated self-signed cert in-memory
	W0617 11:34:11.897125       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0617 11:34:11.897222       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0617 11:34:11.897232       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0617 11:34:11.897239       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0617 11:34:11.922277       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0617 11:34:11.922324       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0617 11:34:11.924139       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0617 11:34:11.924238       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0617 11:34:11.924213       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0617 11:34:11.924238       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0617 11:34:12.024805       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 17 11:34:08 multinode-353869 kubelet[3102]: E0617 11:34:08.729327    3102 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.17:8443: connect: connection refused
	Jun 17 11:34:09 multinode-353869 kubelet[3102]: I0617 11:34:09.355969    3102 kubelet_node_status.go:73] "Attempting to register node" node="multinode-353869"
	Jun 17 11:34:11 multinode-353869 kubelet[3102]: I0617 11:34:11.986580    3102 kubelet_node_status.go:112] "Node was previously registered" node="multinode-353869"
	Jun 17 11:34:11 multinode-353869 kubelet[3102]: I0617 11:34:11.987008    3102 kubelet_node_status.go:76] "Successfully registered node" node="multinode-353869"
	Jun 17 11:34:11 multinode-353869 kubelet[3102]: I0617 11:34:11.988193    3102 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jun 17 11:34:11 multinode-353869 kubelet[3102]: I0617 11:34:11.989196    3102 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 17 11:34:12 multinode-353869 kubelet[3102]: I0617 11:34:12.821440    3102 apiserver.go:52] "Watching apiserver"
	Jun 17 11:34:12 multinode-353869 kubelet[3102]: I0617 11:34:12.824896    3102 topology_manager.go:215] "Topology Admit Handler" podUID="f0e82fc8-8881-4fdd-9f8e-5677e69b8c3b" podNamespace="kube-system" podName="kindnet-8b72m"
	Jun 17 11:34:12 multinode-353869 kubelet[3102]: I0617 11:34:12.825222    3102 topology_manager.go:215] "Topology Admit Handler" podUID="ad51975b-c6bc-4708-8988-004224379e4e" podNamespace="kube-system" podName="kube-proxy-lh4bq"
	Jun 17 11:34:12 multinode-353869 kubelet[3102]: I0617 11:34:12.825343    3102 topology_manager.go:215] "Topology Admit Handler" podUID="6c7ab078-568f-4d93-a744-f6abffe8e025" podNamespace="kube-system" podName="coredns-7db6d8ff4d-v7jgc"
	Jun 17 11:34:12 multinode-353869 kubelet[3102]: I0617 11:34:12.825431    3102 topology_manager.go:215] "Topology Admit Handler" podUID="41dea9a1-1f60-4a87-b8c1-9b0ecc3742c7" podNamespace="kube-system" podName="storage-provisioner"
	Jun 17 11:34:12 multinode-353869 kubelet[3102]: I0617 11:34:12.825612    3102 topology_manager.go:215] "Topology Admit Handler" podUID="3b3438b1-3078-4c3d-918d-7ca302c631df" podNamespace="default" podName="busybox-fc5497c4f-9q9xp"
	Jun 17 11:34:12 multinode-353869 kubelet[3102]: I0617 11:34:12.840605    3102 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jun 17 11:34:12 multinode-353869 kubelet[3102]: I0617 11:34:12.910024    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad51975b-c6bc-4708-8988-004224379e4e-lib-modules\") pod \"kube-proxy-lh4bq\" (UID: \"ad51975b-c6bc-4708-8988-004224379e4e\") " pod="kube-system/kube-proxy-lh4bq"
	Jun 17 11:34:12 multinode-353869 kubelet[3102]: I0617 11:34:12.910204    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/41dea9a1-1f60-4a87-b8c1-9b0ecc3742c7-tmp\") pod \"storage-provisioner\" (UID: \"41dea9a1-1f60-4a87-b8c1-9b0ecc3742c7\") " pod="kube-system/storage-provisioner"
	Jun 17 11:34:12 multinode-353869 kubelet[3102]: I0617 11:34:12.910279    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f0e82fc8-8881-4fdd-9f8e-5677e69b8c3b-cni-cfg\") pod \"kindnet-8b72m\" (UID: \"f0e82fc8-8881-4fdd-9f8e-5677e69b8c3b\") " pod="kube-system/kindnet-8b72m"
	Jun 17 11:34:12 multinode-353869 kubelet[3102]: I0617 11:34:12.910346    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0e82fc8-8881-4fdd-9f8e-5677e69b8c3b-lib-modules\") pod \"kindnet-8b72m\" (UID: \"f0e82fc8-8881-4fdd-9f8e-5677e69b8c3b\") " pod="kube-system/kindnet-8b72m"
	Jun 17 11:34:12 multinode-353869 kubelet[3102]: I0617 11:34:12.910389    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f0e82fc8-8881-4fdd-9f8e-5677e69b8c3b-xtables-lock\") pod \"kindnet-8b72m\" (UID: \"f0e82fc8-8881-4fdd-9f8e-5677e69b8c3b\") " pod="kube-system/kindnet-8b72m"
	Jun 17 11:34:12 multinode-353869 kubelet[3102]: I0617 11:34:12.910429    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ad51975b-c6bc-4708-8988-004224379e4e-xtables-lock\") pod \"kube-proxy-lh4bq\" (UID: \"ad51975b-c6bc-4708-8988-004224379e4e\") " pod="kube-system/kube-proxy-lh4bq"
	Jun 17 11:34:21 multinode-353869 kubelet[3102]: I0617 11:34:21.943017    3102 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jun 17 11:35:07 multinode-353869 kubelet[3102]: E0617 11:35:07.875011    3102 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 17 11:35:07 multinode-353869 kubelet[3102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 17 11:35:07 multinode-353869 kubelet[3102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 17 11:35:07 multinode-353869 kubelet[3102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 11:35:07 multinode-353869 kubelet[3102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0617 11:35:26.876470  149790 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19084-112967/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-353869 -n multinode-353869
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-353869 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (304.80s)
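The "failed to output last start logs: ... bufio.Scanner: token too long" error in the stderr above comes from Go's bufio.Scanner, whose default maximum token size is 64 KiB; any single line in lastStart.txt longer than that (the very long cluster-config lines visible in the Last Start dump are a likely culprit) makes Scan() stop with bufio.ErrTooLong. Below is a minimal, hypothetical sketch of that failure mode and the Scanner.Buffer workaround; the file path is a stand-in and this is not minikube's actual logs code.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Hypothetical path standing in for .minikube/logs/lastStart.txt.
		f, err := os.Open("/tmp/lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default limit is bufio.MaxScanTokenSize (64 KiB); a longer line makes
		// sc.Scan() return false and sc.Err() report "bufio.Scanner: token too long".
		// Raising the limit avoids the error seen in the stderr above.
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // allow lines up to 1 MiB

		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintf(os.Stderr, "failed to read file: %v\n", err)
		}
	}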

                                                
                                    
TestMultiNode/serial/StopMultiNode (141.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 stop
E0617 11:36:51.169593  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
E0617 11:37:00.447558  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-353869 stop: exit status 82 (2m0.463072739s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-353869-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-353869 stop": exit status 82
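The assertion above keys off the process exit code (82, minikube's GUEST_STOP_TIMEOUT reason). As a rough illustration of the pattern only, and not the actual helper in the minikube test suite, the following sketch shows how a Go harness can recover that code from os/exec; the binary path and arguments are copied from the command under test purely as placeholders.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Stand-in for "out/minikube-linux-amd64 -p multinode-353869 stop".
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-353869", "stop")
		out, err := cmd.CombinedOutput()

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// A non-zero exit such as 82 (GUEST_STOP_TIMEOUT) lands here.
			fmt.Printf("non-zero exit: exit status %d\n%s", exitErr.ExitCode(), out)
			return
		}
		if err != nil {
			fmt.Println("failed to run command:", err)
			return
		}
		fmt.Println("stop succeeded")
	}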
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-353869 status: exit status 3 (18.820816171s)

                                                
                                                
-- stdout --
	multinode-353869
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-353869-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0617 11:37:50.155868  150444 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.46:22: connect: no route to host
	E0617 11:37:50.155907  150444 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.46:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-353869 status" : exit status 3
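The status probe reports m02 as "host: Error" / "kubelet: Nonexistent" because SSH on 192.168.39.46:22 is unreachable ("no route to host"). A minimal sketch of the underlying reachability check, under the assumption that a plain TCP dial with a timeout is enough to distinguish a live SSH port from a dead node; the 5-second timeout is an assumption and this is not minikube's status implementation.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Address taken from the stderr above.
		addr := "192.168.39.46:22"
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			// "connect: no route to host" surfaces here when the VM is gone or its
			// network has been torn down, matching the status error in this report.
			fmt.Printf("host: Error (%v)\n", err)
			return
		}
		conn.Close()
		fmt.Println("host: Running (ssh port reachable)")
	}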
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-353869 -n multinode-353869
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-353869 logs -n 25: (1.424897793s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-353869 ssh -n                                                                 | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | multinode-353869-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-353869 cp multinode-353869-m02:/home/docker/cp-test.txt                       | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | multinode-353869:/home/docker/cp-test_multinode-353869-m02_multinode-353869.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-353869 ssh -n                                                                 | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | multinode-353869-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-353869 ssh -n multinode-353869 sudo cat                                       | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | /home/docker/cp-test_multinode-353869-m02_multinode-353869.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-353869 cp multinode-353869-m02:/home/docker/cp-test.txt                       | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | multinode-353869-m03:/home/docker/cp-test_multinode-353869-m02_multinode-353869-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-353869 ssh -n                                                                 | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | multinode-353869-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-353869 ssh -n multinode-353869-m03 sudo cat                                   | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | /home/docker/cp-test_multinode-353869-m02_multinode-353869-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-353869 cp testdata/cp-test.txt                                                | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | multinode-353869-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-353869 ssh -n                                                                 | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | multinode-353869-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-353869 cp multinode-353869-m03:/home/docker/cp-test.txt                       | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2681374672/001/cp-test_multinode-353869-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-353869 ssh -n                                                                 | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | multinode-353869-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-353869 cp multinode-353869-m03:/home/docker/cp-test.txt                       | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | multinode-353869:/home/docker/cp-test_multinode-353869-m03_multinode-353869.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-353869 ssh -n                                                                 | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | multinode-353869-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-353869 ssh -n multinode-353869 sudo cat                                       | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | /home/docker/cp-test_multinode-353869-m03_multinode-353869.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-353869 cp multinode-353869-m03:/home/docker/cp-test.txt                       | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | multinode-353869-m02:/home/docker/cp-test_multinode-353869-m03_multinode-353869-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-353869 ssh -n                                                                 | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | multinode-353869-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-353869 ssh -n multinode-353869-m02 sudo cat                                   | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | /home/docker/cp-test_multinode-353869-m03_multinode-353869-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-353869 node stop m03                                                          | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	| node    | multinode-353869 node start                                                             | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:30 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-353869                                                                | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:30 UTC |                     |
	| stop    | -p multinode-353869                                                                     | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:30 UTC |                     |
	| start   | -p multinode-353869                                                                     | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:32 UTC | 17 Jun 24 11:35 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-353869                                                                | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:35 UTC |                     |
	| node    | multinode-353869 node delete                                                            | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:35 UTC | 17 Jun 24 11:35 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-353869 stop                                                                   | multinode-353869 | jenkins | v1.33.1 | 17 Jun 24 11:35 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/17 11:32:25
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0617 11:32:25.961279  148753 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:32:25.961517  148753 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:32:25.961525  148753 out.go:304] Setting ErrFile to fd 2...
	I0617 11:32:25.961530  148753 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:32:25.961698  148753 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:32:25.962232  148753 out.go:298] Setting JSON to false
	I0617 11:32:25.963127  148753 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":4493,"bootTime":1718619453,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0617 11:32:25.963186  148753 start.go:139] virtualization: kvm guest
	I0617 11:32:25.965356  148753 out.go:177] * [multinode-353869] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0617 11:32:25.966518  148753 out.go:177]   - MINIKUBE_LOCATION=19084
	I0617 11:32:25.967669  148753 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 11:32:25.966545  148753 notify.go:220] Checking for updates...
	I0617 11:32:25.968928  148753 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 11:32:25.970099  148753 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 11:32:25.971414  148753 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0617 11:32:25.972692  148753 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 11:32:25.974319  148753 config.go:182] Loaded profile config "multinode-353869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:32:25.974432  148753 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 11:32:25.974870  148753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:32:25.974925  148753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:32:25.990454  148753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33745
	I0617 11:32:25.990892  148753 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:32:25.991408  148753 main.go:141] libmachine: Using API Version  1
	I0617 11:32:25.991430  148753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:32:25.991792  148753 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:32:25.992109  148753 main.go:141] libmachine: (multinode-353869) Calling .DriverName
	I0617 11:32:26.027385  148753 out.go:177] * Using the kvm2 driver based on existing profile
	I0617 11:32:26.028795  148753 start.go:297] selected driver: kvm2
	I0617 11:32:26.028821  148753 start.go:901] validating driver "kvm2" against &{Name:multinode-353869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.1 ClusterName:multinode-353869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.46 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.138 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:32:26.028975  148753 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 11:32:26.029329  148753 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:32:26.029409  148753 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19084-112967/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0617 11:32:26.045358  148753 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0617 11:32:26.046115  148753 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 11:32:26.046145  148753 cni.go:84] Creating CNI manager for ""
	I0617 11:32:26.046151  148753 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0617 11:32:26.046222  148753 start.go:340] cluster config:
	{Name:multinode-353869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-353869 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.46 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.138 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false ko
ng:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:32:26.046362  148753 iso.go:125] acquiring lock: {Name:mk4a199ad46ed9ee04de7b54caf7cc64218fe80c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:32:26.048199  148753 out.go:177] * Starting "multinode-353869" primary control-plane node in "multinode-353869" cluster
	I0617 11:32:26.049406  148753 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 11:32:26.049439  148753 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0617 11:32:26.049455  148753 cache.go:56] Caching tarball of preloaded images
	I0617 11:32:26.049549  148753 preload.go:173] Found /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0617 11:32:26.049564  148753 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0617 11:32:26.049698  148753 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/multinode-353869/config.json ...
	I0617 11:32:26.049934  148753 start.go:360] acquireMachinesLock for multinode-353869: {Name:mk519b8956d160a9d2b042f25b899a5ee0efa72e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 11:32:26.049990  148753 start.go:364] duration metric: took 34.941µs to acquireMachinesLock for "multinode-353869"
	I0617 11:32:26.050010  148753 start.go:96] Skipping create...Using existing machine configuration
	I0617 11:32:26.050018  148753 fix.go:54] fixHost starting: 
	I0617 11:32:26.050346  148753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:32:26.050390  148753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:32:26.064992  148753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36797
	I0617 11:32:26.065419  148753 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:32:26.065915  148753 main.go:141] libmachine: Using API Version  1
	I0617 11:32:26.065938  148753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:32:26.066259  148753 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:32:26.066456  148753 main.go:141] libmachine: (multinode-353869) Calling .DriverName
	I0617 11:32:26.066593  148753 main.go:141] libmachine: (multinode-353869) Calling .GetState
	I0617 11:32:26.068150  148753 fix.go:112] recreateIfNeeded on multinode-353869: state=Running err=<nil>
	W0617 11:32:26.068168  148753 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 11:32:26.070223  148753 out.go:177] * Updating the running kvm2 "multinode-353869" VM ...
	I0617 11:32:26.071432  148753 machine.go:94] provisionDockerMachine start ...
	I0617 11:32:26.071470  148753 main.go:141] libmachine: (multinode-353869) Calling .DriverName
	I0617 11:32:26.071674  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHHostname
	I0617 11:32:26.074212  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:32:26.074637  148753 main.go:141] libmachine: (multinode-353869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:ed:f7", ip: ""} in network mk-multinode-353869: {Iface:virbr1 ExpiryTime:2024-06-17 12:27:41 +0000 UTC Type:0 Mac:52:54:00:ef:ed:f7 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-353869 Clientid:01:52:54:00:ef:ed:f7}
	I0617 11:32:26.074671  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined IP address 192.168.39.17 and MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:32:26.074816  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHPort
	I0617 11:32:26.075001  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHKeyPath
	I0617 11:32:26.075151  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHKeyPath
	I0617 11:32:26.075347  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHUsername
	I0617 11:32:26.075537  148753 main.go:141] libmachine: Using SSH client type: native
	I0617 11:32:26.075716  148753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0617 11:32:26.075725  148753 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 11:32:26.192623  148753 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-353869
	
	I0617 11:32:26.192663  148753 main.go:141] libmachine: (multinode-353869) Calling .GetMachineName
	I0617 11:32:26.192907  148753 buildroot.go:166] provisioning hostname "multinode-353869"
	I0617 11:32:26.192932  148753 main.go:141] libmachine: (multinode-353869) Calling .GetMachineName
	I0617 11:32:26.193235  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHHostname
	I0617 11:32:26.195603  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:32:26.196001  148753 main.go:141] libmachine: (multinode-353869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:ed:f7", ip: ""} in network mk-multinode-353869: {Iface:virbr1 ExpiryTime:2024-06-17 12:27:41 +0000 UTC Type:0 Mac:52:54:00:ef:ed:f7 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-353869 Clientid:01:52:54:00:ef:ed:f7}
	I0617 11:32:26.196040  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined IP address 192.168.39.17 and MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:32:26.196128  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHPort
	I0617 11:32:26.196294  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHKeyPath
	I0617 11:32:26.196484  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHKeyPath
	I0617 11:32:26.196637  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHUsername
	I0617 11:32:26.196805  148753 main.go:141] libmachine: Using SSH client type: native
	I0617 11:32:26.196992  148753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0617 11:32:26.197010  148753 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-353869 && echo "multinode-353869" | sudo tee /etc/hostname
	I0617 11:32:26.327300  148753 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-353869
	
	I0617 11:32:26.327337  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHHostname
	I0617 11:32:26.330135  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:32:26.330485  148753 main.go:141] libmachine: (multinode-353869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:ed:f7", ip: ""} in network mk-multinode-353869: {Iface:virbr1 ExpiryTime:2024-06-17 12:27:41 +0000 UTC Type:0 Mac:52:54:00:ef:ed:f7 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-353869 Clientid:01:52:54:00:ef:ed:f7}
	I0617 11:32:26.330515  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined IP address 192.168.39.17 and MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:32:26.330676  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHPort
	I0617 11:32:26.330870  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHKeyPath
	I0617 11:32:26.331010  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHKeyPath
	I0617 11:32:26.331149  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHUsername
	I0617 11:32:26.331336  148753 main.go:141] libmachine: Using SSH client type: native
	I0617 11:32:26.331550  148753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0617 11:32:26.331567  148753 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-353869' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-353869/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-353869' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 11:32:26.444506  148753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 11:32:26.444543  148753 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 11:32:26.444588  148753 buildroot.go:174] setting up certificates
	I0617 11:32:26.444597  148753 provision.go:84] configureAuth start
	I0617 11:32:26.444610  148753 main.go:141] libmachine: (multinode-353869) Calling .GetMachineName
	I0617 11:32:26.444922  148753 main.go:141] libmachine: (multinode-353869) Calling .GetIP
	I0617 11:32:26.447482  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:32:26.447841  148753 main.go:141] libmachine: (multinode-353869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:ed:f7", ip: ""} in network mk-multinode-353869: {Iface:virbr1 ExpiryTime:2024-06-17 12:27:41 +0000 UTC Type:0 Mac:52:54:00:ef:ed:f7 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-353869 Clientid:01:52:54:00:ef:ed:f7}
	I0617 11:32:26.447861  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined IP address 192.168.39.17 and MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:32:26.448025  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHHostname
	I0617 11:32:26.449996  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:32:26.450370  148753 main.go:141] libmachine: (multinode-353869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:ed:f7", ip: ""} in network mk-multinode-353869: {Iface:virbr1 ExpiryTime:2024-06-17 12:27:41 +0000 UTC Type:0 Mac:52:54:00:ef:ed:f7 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-353869 Clientid:01:52:54:00:ef:ed:f7}
	I0617 11:32:26.450397  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined IP address 192.168.39.17 and MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:32:26.450498  148753 provision.go:143] copyHostCerts
	I0617 11:32:26.450530  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 11:32:26.450583  148753 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 11:32:26.450595  148753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 11:32:26.450670  148753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 11:32:26.450763  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 11:32:26.450788  148753 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 11:32:26.450794  148753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 11:32:26.450834  148753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 11:32:26.450895  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 11:32:26.450933  148753 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 11:32:26.450942  148753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 11:32:26.450976  148753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 11:32:26.451055  148753 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.multinode-353869 san=[127.0.0.1 192.168.39.17 localhost minikube multinode-353869]
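For hand verification, the SAN list generated here (127.0.0.1, 192.168.39.17, localhost, minikube, multinode-353869) can be checked against the regenerated certificate with a standard openssl query. This is a verification sketch against the file path logged above, not a command the test itself runs:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'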
	I0617 11:32:26.887475  148753 provision.go:177] copyRemoteCerts
	I0617 11:32:26.887550  148753 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 11:32:26.887582  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHHostname
	I0617 11:32:26.890524  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:32:26.890883  148753 main.go:141] libmachine: (multinode-353869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:ed:f7", ip: ""} in network mk-multinode-353869: {Iface:virbr1 ExpiryTime:2024-06-17 12:27:41 +0000 UTC Type:0 Mac:52:54:00:ef:ed:f7 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-353869 Clientid:01:52:54:00:ef:ed:f7}
	I0617 11:32:26.890918  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined IP address 192.168.39.17 and MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:32:26.891108  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHPort
	I0617 11:32:26.891326  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHKeyPath
	I0617 11:32:26.891513  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHUsername
	I0617 11:32:26.891677  148753 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/multinode-353869/id_rsa Username:docker}
	I0617 11:32:26.977897  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0617 11:32:26.977974  148753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 11:32:27.002624  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0617 11:32:27.002692  148753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0617 11:32:27.027368  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0617 11:32:27.027435  148753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0617 11:32:27.051585  148753 provision.go:87] duration metric: took 606.973497ms to configureAuth
	I0617 11:32:27.051610  148753 buildroot.go:189] setting minikube options for container-runtime
	I0617 11:32:27.051850  148753 config.go:182] Loaded profile config "multinode-353869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:32:27.051959  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHHostname
	I0617 11:32:27.055006  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:32:27.055367  148753 main.go:141] libmachine: (multinode-353869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:ed:f7", ip: ""} in network mk-multinode-353869: {Iface:virbr1 ExpiryTime:2024-06-17 12:27:41 +0000 UTC Type:0 Mac:52:54:00:ef:ed:f7 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-353869 Clientid:01:52:54:00:ef:ed:f7}
	I0617 11:32:27.055400  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined IP address 192.168.39.17 and MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:32:27.055622  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHPort
	I0617 11:32:27.055836  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHKeyPath
	I0617 11:32:27.056062  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHKeyPath
	I0617 11:32:27.056184  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHUsername
	I0617 11:32:27.056389  148753 main.go:141] libmachine: Using SSH client type: native
	I0617 11:32:27.056567  148753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0617 11:32:27.056588  148753 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 11:33:57.781443  148753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 11:33:57.781476  148753 machine.go:97] duration metric: took 1m31.710026688s to provisionDockerMachine
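The %!s(MISSING) token in the SSH command logged at 11:32:27 is a Go format verb whose argument is not rendered in the log; judging from the echoed output, the command actually executed is presumably equivalent to:

    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio

Roughly 90 of the 91.7 seconds attributed to provisionDockerMachine elapse between issuing this command (11:32:27) and receiving its output (11:33:57), so the slow step is presumably the embedded sudo systemctl restart crio.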
	I0617 11:33:57.781497  148753 start.go:293] postStartSetup for "multinode-353869" (driver="kvm2")
	I0617 11:33:57.781509  148753 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 11:33:57.781532  148753 main.go:141] libmachine: (multinode-353869) Calling .DriverName
	I0617 11:33:57.781891  148753 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 11:33:57.781930  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHHostname
	I0617 11:33:57.785057  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:33:57.785539  148753 main.go:141] libmachine: (multinode-353869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:ed:f7", ip: ""} in network mk-multinode-353869: {Iface:virbr1 ExpiryTime:2024-06-17 12:27:41 +0000 UTC Type:0 Mac:52:54:00:ef:ed:f7 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-353869 Clientid:01:52:54:00:ef:ed:f7}
	I0617 11:33:57.785568  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined IP address 192.168.39.17 and MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:33:57.785720  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHPort
	I0617 11:33:57.785974  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHKeyPath
	I0617 11:33:57.786154  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHUsername
	I0617 11:33:57.786293  148753 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/multinode-353869/id_rsa Username:docker}
	I0617 11:33:57.875173  148753 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 11:33:57.879441  148753 command_runner.go:130] > NAME=Buildroot
	I0617 11:33:57.879477  148753 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0617 11:33:57.879483  148753 command_runner.go:130] > ID=buildroot
	I0617 11:33:57.879490  148753 command_runner.go:130] > VERSION_ID=2023.02.9
	I0617 11:33:57.879497  148753 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0617 11:33:57.879541  148753 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 11:33:57.879557  148753 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 11:33:57.879624  148753 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 11:33:57.879718  148753 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 11:33:57.879729  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> /etc/ssl/certs/1201742.pem
	I0617 11:33:57.879857  148753 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 11:33:57.888732  148753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 11:33:57.912636  148753 start.go:296] duration metric: took 131.122511ms for postStartSetup
	I0617 11:33:57.912677  148753 fix.go:56] duration metric: took 1m31.862660345s for fixHost
	I0617 11:33:57.912704  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHHostname
	I0617 11:33:57.915219  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:33:57.915610  148753 main.go:141] libmachine: (multinode-353869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:ed:f7", ip: ""} in network mk-multinode-353869: {Iface:virbr1 ExpiryTime:2024-06-17 12:27:41 +0000 UTC Type:0 Mac:52:54:00:ef:ed:f7 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-353869 Clientid:01:52:54:00:ef:ed:f7}
	I0617 11:33:57.915655  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined IP address 192.168.39.17 and MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:33:57.915829  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHPort
	I0617 11:33:57.916155  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHKeyPath
	I0617 11:33:57.916330  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHKeyPath
	I0617 11:33:57.916464  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHUsername
	I0617 11:33:57.916625  148753 main.go:141] libmachine: Using SSH client type: native
	I0617 11:33:57.916819  148753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0617 11:33:57.916833  148753 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0617 11:33:58.032688  148753 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718624038.012948646
	
	I0617 11:33:58.032713  148753 fix.go:216] guest clock: 1718624038.012948646
	I0617 11:33:58.032720  148753 fix.go:229] Guest: 2024-06-17 11:33:58.012948646 +0000 UTC Remote: 2024-06-17 11:33:57.912682426 +0000 UTC m=+91.988111464 (delta=100.26622ms)
	I0617 11:33:58.032741  148753 fix.go:200] guest clock delta is within tolerance: 100.26622ms
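The clock check runs date +%!s(MISSING).%!N(MISSING) on the guest, which with the format verbs restored is presumably date +%s.%N (epoch seconds with nanoseconds). The logged delta is just the sub-second difference between the guest and remote timestamps:

    11:33:58.012948646 - 11:33:57.912682426 = 0.100266220 s = 100.26622 ms

which matches the reported value and falls within the stated tolerance.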
	I0617 11:33:58.032745  148753 start.go:83] releasing machines lock for "multinode-353869", held for 1m31.982742914s
	I0617 11:33:58.032766  148753 main.go:141] libmachine: (multinode-353869) Calling .DriverName
	I0617 11:33:58.033031  148753 main.go:141] libmachine: (multinode-353869) Calling .GetIP
	I0617 11:33:58.035523  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:33:58.035877  148753 main.go:141] libmachine: (multinode-353869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:ed:f7", ip: ""} in network mk-multinode-353869: {Iface:virbr1 ExpiryTime:2024-06-17 12:27:41 +0000 UTC Type:0 Mac:52:54:00:ef:ed:f7 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-353869 Clientid:01:52:54:00:ef:ed:f7}
	I0617 11:33:58.035906  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined IP address 192.168.39.17 and MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:33:58.036066  148753 main.go:141] libmachine: (multinode-353869) Calling .DriverName
	I0617 11:33:58.036549  148753 main.go:141] libmachine: (multinode-353869) Calling .DriverName
	I0617 11:33:58.036725  148753 main.go:141] libmachine: (multinode-353869) Calling .DriverName
	I0617 11:33:58.036789  148753 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 11:33:58.036849  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHHostname
	I0617 11:33:58.036958  148753 ssh_runner.go:195] Run: cat /version.json
	I0617 11:33:58.036984  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHHostname
	I0617 11:33:58.039633  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:33:58.039662  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:33:58.040009  148753 main.go:141] libmachine: (multinode-353869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:ed:f7", ip: ""} in network mk-multinode-353869: {Iface:virbr1 ExpiryTime:2024-06-17 12:27:41 +0000 UTC Type:0 Mac:52:54:00:ef:ed:f7 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-353869 Clientid:01:52:54:00:ef:ed:f7}
	I0617 11:33:58.040042  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined IP address 192.168.39.17 and MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:33:58.040069  148753 main.go:141] libmachine: (multinode-353869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:ed:f7", ip: ""} in network mk-multinode-353869: {Iface:virbr1 ExpiryTime:2024-06-17 12:27:41 +0000 UTC Type:0 Mac:52:54:00:ef:ed:f7 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-353869 Clientid:01:52:54:00:ef:ed:f7}
	I0617 11:33:58.040087  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined IP address 192.168.39.17 and MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:33:58.040127  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHPort
	I0617 11:33:58.040323  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHPort
	I0617 11:33:58.040324  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHKeyPath
	I0617 11:33:58.040524  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHKeyPath
	I0617 11:33:58.040527  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHUsername
	I0617 11:33:58.040706  148753 main.go:141] libmachine: (multinode-353869) Calling .GetSSHUsername
	I0617 11:33:58.040698  148753 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/multinode-353869/id_rsa Username:docker}
	I0617 11:33:58.040804  148753 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/multinode-353869/id_rsa Username:docker}
	I0617 11:33:58.145679  148753 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0617 11:33:58.146469  148753 command_runner.go:130] > {"iso_version": "v1.33.1-1718047936-19044", "kicbase_version": "v0.0.44-1718016726-19044", "minikube_version": "v1.33.1", "commit": "8a07c05cb41cba41fd6bf6981cdae9c899c82330"}
	I0617 11:33:58.146637  148753 ssh_runner.go:195] Run: systemctl --version
	I0617 11:33:58.152680  148753 command_runner.go:130] > systemd 252 (252)
	I0617 11:33:58.152712  148753 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0617 11:33:58.153058  148753 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 11:33:58.322242  148753 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0617 11:33:58.328696  148753 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0617 11:33:58.328745  148753 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 11:33:58.328792  148753 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 11:33:58.338502  148753 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
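The %!p(MISSING) in the find invocation above is the same kind of unrendered format verb (find's %p, the file path). With quoting restored for a POSIX shell, the command is presumably of the form:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c "sudo mv {} {}.mk_disabled" \;

i.e. any bridge or podman CNI config would be renamed with a .mk_disabled suffix; here none matched, so nothing was disabled.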
	I0617 11:33:58.338522  148753 start.go:494] detecting cgroup driver to use...
	I0617 11:33:58.338580  148753 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 11:33:58.356862  148753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 11:33:58.372505  148753 docker.go:217] disabling cri-docker service (if available) ...
	I0617 11:33:58.372556  148753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 11:33:58.386275  148753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 11:33:58.399935  148753 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 11:33:58.540508  148753 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 11:33:58.672719  148753 docker.go:233] disabling docker service ...
	I0617 11:33:58.672808  148753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 11:33:58.688012  148753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 11:33:58.701122  148753 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 11:33:58.833952  148753 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 11:33:58.975681  148753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 11:33:58.990913  148753 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 11:33:59.010824  148753 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0617 11:33:59.010907  148753 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0617 11:33:59.010965  148753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:33:59.022161  148753 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 11:33:59.022228  148753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:33:59.033915  148753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:33:59.045436  148753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:33:59.056522  148753 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 11:33:59.068798  148753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:33:59.079937  148753 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:33:59.091408  148753 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
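The sequence of sed edits above rewrites the existing CRI-O drop-in in place rather than replacing it. Assuming the stock layout of that drop-in, /etc/crio/crio.conf.d/02-crio.conf should end up carrying settings roughly like the following sketch (the file itself is not captured in this log):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]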
	I0617 11:33:59.102243  148753 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 11:33:59.111775  148753 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0617 11:33:59.111860  148753 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 11:33:59.121433  148753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:33:59.271819  148753 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0617 11:34:05.518896  148753 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.247037845s)
	I0617 11:34:05.518926  148753 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 11:34:05.518981  148753 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 11:34:05.523890  148753 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0617 11:34:05.523921  148753 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0617 11:34:05.523931  148753 command_runner.go:130] > Device: 0,22	Inode: 1329        Links: 1
	I0617 11:34:05.523942  148753 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0617 11:34:05.523952  148753 command_runner.go:130] > Access: 2024-06-17 11:34:05.398911187 +0000
	I0617 11:34:05.523963  148753 command_runner.go:130] > Modify: 2024-06-17 11:34:05.398911187 +0000
	I0617 11:34:05.523975  148753 command_runner.go:130] > Change: 2024-06-17 11:34:05.398911187 +0000
	I0617 11:34:05.523981  148753 command_runner.go:130] >  Birth: -
	I0617 11:34:05.524003  148753 start.go:562] Will wait 60s for crictl version
	I0617 11:34:05.524051  148753 ssh_runner.go:195] Run: which crictl
	I0617 11:34:05.527736  148753 command_runner.go:130] > /usr/bin/crictl
	I0617 11:34:05.527797  148753 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 11:34:05.567279  148753 command_runner.go:130] > Version:  0.1.0
	I0617 11:34:05.567305  148753 command_runner.go:130] > RuntimeName:  cri-o
	I0617 11:34:05.567335  148753 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0617 11:34:05.567346  148753 command_runner.go:130] > RuntimeApiVersion:  v1
	I0617 11:34:05.567367  148753 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 11:34:05.567436  148753 ssh_runner.go:195] Run: crio --version
	I0617 11:34:05.595921  148753 command_runner.go:130] > crio version 1.29.1
	I0617 11:34:05.595948  148753 command_runner.go:130] > Version:        1.29.1
	I0617 11:34:05.595956  148753 command_runner.go:130] > GitCommit:      unknown
	I0617 11:34:05.595963  148753 command_runner.go:130] > GitCommitDate:  unknown
	I0617 11:34:05.595975  148753 command_runner.go:130] > GitTreeState:   clean
	I0617 11:34:05.595984  148753 command_runner.go:130] > BuildDate:      2024-06-11T00:56:20Z
	I0617 11:34:05.595993  148753 command_runner.go:130] > GoVersion:      go1.21.6
	I0617 11:34:05.596008  148753 command_runner.go:130] > Compiler:       gc
	I0617 11:34:05.596018  148753 command_runner.go:130] > Platform:       linux/amd64
	I0617 11:34:05.596027  148753 command_runner.go:130] > Linkmode:       dynamic
	I0617 11:34:05.596037  148753 command_runner.go:130] > BuildTags:      
	I0617 11:34:05.596046  148753 command_runner.go:130] >   containers_image_ostree_stub
	I0617 11:34:05.596053  148753 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0617 11:34:05.596057  148753 command_runner.go:130] >   btrfs_noversion
	I0617 11:34:05.596062  148753 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0617 11:34:05.596066  148753 command_runner.go:130] >   libdm_no_deferred_remove
	I0617 11:34:05.596070  148753 command_runner.go:130] >   seccomp
	I0617 11:34:05.596074  148753 command_runner.go:130] > LDFlags:          unknown
	I0617 11:34:05.596080  148753 command_runner.go:130] > SeccompEnabled:   true
	I0617 11:34:05.596085  148753 command_runner.go:130] > AppArmorEnabled:  false
	I0617 11:34:05.597093  148753 ssh_runner.go:195] Run: crio --version
	I0617 11:34:05.630174  148753 command_runner.go:130] > crio version 1.29.1
	I0617 11:34:05.630196  148753 command_runner.go:130] > Version:        1.29.1
	I0617 11:34:05.630204  148753 command_runner.go:130] > GitCommit:      unknown
	I0617 11:34:05.630209  148753 command_runner.go:130] > GitCommitDate:  unknown
	I0617 11:34:05.630215  148753 command_runner.go:130] > GitTreeState:   clean
	I0617 11:34:05.630224  148753 command_runner.go:130] > BuildDate:      2024-06-11T00:56:20Z
	I0617 11:34:05.630231  148753 command_runner.go:130] > GoVersion:      go1.21.6
	I0617 11:34:05.630239  148753 command_runner.go:130] > Compiler:       gc
	I0617 11:34:05.630247  148753 command_runner.go:130] > Platform:       linux/amd64
	I0617 11:34:05.630254  148753 command_runner.go:130] > Linkmode:       dynamic
	I0617 11:34:05.630262  148753 command_runner.go:130] > BuildTags:      
	I0617 11:34:05.630273  148753 command_runner.go:130] >   containers_image_ostree_stub
	I0617 11:34:05.630281  148753 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0617 11:34:05.630288  148753 command_runner.go:130] >   btrfs_noversion
	I0617 11:34:05.630296  148753 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0617 11:34:05.630307  148753 command_runner.go:130] >   libdm_no_deferred_remove
	I0617 11:34:05.630314  148753 command_runner.go:130] >   seccomp
	I0617 11:34:05.630321  148753 command_runner.go:130] > LDFlags:          unknown
	I0617 11:34:05.630328  148753 command_runner.go:130] > SeccompEnabled:   true
	I0617 11:34:05.630336  148753 command_runner.go:130] > AppArmorEnabled:  false
	I0617 11:34:05.633489  148753 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0617 11:34:05.634834  148753 main.go:141] libmachine: (multinode-353869) Calling .GetIP
	I0617 11:34:05.637476  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:34:05.637783  148753 main.go:141] libmachine: (multinode-353869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:ed:f7", ip: ""} in network mk-multinode-353869: {Iface:virbr1 ExpiryTime:2024-06-17 12:27:41 +0000 UTC Type:0 Mac:52:54:00:ef:ed:f7 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-353869 Clientid:01:52:54:00:ef:ed:f7}
	I0617 11:34:05.637802  148753 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined IP address 192.168.39.17 and MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:34:05.638041  148753 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0617 11:34:05.642196  148753 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0617 11:34:05.642337  148753 kubeadm.go:877] updating cluster {Name:multinode-353869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.1 ClusterName:multinode-353869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.46 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.138 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fals
e inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 11:34:05.642486  148753 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 11:34:05.642532  148753 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 11:34:05.693554  148753 command_runner.go:130] > {
	I0617 11:34:05.693584  148753 command_runner.go:130] >   "images": [
	I0617 11:34:05.693591  148753 command_runner.go:130] >     {
	I0617 11:34:05.693600  148753 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0617 11:34:05.693605  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.693611  148753 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0617 11:34:05.693615  148753 command_runner.go:130] >       ],
	I0617 11:34:05.693619  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.693626  148753 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0617 11:34:05.693633  148753 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0617 11:34:05.693637  148753 command_runner.go:130] >       ],
	I0617 11:34:05.693642  148753 command_runner.go:130] >       "size": "65291810",
	I0617 11:34:05.693652  148753 command_runner.go:130] >       "uid": null,
	I0617 11:34:05.693660  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.693675  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.693682  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.693690  148753 command_runner.go:130] >     },
	I0617 11:34:05.693693  148753 command_runner.go:130] >     {
	I0617 11:34:05.693700  148753 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0617 11:34:05.693704  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.693710  148753 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0617 11:34:05.693714  148753 command_runner.go:130] >       ],
	I0617 11:34:05.693718  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.693726  148753 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0617 11:34:05.693740  148753 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0617 11:34:05.693750  148753 command_runner.go:130] >       ],
	I0617 11:34:05.693757  148753 command_runner.go:130] >       "size": "65908273",
	I0617 11:34:05.693762  148753 command_runner.go:130] >       "uid": null,
	I0617 11:34:05.693776  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.693786  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.693792  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.693799  148753 command_runner.go:130] >     },
	I0617 11:34:05.693803  148753 command_runner.go:130] >     {
	I0617 11:34:05.693811  148753 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0617 11:34:05.693817  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.693829  148753 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0617 11:34:05.693841  148753 command_runner.go:130] >       ],
	I0617 11:34:05.693850  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.693864  148753 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0617 11:34:05.693879  148753 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0617 11:34:05.693887  148753 command_runner.go:130] >       ],
	I0617 11:34:05.693893  148753 command_runner.go:130] >       "size": "1363676",
	I0617 11:34:05.693899  148753 command_runner.go:130] >       "uid": null,
	I0617 11:34:05.693907  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.693916  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.693926  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.693932  148753 command_runner.go:130] >     },
	I0617 11:34:05.693941  148753 command_runner.go:130] >     {
	I0617 11:34:05.693954  148753 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0617 11:34:05.693963  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.693973  148753 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0617 11:34:05.693980  148753 command_runner.go:130] >       ],
	I0617 11:34:05.693985  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.694002  148753 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0617 11:34:05.694029  148753 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0617 11:34:05.694039  148753 command_runner.go:130] >       ],
	I0617 11:34:05.694046  148753 command_runner.go:130] >       "size": "31470524",
	I0617 11:34:05.694060  148753 command_runner.go:130] >       "uid": null,
	I0617 11:34:05.694067  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.694072  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.694083  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.694091  148753 command_runner.go:130] >     },
	I0617 11:34:05.694098  148753 command_runner.go:130] >     {
	I0617 11:34:05.694111  148753 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0617 11:34:05.694120  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.694132  148753 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0617 11:34:05.694141  148753 command_runner.go:130] >       ],
	I0617 11:34:05.694148  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.694156  148753 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0617 11:34:05.694171  148753 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0617 11:34:05.694181  148753 command_runner.go:130] >       ],
	I0617 11:34:05.694191  148753 command_runner.go:130] >       "size": "61245718",
	I0617 11:34:05.694199  148753 command_runner.go:130] >       "uid": null,
	I0617 11:34:05.694209  148753 command_runner.go:130] >       "username": "nonroot",
	I0617 11:34:05.694218  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.694227  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.694234  148753 command_runner.go:130] >     },
	I0617 11:34:05.694237  148753 command_runner.go:130] >     {
	I0617 11:34:05.694250  148753 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0617 11:34:05.694260  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.694271  148753 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0617 11:34:05.694280  148753 command_runner.go:130] >       ],
	I0617 11:34:05.694289  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.694303  148753 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0617 11:34:05.694316  148753 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0617 11:34:05.694322  148753 command_runner.go:130] >       ],
	I0617 11:34:05.694328  148753 command_runner.go:130] >       "size": "150779692",
	I0617 11:34:05.694337  148753 command_runner.go:130] >       "uid": {
	I0617 11:34:05.694348  148753 command_runner.go:130] >         "value": "0"
	I0617 11:34:05.694358  148753 command_runner.go:130] >       },
	I0617 11:34:05.694367  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.694377  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.694386  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.694394  148753 command_runner.go:130] >     },
	I0617 11:34:05.694400  148753 command_runner.go:130] >     {
	I0617 11:34:05.694409  148753 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0617 11:34:05.694418  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.694431  148753 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0617 11:34:05.694439  148753 command_runner.go:130] >       ],
	I0617 11:34:05.694446  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.694461  148753 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0617 11:34:05.694476  148753 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0617 11:34:05.694485  148753 command_runner.go:130] >       ],
	I0617 11:34:05.694490  148753 command_runner.go:130] >       "size": "117601759",
	I0617 11:34:05.694494  148753 command_runner.go:130] >       "uid": {
	I0617 11:34:05.694499  148753 command_runner.go:130] >         "value": "0"
	I0617 11:34:05.694508  148753 command_runner.go:130] >       },
	I0617 11:34:05.694518  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.694527  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.694536  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.694546  148753 command_runner.go:130] >     },
	I0617 11:34:05.694554  148753 command_runner.go:130] >     {
	I0617 11:34:05.694564  148753 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0617 11:34:05.694572  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.694580  148753 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0617 11:34:05.694585  148753 command_runner.go:130] >       ],
	I0617 11:34:05.694593  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.694618  148753 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0617 11:34:05.694633  148753 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0617 11:34:05.694641  148753 command_runner.go:130] >       ],
	I0617 11:34:05.694648  148753 command_runner.go:130] >       "size": "112170310",
	I0617 11:34:05.694656  148753 command_runner.go:130] >       "uid": {
	I0617 11:34:05.694667  148753 command_runner.go:130] >         "value": "0"
	I0617 11:34:05.694676  148753 command_runner.go:130] >       },
	I0617 11:34:05.694684  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.694691  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.694698  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.694702  148753 command_runner.go:130] >     },
	I0617 11:34:05.694708  148753 command_runner.go:130] >     {
	I0617 11:34:05.694718  148753 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0617 11:34:05.694725  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.694733  148753 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0617 11:34:05.694742  148753 command_runner.go:130] >       ],
	I0617 11:34:05.694748  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.694765  148753 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0617 11:34:05.694780  148753 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0617 11:34:05.694789  148753 command_runner.go:130] >       ],
	I0617 11:34:05.694796  148753 command_runner.go:130] >       "size": "85933465",
	I0617 11:34:05.694805  148753 command_runner.go:130] >       "uid": null,
	I0617 11:34:05.694814  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.694824  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.694834  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.694845  148753 command_runner.go:130] >     },
	I0617 11:34:05.694850  148753 command_runner.go:130] >     {
	I0617 11:34:05.694860  148753 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0617 11:34:05.694867  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.694875  148753 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0617 11:34:05.694881  148753 command_runner.go:130] >       ],
	I0617 11:34:05.694887  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.694902  148753 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0617 11:34:05.694917  148753 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0617 11:34:05.694928  148753 command_runner.go:130] >       ],
	I0617 11:34:05.694936  148753 command_runner.go:130] >       "size": "63026504",
	I0617 11:34:05.694945  148753 command_runner.go:130] >       "uid": {
	I0617 11:34:05.694952  148753 command_runner.go:130] >         "value": "0"
	I0617 11:34:05.694961  148753 command_runner.go:130] >       },
	I0617 11:34:05.694968  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.694975  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.694981  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.694990  148753 command_runner.go:130] >     },
	I0617 11:34:05.694996  148753 command_runner.go:130] >     {
	I0617 11:34:05.695006  148753 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0617 11:34:05.695020  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.695031  148753 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0617 11:34:05.695038  148753 command_runner.go:130] >       ],
	I0617 11:34:05.695049  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.695063  148753 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0617 11:34:05.695077  148753 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0617 11:34:05.695086  148753 command_runner.go:130] >       ],
	I0617 11:34:05.695095  148753 command_runner.go:130] >       "size": "750414",
	I0617 11:34:05.695103  148753 command_runner.go:130] >       "uid": {
	I0617 11:34:05.695107  148753 command_runner.go:130] >         "value": "65535"
	I0617 11:34:05.695114  148753 command_runner.go:130] >       },
	I0617 11:34:05.695125  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.695135  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.695141  148753 command_runner.go:130] >       "pinned": true
	I0617 11:34:05.695150  148753 command_runner.go:130] >     }
	I0617 11:34:05.695158  148753 command_runner.go:130] >   ]
	I0617 11:34:05.695163  148753 command_runner.go:130] > }
	I0617 11:34:05.695371  148753 crio.go:514] all images are preloaded for cri-o runtime.
	I0617 11:34:05.695384  148753 crio.go:433] Images already preloaded, skipping extraction
	I0617 11:34:05.695436  148753 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 11:34:05.727339  148753 command_runner.go:130] > {
	I0617 11:34:05.727358  148753 command_runner.go:130] >   "images": [
	I0617 11:34:05.727362  148753 command_runner.go:130] >     {
	I0617 11:34:05.727372  148753 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0617 11:34:05.727377  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.727382  148753 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0617 11:34:05.727386  148753 command_runner.go:130] >       ],
	I0617 11:34:05.727390  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.727399  148753 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0617 11:34:05.727406  148753 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0617 11:34:05.727410  148753 command_runner.go:130] >       ],
	I0617 11:34:05.727416  148753 command_runner.go:130] >       "size": "65291810",
	I0617 11:34:05.727429  148753 command_runner.go:130] >       "uid": null,
	I0617 11:34:05.727436  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.727441  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.727445  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.727448  148753 command_runner.go:130] >     },
	I0617 11:34:05.727452  148753 command_runner.go:130] >     {
	I0617 11:34:05.727476  148753 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0617 11:34:05.727483  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.727495  148753 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0617 11:34:05.727501  148753 command_runner.go:130] >       ],
	I0617 11:34:05.727511  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.727521  148753 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0617 11:34:05.727529  148753 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0617 11:34:05.727533  148753 command_runner.go:130] >       ],
	I0617 11:34:05.727539  148753 command_runner.go:130] >       "size": "65908273",
	I0617 11:34:05.727542  148753 command_runner.go:130] >       "uid": null,
	I0617 11:34:05.727549  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.727553  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.727557  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.727560  148753 command_runner.go:130] >     },
	I0617 11:34:05.727565  148753 command_runner.go:130] >     {
	I0617 11:34:05.727575  148753 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0617 11:34:05.727585  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.727592  148753 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0617 11:34:05.727602  148753 command_runner.go:130] >       ],
	I0617 11:34:05.727610  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.727621  148753 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0617 11:34:05.727632  148753 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0617 11:34:05.727638  148753 command_runner.go:130] >       ],
	I0617 11:34:05.727643  148753 command_runner.go:130] >       "size": "1363676",
	I0617 11:34:05.727649  148753 command_runner.go:130] >       "uid": null,
	I0617 11:34:05.727653  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.727660  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.727664  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.727667  148753 command_runner.go:130] >     },
	I0617 11:34:05.727673  148753 command_runner.go:130] >     {
	I0617 11:34:05.727679  148753 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0617 11:34:05.727685  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.727690  148753 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0617 11:34:05.727696  148753 command_runner.go:130] >       ],
	I0617 11:34:05.727703  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.727713  148753 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0617 11:34:05.727727  148753 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0617 11:34:05.727733  148753 command_runner.go:130] >       ],
	I0617 11:34:05.727737  148753 command_runner.go:130] >       "size": "31470524",
	I0617 11:34:05.727743  148753 command_runner.go:130] >       "uid": null,
	I0617 11:34:05.727748  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.727754  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.727758  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.727764  148753 command_runner.go:130] >     },
	I0617 11:34:05.727772  148753 command_runner.go:130] >     {
	I0617 11:34:05.727781  148753 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0617 11:34:05.727785  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.727790  148753 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0617 11:34:05.727796  148753 command_runner.go:130] >       ],
	I0617 11:34:05.727800  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.727810  148753 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0617 11:34:05.727819  148753 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0617 11:34:05.727825  148753 command_runner.go:130] >       ],
	I0617 11:34:05.727829  148753 command_runner.go:130] >       "size": "61245718",
	I0617 11:34:05.727835  148753 command_runner.go:130] >       "uid": null,
	I0617 11:34:05.727840  148753 command_runner.go:130] >       "username": "nonroot",
	I0617 11:34:05.727846  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.727857  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.727862  148753 command_runner.go:130] >     },
	I0617 11:34:05.727865  148753 command_runner.go:130] >     {
	I0617 11:34:05.727872  148753 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0617 11:34:05.727878  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.727882  148753 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0617 11:34:05.727888  148753 command_runner.go:130] >       ],
	I0617 11:34:05.727892  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.727901  148753 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0617 11:34:05.727910  148753 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0617 11:34:05.727916  148753 command_runner.go:130] >       ],
	I0617 11:34:05.727920  148753 command_runner.go:130] >       "size": "150779692",
	I0617 11:34:05.727926  148753 command_runner.go:130] >       "uid": {
	I0617 11:34:05.727930  148753 command_runner.go:130] >         "value": "0"
	I0617 11:34:05.727936  148753 command_runner.go:130] >       },
	I0617 11:34:05.727940  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.727946  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.727951  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.727957  148753 command_runner.go:130] >     },
	I0617 11:34:05.727960  148753 command_runner.go:130] >     {
	I0617 11:34:05.727966  148753 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0617 11:34:05.727972  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.727977  148753 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0617 11:34:05.727983  148753 command_runner.go:130] >       ],
	I0617 11:34:05.727987  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.727996  148753 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0617 11:34:05.728005  148753 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0617 11:34:05.728011  148753 command_runner.go:130] >       ],
	I0617 11:34:05.728016  148753 command_runner.go:130] >       "size": "117601759",
	I0617 11:34:05.728021  148753 command_runner.go:130] >       "uid": {
	I0617 11:34:05.728025  148753 command_runner.go:130] >         "value": "0"
	I0617 11:34:05.728031  148753 command_runner.go:130] >       },
	I0617 11:34:05.728035  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.728041  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.728045  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.728051  148753 command_runner.go:130] >     },
	I0617 11:34:05.728055  148753 command_runner.go:130] >     {
	I0617 11:34:05.728063  148753 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0617 11:34:05.728069  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.728075  148753 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0617 11:34:05.728081  148753 command_runner.go:130] >       ],
	I0617 11:34:05.728085  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.728100  148753 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0617 11:34:05.728110  148753 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0617 11:34:05.728116  148753 command_runner.go:130] >       ],
	I0617 11:34:05.728120  148753 command_runner.go:130] >       "size": "112170310",
	I0617 11:34:05.728126  148753 command_runner.go:130] >       "uid": {
	I0617 11:34:05.728130  148753 command_runner.go:130] >         "value": "0"
	I0617 11:34:05.728133  148753 command_runner.go:130] >       },
	I0617 11:34:05.728139  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.728144  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.728150  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.728153  148753 command_runner.go:130] >     },
	I0617 11:34:05.728159  148753 command_runner.go:130] >     {
	I0617 11:34:05.728164  148753 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0617 11:34:05.728170  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.728175  148753 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0617 11:34:05.728181  148753 command_runner.go:130] >       ],
	I0617 11:34:05.728185  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.728194  148753 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0617 11:34:05.728204  148753 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0617 11:34:05.728209  148753 command_runner.go:130] >       ],
	I0617 11:34:05.728213  148753 command_runner.go:130] >       "size": "85933465",
	I0617 11:34:05.728219  148753 command_runner.go:130] >       "uid": null,
	I0617 11:34:05.728223  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.728230  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.728233  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.728239  148753 command_runner.go:130] >     },
	I0617 11:34:05.728242  148753 command_runner.go:130] >     {
	I0617 11:34:05.728250  148753 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0617 11:34:05.728256  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.728261  148753 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0617 11:34:05.728268  148753 command_runner.go:130] >       ],
	I0617 11:34:05.728272  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.728280  148753 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0617 11:34:05.728289  148753 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0617 11:34:05.728295  148753 command_runner.go:130] >       ],
	I0617 11:34:05.728299  148753 command_runner.go:130] >       "size": "63026504",
	I0617 11:34:05.728306  148753 command_runner.go:130] >       "uid": {
	I0617 11:34:05.728309  148753 command_runner.go:130] >         "value": "0"
	I0617 11:34:05.728315  148753 command_runner.go:130] >       },
	I0617 11:34:05.728318  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.728325  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.728330  148753 command_runner.go:130] >       "pinned": false
	I0617 11:34:05.728336  148753 command_runner.go:130] >     },
	I0617 11:34:05.728339  148753 command_runner.go:130] >     {
	I0617 11:34:05.728350  148753 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0617 11:34:05.728356  148753 command_runner.go:130] >       "repoTags": [
	I0617 11:34:05.728360  148753 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0617 11:34:05.728366  148753 command_runner.go:130] >       ],
	I0617 11:34:05.728370  148753 command_runner.go:130] >       "repoDigests": [
	I0617 11:34:05.728379  148753 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0617 11:34:05.728388  148753 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0617 11:34:05.728394  148753 command_runner.go:130] >       ],
	I0617 11:34:05.728398  148753 command_runner.go:130] >       "size": "750414",
	I0617 11:34:05.728402  148753 command_runner.go:130] >       "uid": {
	I0617 11:34:05.728408  148753 command_runner.go:130] >         "value": "65535"
	I0617 11:34:05.728412  148753 command_runner.go:130] >       },
	I0617 11:34:05.728418  148753 command_runner.go:130] >       "username": "",
	I0617 11:34:05.728422  148753 command_runner.go:130] >       "spec": null,
	I0617 11:34:05.728428  148753 command_runner.go:130] >       "pinned": true
	I0617 11:34:05.728431  148753 command_runner.go:130] >     }
	I0617 11:34:05.728436  148753 command_runner.go:130] >   ]
	I0617 11:34:05.728439  148753 command_runner.go:130] > }
	I0617 11:34:05.728569  148753 crio.go:514] all images are preloaded for cri-o runtime.
	I0617 11:34:05.728583  148753 cache_images.go:84] Images are preloaded, skipping loading
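	The two listings above are the raw output of "sudo crictl images --output json"; minikube only checks that the tags it expects are already present before deciding to skip the preload extraction and image loading. As a minimal sketch (not minikube's actual code; the struct below simply mirrors the JSON shape shown above), that output can be decoded in Go like this:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageList mirrors the JSON emitted by `crictl images --output json` above.
	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"`
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		// Assumes crictl is installed and can reach the CRI-O socket.
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			fmt.Println(img.RepoTags, img.Size)
		}
	}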
	I0617 11:34:05.728590  148753 kubeadm.go:928] updating node { 192.168.39.17 8443 v1.30.1 crio true true} ...
	I0617 11:34:05.728696  148753 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-353869 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-353869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
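	The [Unit]/[Service] fragment above is the kubelet systemd drop-in minikube renders for this node (note the node-specific --hostname-override and --node-ip flags). A rough, hypothetical Go sketch of assembling such a drop-in from those values (the helper name and target path are assumptions for illustration, not minikube's implementation):

	package main

	import (
		"fmt"
		"os"
	)

	// writeKubeletDropIn renders a kubelet systemd drop-in like the one logged above.
	// The target path and function name are illustrative assumptions.
	func writeKubeletDropIn(nodeName, nodeIP, kubeletBin string) error {
		unit := fmt.Sprintf(`[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=%s --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

	[Install]
	`, kubeletBin, nodeName, nodeIP)
		return os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(unit), 0o644)
	}

	func main() {
		if err := writeKubeletDropIn("multinode-353869", "192.168.39.17", "/var/lib/minikube/binaries/v1.30.1/kubelet"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}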
	I0617 11:34:05.728761  148753 ssh_runner.go:195] Run: crio config
	I0617 11:34:05.760592  148753 command_runner.go:130] ! time="2024-06-17 11:34:05.740729828Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0617 11:34:05.766338  148753 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0617 11:34:05.773866  148753 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0617 11:34:05.773890  148753 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0617 11:34:05.773901  148753 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0617 11:34:05.773904  148753 command_runner.go:130] > #
	I0617 11:34:05.773910  148753 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0617 11:34:05.773919  148753 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0617 11:34:05.773925  148753 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0617 11:34:05.773932  148753 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0617 11:34:05.773937  148753 command_runner.go:130] > # reload'.
	I0617 11:34:05.773944  148753 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0617 11:34:05.773950  148753 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0617 11:34:05.773958  148753 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0617 11:34:05.773966  148753 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0617 11:34:05.773970  148753 command_runner.go:130] > [crio]
	I0617 11:34:05.773976  148753 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0617 11:34:05.773983  148753 command_runner.go:130] > # containers images, in this directory.
	I0617 11:34:05.773987  148753 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0617 11:34:05.773995  148753 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0617 11:34:05.774003  148753 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0617 11:34:05.774010  148753 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from the root directory.
	I0617 11:34:05.774018  148753 command_runner.go:130] > # imagestore = ""
	I0617 11:34:05.774025  148753 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0617 11:34:05.774034  148753 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0617 11:34:05.774040  148753 command_runner.go:130] > storage_driver = "overlay"
	I0617 11:34:05.774048  148753 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0617 11:34:05.774054  148753 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0617 11:34:05.774061  148753 command_runner.go:130] > storage_option = [
	I0617 11:34:05.774065  148753 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0617 11:34:05.774071  148753 command_runner.go:130] > ]
	I0617 11:34:05.774077  148753 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0617 11:34:05.774085  148753 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0617 11:34:05.774092  148753 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0617 11:34:05.774098  148753 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0617 11:34:05.774106  148753 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0617 11:34:05.774113  148753 command_runner.go:130] > # always happen on a node reboot
	I0617 11:34:05.774117  148753 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0617 11:34:05.774127  148753 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0617 11:34:05.774135  148753 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0617 11:34:05.774140  148753 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0617 11:34:05.774147  148753 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0617 11:34:05.774154  148753 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0617 11:34:05.774164  148753 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0617 11:34:05.774170  148753 command_runner.go:130] > # internal_wipe = true
	I0617 11:34:05.774178  148753 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0617 11:34:05.774185  148753 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0617 11:34:05.774191  148753 command_runner.go:130] > # internal_repair = false
	I0617 11:34:05.774196  148753 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0617 11:34:05.774206  148753 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0617 11:34:05.774214  148753 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0617 11:34:05.774222  148753 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0617 11:34:05.774232  148753 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0617 11:34:05.774239  148753 command_runner.go:130] > [crio.api]
	I0617 11:34:05.774244  148753 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0617 11:34:05.774251  148753 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0617 11:34:05.774256  148753 command_runner.go:130] > # IP address on which the stream server will listen.
	I0617 11:34:05.774264  148753 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0617 11:34:05.774273  148753 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0617 11:34:05.774281  148753 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0617 11:34:05.774285  148753 command_runner.go:130] > # stream_port = "0"
	I0617 11:34:05.774293  148753 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0617 11:34:05.774297  148753 command_runner.go:130] > # stream_enable_tls = false
	I0617 11:34:05.774305  148753 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0617 11:34:05.774312  148753 command_runner.go:130] > # stream_idle_timeout = ""
	I0617 11:34:05.774318  148753 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0617 11:34:05.774327  148753 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0617 11:34:05.774332  148753 command_runner.go:130] > # minutes.
	I0617 11:34:05.774338  148753 command_runner.go:130] > # stream_tls_cert = ""
	I0617 11:34:05.774344  148753 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0617 11:34:05.774352  148753 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0617 11:34:05.774358  148753 command_runner.go:130] > # stream_tls_key = ""
	I0617 11:34:05.774365  148753 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0617 11:34:05.774373  148753 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0617 11:34:05.774388  148753 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0617 11:34:05.774394  148753 command_runner.go:130] > # stream_tls_ca = ""
	I0617 11:34:05.774402  148753 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0617 11:34:05.774409  148753 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0617 11:34:05.774416  148753 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0617 11:34:05.774423  148753 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
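	The two grpc_max_*_msg_size values above cap CRI-O's gRPC messages at 16 MiB rather than the 80 MiB default. A client talking to the CRI socket should use matching call options; a minimal sketch with google.golang.org/grpc (the socket path is CRI-O's default listen address from the [crio.api] section above; everything else is illustrative):

	package main

	import (
		"context"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
	)

	const maxMsgSize = 16 * 1024 * 1024 // matches grpc_max_*_msg_size = 16777216 above

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Dial the CRI-O socket with send/recv limits that match the server config.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()),
			grpc.WithDefaultCallOptions(
				grpc.MaxCallRecvMsgSize(maxMsgSize),
				grpc.MaxCallSendMsgSize(maxMsgSize),
			),
		)
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
		log.Println("connected:", conn.Target())
	}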
	I0617 11:34:05.774429  148753 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0617 11:34:05.774436  148753 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0617 11:34:05.774442  148753 command_runner.go:130] > [crio.runtime]
	I0617 11:34:05.774449  148753 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0617 11:34:05.774456  148753 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0617 11:34:05.774484  148753 command_runner.go:130] > # "nofile=1024:2048"
	I0617 11:34:05.774497  148753 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0617 11:34:05.774501  148753 command_runner.go:130] > # default_ulimits = [
	I0617 11:34:05.774507  148753 command_runner.go:130] > # ]
	I0617 11:34:05.774513  148753 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0617 11:34:05.774518  148753 command_runner.go:130] > # no_pivot = false
	I0617 11:34:05.774524  148753 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0617 11:34:05.774532  148753 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0617 11:34:05.774540  148753 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0617 11:34:05.774545  148753 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0617 11:34:05.774552  148753 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0617 11:34:05.774559  148753 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0617 11:34:05.774566  148753 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0617 11:34:05.774571  148753 command_runner.go:130] > # Cgroup setting for conmon
	I0617 11:34:05.774579  148753 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0617 11:34:05.774586  148753 command_runner.go:130] > conmon_cgroup = "pod"
	I0617 11:34:05.774592  148753 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0617 11:34:05.774599  148753 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0617 11:34:05.774606  148753 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0617 11:34:05.774611  148753 command_runner.go:130] > conmon_env = [
	I0617 11:34:05.774616  148753 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0617 11:34:05.774622  148753 command_runner.go:130] > ]
	I0617 11:34:05.774627  148753 command_runner.go:130] > # Additional environment variables to set for all the
	I0617 11:34:05.774634  148753 command_runner.go:130] > # containers. These are overridden if set in the
	I0617 11:34:05.774640  148753 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0617 11:34:05.774647  148753 command_runner.go:130] > # default_env = [
	I0617 11:34:05.774650  148753 command_runner.go:130] > # ]
	I0617 11:34:05.774657  148753 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0617 11:34:05.774667  148753 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0617 11:34:05.774673  148753 command_runner.go:130] > # selinux = false
	I0617 11:34:05.774679  148753 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0617 11:34:05.774687  148753 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0617 11:34:05.774693  148753 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0617 11:34:05.774699  148753 command_runner.go:130] > # seccomp_profile = ""
	I0617 11:34:05.774705  148753 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0617 11:34:05.774711  148753 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0617 11:34:05.774720  148753 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0617 11:34:05.774726  148753 command_runner.go:130] > # which might increase security.
	I0617 11:34:05.774730  148753 command_runner.go:130] > # This option is currently deprecated,
	I0617 11:34:05.774738  148753 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0617 11:34:05.774745  148753 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0617 11:34:05.774751  148753 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0617 11:34:05.774760  148753 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0617 11:34:05.774775  148753 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0617 11:34:05.774783  148753 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0617 11:34:05.774787  148753 command_runner.go:130] > # This option supports live configuration reload.
	I0617 11:34:05.774794  148753 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0617 11:34:05.774799  148753 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0617 11:34:05.774806  148753 command_runner.go:130] > # the cgroup blockio controller.
	I0617 11:34:05.774810  148753 command_runner.go:130] > # blockio_config_file = ""
	I0617 11:34:05.774819  148753 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0617 11:34:05.774825  148753 command_runner.go:130] > # blockio parameters.
	I0617 11:34:05.774829  148753 command_runner.go:130] > # blockio_reload = false
	I0617 11:34:05.774838  148753 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0617 11:34:05.774843  148753 command_runner.go:130] > # irqbalance daemon.
	I0617 11:34:05.774848  148753 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0617 11:34:05.774856  148753 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0617 11:34:05.774865  148753 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0617 11:34:05.774874  148753 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0617 11:34:05.774881  148753 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0617 11:34:05.774887  148753 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0617 11:34:05.774894  148753 command_runner.go:130] > # This option supports live configuration reload.
	I0617 11:34:05.774898  148753 command_runner.go:130] > # rdt_config_file = ""
	I0617 11:34:05.774904  148753 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0617 11:34:05.774910  148753 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0617 11:34:05.774928  148753 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0617 11:34:05.774935  148753 command_runner.go:130] > # separate_pull_cgroup = ""
	I0617 11:34:05.774941  148753 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0617 11:34:05.774950  148753 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0617 11:34:05.774956  148753 command_runner.go:130] > # will be added.
	I0617 11:34:05.774961  148753 command_runner.go:130] > # default_capabilities = [
	I0617 11:34:05.774967  148753 command_runner.go:130] > # 	"CHOWN",
	I0617 11:34:05.774971  148753 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0617 11:34:05.774977  148753 command_runner.go:130] > # 	"FSETID",
	I0617 11:34:05.774981  148753 command_runner.go:130] > # 	"FOWNER",
	I0617 11:34:05.774987  148753 command_runner.go:130] > # 	"SETGID",
	I0617 11:34:05.774990  148753 command_runner.go:130] > # 	"SETUID",
	I0617 11:34:05.774996  148753 command_runner.go:130] > # 	"SETPCAP",
	I0617 11:34:05.775000  148753 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0617 11:34:05.775006  148753 command_runner.go:130] > # 	"KILL",
	I0617 11:34:05.775009  148753 command_runner.go:130] > # ]
	I0617 11:34:05.775016  148753 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0617 11:34:05.775025  148753 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0617 11:34:05.775032  148753 command_runner.go:130] > # add_inheritable_capabilities = false
	I0617 11:34:05.775039  148753 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0617 11:34:05.775047  148753 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0617 11:34:05.775054  148753 command_runner.go:130] > default_sysctls = [
	I0617 11:34:05.775058  148753 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0617 11:34:05.775064  148753 command_runner.go:130] > ]
	I0617 11:34:05.775069  148753 command_runner.go:130] > # List of devices on the host that a
	I0617 11:34:05.775077  148753 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0617 11:34:05.775082  148753 command_runner.go:130] > # allowed_devices = [
	I0617 11:34:05.775085  148753 command_runner.go:130] > # 	"/dev/fuse",
	I0617 11:34:05.775091  148753 command_runner.go:130] > # ]
	I0617 11:34:05.775096  148753 command_runner.go:130] > # List of additional devices, specified as
	I0617 11:34:05.775105  148753 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0617 11:34:05.775113  148753 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0617 11:34:05.775121  148753 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0617 11:34:05.775125  148753 command_runner.go:130] > # additional_devices = [
	I0617 11:34:05.775130  148753 command_runner.go:130] > # ]
	I0617 11:34:05.775135  148753 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0617 11:34:05.775141  148753 command_runner.go:130] > # cdi_spec_dirs = [
	I0617 11:34:05.775145  148753 command_runner.go:130] > # 	"/etc/cdi",
	I0617 11:34:05.775151  148753 command_runner.go:130] > # 	"/var/run/cdi",
	I0617 11:34:05.775154  148753 command_runner.go:130] > # ]
	I0617 11:34:05.775161  148753 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0617 11:34:05.775169  148753 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0617 11:34:05.775173  148753 command_runner.go:130] > # Defaults to false.
	I0617 11:34:05.775178  148753 command_runner.go:130] > # device_ownership_from_security_context = false
	I0617 11:34:05.775187  148753 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0617 11:34:05.775195  148753 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0617 11:34:05.775199  148753 command_runner.go:130] > # hooks_dir = [
	I0617 11:34:05.775204  148753 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0617 11:34:05.775209  148753 command_runner.go:130] > # ]
	I0617 11:34:05.775215  148753 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0617 11:34:05.775223  148753 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0617 11:34:05.775228  148753 command_runner.go:130] > # its default mounts from the following two files:
	I0617 11:34:05.775233  148753 command_runner.go:130] > #
	I0617 11:34:05.775239  148753 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0617 11:34:05.775247  148753 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0617 11:34:05.775253  148753 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0617 11:34:05.775258  148753 command_runner.go:130] > #
	I0617 11:34:05.775264  148753 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0617 11:34:05.775272  148753 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0617 11:34:05.775280  148753 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0617 11:34:05.775287  148753 command_runner.go:130] > #      only add mounts it finds in this file.
	I0617 11:34:05.775291  148753 command_runner.go:130] > #
	I0617 11:34:05.775295  148753 command_runner.go:130] > # default_mounts_file = ""
	I0617 11:34:05.775302  148753 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0617 11:34:05.775308  148753 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0617 11:34:05.775314  148753 command_runner.go:130] > pids_limit = 1024
	I0617 11:34:05.775320  148753 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0617 11:34:05.775328  148753 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0617 11:34:05.775337  148753 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0617 11:34:05.775347  148753 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0617 11:34:05.775353  148753 command_runner.go:130] > # log_size_max = -1
	I0617 11:34:05.775359  148753 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0617 11:34:05.775366  148753 command_runner.go:130] > # log_to_journald = false
	I0617 11:34:05.775372  148753 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0617 11:34:05.775379  148753 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0617 11:34:05.775384  148753 command_runner.go:130] > # Path to directory for container attach sockets.
	I0617 11:34:05.775391  148753 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0617 11:34:05.775396  148753 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0617 11:34:05.775402  148753 command_runner.go:130] > # bind_mount_prefix = ""
	I0617 11:34:05.775410  148753 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0617 11:34:05.775416  148753 command_runner.go:130] > # read_only = false
	I0617 11:34:05.775422  148753 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0617 11:34:05.775430  148753 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0617 11:34:05.775434  148753 command_runner.go:130] > # live configuration reload.
	I0617 11:34:05.775438  148753 command_runner.go:130] > # log_level = "info"
	I0617 11:34:05.775444  148753 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0617 11:34:05.775471  148753 command_runner.go:130] > # This option supports live configuration reload.
	I0617 11:34:05.775481  148753 command_runner.go:130] > # log_filter = ""
	I0617 11:34:05.775487  148753 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0617 11:34:05.775496  148753 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0617 11:34:05.775502  148753 command_runner.go:130] > # separated by comma.
	I0617 11:34:05.775509  148753 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0617 11:34:05.775515  148753 command_runner.go:130] > # uid_mappings = ""
	I0617 11:34:05.775521  148753 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0617 11:34:05.775528  148753 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0617 11:34:05.775535  148753 command_runner.go:130] > # separated by comma.
	I0617 11:34:05.775543  148753 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0617 11:34:05.775549  148753 command_runner.go:130] > # gid_mappings = ""
	I0617 11:34:05.775555  148753 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0617 11:34:05.775563  148753 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0617 11:34:05.775569  148753 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0617 11:34:05.775579  148753 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0617 11:34:05.775584  148753 command_runner.go:130] > # minimum_mappable_uid = -1
	I0617 11:34:05.775590  148753 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0617 11:34:05.775598  148753 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0617 11:34:05.775606  148753 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0617 11:34:05.775613  148753 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0617 11:34:05.775628  148753 command_runner.go:130] > # minimum_mappable_gid = -1
	I0617 11:34:05.775634  148753 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0617 11:34:05.775641  148753 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0617 11:34:05.775649  148753 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0617 11:34:05.775655  148753 command_runner.go:130] > # ctr_stop_timeout = 30
	I0617 11:34:05.775661  148753 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0617 11:34:05.775669  148753 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0617 11:34:05.775673  148753 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0617 11:34:05.775681  148753 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0617 11:34:05.775687  148753 command_runner.go:130] > drop_infra_ctr = false
	I0617 11:34:05.775693  148753 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0617 11:34:05.775701  148753 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0617 11:34:05.775710  148753 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0617 11:34:05.775716  148753 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0617 11:34:05.775723  148753 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I0617 11:34:05.775730  148753 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0617 11:34:05.775736  148753 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0617 11:34:05.775743  148753 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0617 11:34:05.775747  148753 command_runner.go:130] > # shared_cpuset = ""
	I0617 11:34:05.775755  148753 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0617 11:34:05.775762  148753 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0617 11:34:05.775770  148753 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0617 11:34:05.775778  148753 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0617 11:34:05.775783  148753 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0617 11:34:05.775790  148753 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0617 11:34:05.775796  148753 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0617 11:34:05.775802  148753 command_runner.go:130] > # enable_criu_support = false
	I0617 11:34:05.775807  148753 command_runner.go:130] > # Enable/disable the generation of the container,
	I0617 11:34:05.775815  148753 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0617 11:34:05.775819  148753 command_runner.go:130] > # enable_pod_events = false
	I0617 11:34:05.775827  148753 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0617 11:34:05.775842  148753 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0617 11:34:05.775848  148753 command_runner.go:130] > # default_runtime = "runc"
	I0617 11:34:05.775853  148753 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0617 11:34:05.775862  148753 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0617 11:34:05.775873  148753 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0617 11:34:05.775880  148753 command_runner.go:130] > # creation as a file is not desired either.
	I0617 11:34:05.775888  148753 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0617 11:34:05.775896  148753 command_runner.go:130] > # the hostname is being managed dynamically.
	I0617 11:34:05.775902  148753 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0617 11:34:05.775905  148753 command_runner.go:130] > # ]
	I0617 11:34:05.775911  148753 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0617 11:34:05.775919  148753 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0617 11:34:05.775927  148753 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0617 11:34:05.775934  148753 command_runner.go:130] > # Each entry in the table should follow the format:
	I0617 11:34:05.775937  148753 command_runner.go:130] > #
	I0617 11:34:05.775942  148753 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0617 11:34:05.775949  148753 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0617 11:34:05.775969  148753 command_runner.go:130] > # runtime_type = "oci"
	I0617 11:34:05.775976  148753 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0617 11:34:05.775981  148753 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0617 11:34:05.775987  148753 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0617 11:34:05.775992  148753 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0617 11:34:05.775998  148753 command_runner.go:130] > # monitor_env = []
	I0617 11:34:05.776003  148753 command_runner.go:130] > # privileged_without_host_devices = false
	I0617 11:34:05.776009  148753 command_runner.go:130] > # allowed_annotations = []
	I0617 11:34:05.776014  148753 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0617 11:34:05.776020  148753 command_runner.go:130] > # Where:
	I0617 11:34:05.776025  148753 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0617 11:34:05.776033  148753 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0617 11:34:05.776042  148753 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0617 11:34:05.776050  148753 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0617 11:34:05.776056  148753 command_runner.go:130] > #   in $PATH.
	I0617 11:34:05.776062  148753 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0617 11:34:05.776070  148753 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0617 11:34:05.776078  148753 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0617 11:34:05.776084  148753 command_runner.go:130] > #   state.
	I0617 11:34:05.776090  148753 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0617 11:34:05.776099  148753 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0617 11:34:05.776108  148753 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0617 11:34:05.776115  148753 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0617 11:34:05.776123  148753 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0617 11:34:05.776129  148753 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0617 11:34:05.776137  148753 command_runner.go:130] > #   The currently recognized values are:
	I0617 11:34:05.776143  148753 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0617 11:34:05.776152  148753 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0617 11:34:05.776160  148753 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0617 11:34:05.776169  148753 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0617 11:34:05.776178  148753 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0617 11:34:05.776187  148753 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0617 11:34:05.776196  148753 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0617 11:34:05.776201  148753 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0617 11:34:05.776209  148753 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0617 11:34:05.776217  148753 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0617 11:34:05.776223  148753 command_runner.go:130] > #   deprecated option "conmon".
	I0617 11:34:05.776230  148753 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0617 11:34:05.776237  148753 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0617 11:34:05.776243  148753 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0617 11:34:05.776250  148753 command_runner.go:130] > #   should be moved to the container's cgroup
	I0617 11:34:05.776256  148753 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0617 11:34:05.776264  148753 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0617 11:34:05.776270  148753 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0617 11:34:05.776277  148753 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0617 11:34:05.776280  148753 command_runner.go:130] > #
	I0617 11:34:05.776287  148753 command_runner.go:130] > # Using the seccomp notifier feature:
	I0617 11:34:05.776290  148753 command_runner.go:130] > #
	I0617 11:34:05.776296  148753 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0617 11:34:05.776304  148753 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0617 11:34:05.776310  148753 command_runner.go:130] > #
	I0617 11:34:05.776316  148753 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0617 11:34:05.776324  148753 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0617 11:34:05.776328  148753 command_runner.go:130] > #
	I0617 11:34:05.776334  148753 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0617 11:34:05.776339  148753 command_runner.go:130] > # feature.
	I0617 11:34:05.776342  148753 command_runner.go:130] > #
	I0617 11:34:05.776350  148753 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0617 11:34:05.776356  148753 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0617 11:34:05.776364  148753 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0617 11:34:05.776373  148753 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0617 11:34:05.776379  148753 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0617 11:34:05.776382  148753 command_runner.go:130] > #
	I0617 11:34:05.776390  148753 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0617 11:34:05.776396  148753 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0617 11:34:05.776401  148753 command_runner.go:130] > #
	I0617 11:34:05.776407  148753 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0617 11:34:05.776417  148753 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0617 11:34:05.776422  148753 command_runner.go:130] > #
	I0617 11:34:05.776428  148753 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0617 11:34:05.776436  148753 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0617 11:34:05.776440  148753 command_runner.go:130] > # limitation.
	I0617 11:34:05.776445  148753 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0617 11:34:05.776451  148753 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0617 11:34:05.776455  148753 command_runner.go:130] > runtime_type = "oci"
	I0617 11:34:05.776460  148753 command_runner.go:130] > runtime_root = "/run/runc"
	I0617 11:34:05.776464  148753 command_runner.go:130] > runtime_config_path = ""
	I0617 11:34:05.776468  148753 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0617 11:34:05.776475  148753 command_runner.go:130] > monitor_cgroup = "pod"
	I0617 11:34:05.776479  148753 command_runner.go:130] > monitor_exec_cgroup = ""
	I0617 11:34:05.776485  148753 command_runner.go:130] > monitor_env = [
	I0617 11:34:05.776491  148753 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0617 11:34:05.776496  148753 command_runner.go:130] > ]
	I0617 11:34:05.776501  148753 command_runner.go:130] > privileged_without_host_devices = false
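The comments dumped above describe the seccomp notifier feature, but the runc handler in this run lists no allowed_annotations, so it is effectively off. As a minimal sketch of what enabling it would look like (the drop-in file name, pod name and image are assumptions, not taken from this run), the handler has to allow the annotation and the pod has to set it with restartPolicy: Never:

# Assumed CRI-O drop-in; files under /etc/crio/crio.conf.d/ are merged over crio.conf.
sudo tee /etc/crio/crio.conf.d/10-seccomp-notifier.conf <<'EOF'
[crio.runtime.runtimes.runc]
runtime_path = "/usr/bin/runc"
monitor_path = "/usr/libexec/crio/conmon"
allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]
EOF
sudo systemctl restart crio

# Hypothetical pod opting in; restartPolicy must be Never, otherwise the kubelet
# restarts the container before the notifier can stop it (see the comments above).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-notifier-demo
  annotations:
    io.kubernetes.cri-o.seccompNotifierAction: "stop"
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      seccompProfile:
        type: RuntimeDefault
EOF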
	I0617 11:34:05.776509  148753 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0617 11:34:05.776517  148753 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0617 11:34:05.776525  148753 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0617 11:34:05.776533  148753 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0617 11:34:05.776542  148753 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0617 11:34:05.776550  148753 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0617 11:34:05.776558  148753 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0617 11:34:05.776567  148753 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0617 11:34:05.776573  148753 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0617 11:34:05.776580  148753 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0617 11:34:05.776583  148753 command_runner.go:130] > # Example:
	I0617 11:34:05.776588  148753 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0617 11:34:05.776592  148753 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0617 11:34:05.776596  148753 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0617 11:34:05.776601  148753 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0617 11:34:05.776604  148753 command_runner.go:130] > # cpuset = 0
	I0617 11:34:05.776607  148753 command_runner.go:130] > # cpushares = "0-1"
	I0617 11:34:05.776610  148753 command_runner.go:130] > # Where:
	I0617 11:34:05.776615  148753 command_runner.go:130] > # The workload name is workload-type.
	I0617 11:34:05.776621  148753 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0617 11:34:05.776627  148753 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0617 11:34:05.776632  148753 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0617 11:34:05.776639  148753 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0617 11:34:05.776646  148753 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
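The workloads table explained above is entirely commented out in this configuration, so it is documentation only here. Assuming the example [crio.runtime.workloads.workload-type] stanza from those comments were actually enabled, the pod side is just the activation annotation (key only, value ignored); the pod and container names below are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: workload-demo
  annotations:
    # Activation annotation from the commented example; the value is ignored.
    io.crio/workload: ""
    # Per-container overrides use the annotation_prefix, in the form shown in the
    # comments above, e.g. io.crio.workload-type/<container> = {"cpushares": "..."}.
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
EOF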
	I0617 11:34:05.776651  148753 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0617 11:34:05.776660  148753 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0617 11:34:05.776666  148753 command_runner.go:130] > # Default value is set to true
	I0617 11:34:05.776670  148753 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0617 11:34:05.776676  148753 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0617 11:34:05.776683  148753 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0617 11:34:05.776687  148753 command_runner.go:130] > # Default value is set to 'false'
	I0617 11:34:05.776695  148753 command_runner.go:130] > # disable_hostport_mapping = false
	I0617 11:34:05.776701  148753 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0617 11:34:05.776706  148753 command_runner.go:130] > #
	I0617 11:34:05.776712  148753 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0617 11:34:05.776720  148753 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0617 11:34:05.776729  148753 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0617 11:34:05.776737  148753 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0617 11:34:05.776745  148753 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0617 11:34:05.776751  148753 command_runner.go:130] > [crio.image]
	I0617 11:34:05.776756  148753 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0617 11:34:05.776762  148753 command_runner.go:130] > # default_transport = "docker://"
	I0617 11:34:05.776772  148753 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0617 11:34:05.776780  148753 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0617 11:34:05.776784  148753 command_runner.go:130] > # global_auth_file = ""
	I0617 11:34:05.776791  148753 command_runner.go:130] > # The image used to instantiate infra containers.
	I0617 11:34:05.776796  148753 command_runner.go:130] > # This option supports live configuration reload.
	I0617 11:34:05.776803  148753 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0617 11:34:05.776825  148753 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0617 11:34:05.776836  148753 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0617 11:34:05.776843  148753 command_runner.go:130] > # This option supports live configuration reload.
	I0617 11:34:05.776848  148753 command_runner.go:130] > # pause_image_auth_file = ""
	I0617 11:34:05.776856  148753 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0617 11:34:05.776864  148753 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0617 11:34:05.776872  148753 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0617 11:34:05.776882  148753 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0617 11:34:05.776886  148753 command_runner.go:130] > # pause_command = "/pause"
	I0617 11:34:05.776894  148753 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0617 11:34:05.776902  148753 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0617 11:34:05.776910  148753 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0617 11:34:05.776919  148753 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0617 11:34:05.776927  148753 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0617 11:34:05.776935  148753 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0617 11:34:05.776942  148753 command_runner.go:130] > # pinned_images = [
	I0617 11:34:05.776945  148753 command_runner.go:130] > # ]
	I0617 11:34:05.776953  148753 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0617 11:34:05.776961  148753 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0617 11:34:05.776969  148753 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0617 11:34:05.776977  148753 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0617 11:34:05.776984  148753 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0617 11:34:05.776990  148753 command_runner.go:130] > # signature_policy = ""
	I0617 11:34:05.776995  148753 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0617 11:34:05.777003  148753 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0617 11:34:05.777011  148753 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0617 11:34:05.777020  148753 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0617 11:34:05.777026  148753 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0617 11:34:05.777033  148753 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0617 11:34:05.777039  148753 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0617 11:34:05.777047  148753 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0617 11:34:05.777050  148753 command_runner.go:130] > # changing them here.
	I0617 11:34:05.777057  148753 command_runner.go:130] > # insecure_registries = [
	I0617 11:34:05.777060  148753 command_runner.go:130] > # ]
	I0617 11:34:05.777069  148753 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0617 11:34:05.777076  148753 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0617 11:34:05.777080  148753 command_runner.go:130] > # image_volumes = "mkdir"
	I0617 11:34:05.777087  148753 command_runner.go:130] > # Temporary directory to use for storing big files
	I0617 11:34:05.777092  148753 command_runner.go:130] > # big_files_temporary_dir = ""
	I0617 11:34:05.777100  148753 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0617 11:34:05.777106  148753 command_runner.go:130] > # CNI plugins.
	I0617 11:34:05.777110  148753 command_runner.go:130] > [crio.network]
	I0617 11:34:05.777119  148753 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0617 11:34:05.777127  148753 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0617 11:34:05.777133  148753 command_runner.go:130] > # cni_default_network = ""
	I0617 11:34:05.777139  148753 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0617 11:34:05.777146  148753 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0617 11:34:05.777151  148753 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0617 11:34:05.777157  148753 command_runner.go:130] > # plugin_dirs = [
	I0617 11:34:05.777161  148753 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0617 11:34:05.777166  148753 command_runner.go:130] > # ]
	I0617 11:34:05.777172  148753 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0617 11:34:05.777177  148753 command_runner.go:130] > [crio.metrics]
	I0617 11:34:05.777182  148753 command_runner.go:130] > # Globally enable or disable metrics support.
	I0617 11:34:05.777188  148753 command_runner.go:130] > enable_metrics = true
	I0617 11:34:05.777193  148753 command_runner.go:130] > # Specify enabled metrics collectors.
	I0617 11:34:05.777199  148753 command_runner.go:130] > # Per default all metrics are enabled.
	I0617 11:34:05.777205  148753 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0617 11:34:05.777213  148753 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0617 11:34:05.777222  148753 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0617 11:34:05.777228  148753 command_runner.go:130] > # metrics_collectors = [
	I0617 11:34:05.777232  148753 command_runner.go:130] > # 	"operations",
	I0617 11:34:05.777238  148753 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0617 11:34:05.777243  148753 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0617 11:34:05.777249  148753 command_runner.go:130] > # 	"operations_errors",
	I0617 11:34:05.777253  148753 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0617 11:34:05.777260  148753 command_runner.go:130] > # 	"image_pulls_by_name",
	I0617 11:34:05.777264  148753 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0617 11:34:05.777270  148753 command_runner.go:130] > # 	"image_pulls_failures",
	I0617 11:34:05.777275  148753 command_runner.go:130] > # 	"image_pulls_successes",
	I0617 11:34:05.777281  148753 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0617 11:34:05.777286  148753 command_runner.go:130] > # 	"image_layer_reuse",
	I0617 11:34:05.777292  148753 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0617 11:34:05.777296  148753 command_runner.go:130] > # 	"containers_oom_total",
	I0617 11:34:05.777303  148753 command_runner.go:130] > # 	"containers_oom",
	I0617 11:34:05.777307  148753 command_runner.go:130] > # 	"processes_defunct",
	I0617 11:34:05.777313  148753 command_runner.go:130] > # 	"operations_total",
	I0617 11:34:05.777317  148753 command_runner.go:130] > # 	"operations_latency_seconds",
	I0617 11:34:05.777324  148753 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0617 11:34:05.777329  148753 command_runner.go:130] > # 	"operations_errors_total",
	I0617 11:34:05.777336  148753 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0617 11:34:05.777340  148753 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0617 11:34:05.777347  148753 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0617 11:34:05.777352  148753 command_runner.go:130] > # 	"image_pulls_success_total",
	I0617 11:34:05.777358  148753 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0617 11:34:05.777362  148753 command_runner.go:130] > # 	"containers_oom_count_total",
	I0617 11:34:05.777369  148753 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0617 11:34:05.777373  148753 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0617 11:34:05.777377  148753 command_runner.go:130] > # ]
	I0617 11:34:05.777383  148753 command_runner.go:130] > # The port on which the metrics server will listen.
	I0617 11:34:05.777388  148753 command_runner.go:130] > # metrics_port = 9090
	I0617 11:34:05.777392  148753 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0617 11:34:05.777399  148753 command_runner.go:130] > # metrics_socket = ""
	I0617 11:34:05.777404  148753 command_runner.go:130] > # The certificate for the secure metrics server.
	I0617 11:34:05.777412  148753 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0617 11:34:05.777420  148753 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0617 11:34:05.777427  148753 command_runner.go:130] > # certificate on any modification event.
	I0617 11:34:05.777431  148753 command_runner.go:130] > # metrics_cert = ""
	I0617 11:34:05.777438  148753 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0617 11:34:05.777443  148753 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0617 11:34:05.777449  148753 command_runner.go:130] > # metrics_key = ""
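Since enable_metrics is set to true in this config, the collectors listed above are exported by CRI-O's metrics server. The port below is the commented default of 9090 from this dump, and the path assumes the usual Prometheus /metrics endpoint:

# Quick look at the exported metrics on the node (adjust the port if
# metrics_port was overridden).
curl -s http://127.0.0.1:9090/metrics | head -n 20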
	I0617 11:34:05.777454  148753 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0617 11:34:05.777460  148753 command_runner.go:130] > [crio.tracing]
	I0617 11:34:05.777465  148753 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0617 11:34:05.777469  148753 command_runner.go:130] > # enable_tracing = false
	I0617 11:34:05.777474  148753 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0617 11:34:05.777481  148753 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0617 11:34:05.777487  148753 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0617 11:34:05.777494  148753 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0617 11:34:05.777498  148753 command_runner.go:130] > # CRI-O NRI configuration.
	I0617 11:34:05.777504  148753 command_runner.go:130] > [crio.nri]
	I0617 11:34:05.777508  148753 command_runner.go:130] > # Globally enable or disable NRI.
	I0617 11:34:05.777514  148753 command_runner.go:130] > # enable_nri = false
	I0617 11:34:05.777518  148753 command_runner.go:130] > # NRI socket to listen on.
	I0617 11:34:05.777525  148753 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0617 11:34:05.777530  148753 command_runner.go:130] > # NRI plugin directory to use.
	I0617 11:34:05.777537  148753 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0617 11:34:05.777542  148753 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0617 11:34:05.777548  148753 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0617 11:34:05.777554  148753 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0617 11:34:05.777560  148753 command_runner.go:130] > # nri_disable_connections = false
	I0617 11:34:05.777565  148753 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0617 11:34:05.777572  148753 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0617 11:34:05.777577  148753 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0617 11:34:05.777584  148753 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0617 11:34:05.777589  148753 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0617 11:34:05.777595  148753 command_runner.go:130] > [crio.stats]
	I0617 11:34:05.777601  148753 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0617 11:34:05.777609  148753 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0617 11:34:05.777615  148753 command_runner.go:130] > # stats_collection_period = 0
	I0617 11:34:05.777755  148753 cni.go:84] Creating CNI manager for ""
	I0617 11:34:05.777772  148753 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0617 11:34:05.777784  148753 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 11:34:05.777810  148753 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.17 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-353869 NodeName:multinode-353869 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.17"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.17 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0617 11:34:05.777939  148753 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.17
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-353869"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.17
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.17"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
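minikube writes this rendered config to /var/tmp/minikube/kubeadm.yaml.new (see the scp step just below) and then drives kubeadm itself. As a rough way to sanity-check such a file by hand, one could point the bundled kubeadm binary at it in dry-run mode; this is an assumption about a manual workflow, not something the test does:

# Uses the kubeadm binary the log later finds under /var/lib/minikube/binaries.
sudo /var/lib/minikube/binaries/v1.30.1/kubeadm init \
  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run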
	
	I0617 11:34:05.778001  148753 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0617 11:34:05.788386  148753 command_runner.go:130] > kubeadm
	I0617 11:34:05.788403  148753 command_runner.go:130] > kubectl
	I0617 11:34:05.788408  148753 command_runner.go:130] > kubelet
	I0617 11:34:05.788765  148753 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 11:34:05.788814  148753 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 11:34:05.798341  148753 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0617 11:34:05.814559  148753 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 11:34:05.830635  148753 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0617 11:34:05.846833  148753 ssh_runner.go:195] Run: grep 192.168.39.17	control-plane.minikube.internal$ /etc/hosts
	I0617 11:34:05.850680  148753 command_runner.go:130] > 192.168.39.17	control-plane.minikube.internal
	I0617 11:34:05.850755  148753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:34:05.987496  148753 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 11:34:06.002427  148753 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/multinode-353869 for IP: 192.168.39.17
	I0617 11:34:06.002448  148753 certs.go:194] generating shared ca certs ...
	I0617 11:34:06.002474  148753 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:34:06.002644  148753 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 11:34:06.002680  148753 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 11:34:06.002689  148753 certs.go:256] generating profile certs ...
	I0617 11:34:06.002765  148753 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/multinode-353869/client.key
	I0617 11:34:06.002821  148753 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/multinode-353869/apiserver.key.ffe5146b
	I0617 11:34:06.002853  148753 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/multinode-353869/proxy-client.key
	I0617 11:34:06.002865  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0617 11:34:06.002876  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0617 11:34:06.002889  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0617 11:34:06.002899  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0617 11:34:06.002910  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/multinode-353869/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0617 11:34:06.002923  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/multinode-353869/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0617 11:34:06.002935  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/multinode-353869/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0617 11:34:06.002945  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/multinode-353869/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0617 11:34:06.002993  148753 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 11:34:06.003018  148753 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 11:34:06.003028  148753 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 11:34:06.003055  148753 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 11:34:06.003077  148753 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 11:34:06.003097  148753 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 11:34:06.003136  148753 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 11:34:06.003160  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> /usr/share/ca-certificates/1201742.pem
	I0617 11:34:06.003173  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:34:06.003184  148753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem -> /usr/share/ca-certificates/120174.pem
	I0617 11:34:06.003801  148753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 11:34:06.029586  148753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 11:34:06.053181  148753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 11:34:06.077965  148753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 11:34:06.101778  148753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/multinode-353869/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0617 11:34:06.125114  148753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/multinode-353869/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0617 11:34:06.150937  148753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/multinode-353869/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 11:34:06.174235  148753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/multinode-353869/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0617 11:34:06.197748  148753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 11:34:06.221325  148753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 11:34:06.244293  148753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 11:34:06.267533  148753 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 11:34:06.283878  148753 ssh_runner.go:195] Run: openssl version
	I0617 11:34:06.289469  148753 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0617 11:34:06.289649  148753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 11:34:06.300994  148753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 11:34:06.305282  148753 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 11:34:06.305485  148753 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 11:34:06.305521  148753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 11:34:06.310965  148753 command_runner.go:130] > 3ec20f2e
	I0617 11:34:06.311028  148753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 11:34:06.320734  148753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 11:34:06.330902  148753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:34:06.335130  148753 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:34:06.335234  148753 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:34:06.335276  148753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:34:06.340720  148753 command_runner.go:130] > b5213941
	I0617 11:34:06.340798  148753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 11:34:06.349388  148753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 11:34:06.359297  148753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 11:34:06.363605  148753 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 11:34:06.363673  148753 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 11:34:06.363717  148753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 11:34:06.393186  148753 command_runner.go:130] > 51391683
	I0617 11:34:06.393567  148753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
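The three symlinks created above follow OpenSSL's hashed-directory convention: each CA in /etc/ssl/certs must be reachable as <subject-hash>.0 (the trailing .0 is a collision counter) so verification can find it by hash alone. Done by hand for the minikube CA from this run, the two steps look like this:

# Compute the subject hash OpenSSL uses for lookups (b5213941 in this run)...
hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
# ...and expose the certificate under that hash, exactly as the ln -fs above does.
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"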
	I0617 11:34:06.403148  148753 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 11:34:06.407533  148753 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 11:34:06.407551  148753 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0617 11:34:06.407557  148753 command_runner.go:130] > Device: 253,1	Inode: 6292502     Links: 1
	I0617 11:34:06.407563  148753 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0617 11:34:06.407569  148753 command_runner.go:130] > Access: 2024-06-17 11:27:56.110986677 +0000
	I0617 11:34:06.407573  148753 command_runner.go:130] > Modify: 2024-06-17 11:27:56.110986677 +0000
	I0617 11:34:06.407578  148753 command_runner.go:130] > Change: 2024-06-17 11:27:56.110986677 +0000
	I0617 11:34:06.407583  148753 command_runner.go:130] >  Birth: 2024-06-17 11:27:56.110986677 +0000
	I0617 11:34:06.407623  148753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 11:34:06.413188  148753 command_runner.go:130] > Certificate will not expire
	I0617 11:34:06.413244  148753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 11:34:06.419035  148753 command_runner.go:130] > Certificate will not expire
	I0617 11:34:06.419074  148753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 11:34:06.424464  148753 command_runner.go:130] > Certificate will not expire
	I0617 11:34:06.424512  148753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 11:34:06.429903  148753 command_runner.go:130] > Certificate will not expire
	I0617 11:34:06.429977  148753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 11:34:06.435144  148753 command_runner.go:130] > Certificate will not expire
	I0617 11:34:06.435346  148753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0617 11:34:06.440691  148753 command_runner.go:130] > Certificate will not expire
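Each "Certificate will not expire" line above comes from openssl's -checkend flag (86400 seconds, i.e. 24 hours); the flag also sets the exit status, so the same check can gate a script. A minimal sketch against one of the certificates checked in this run:

if sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
  echo "certificate is valid for at least another 24h"
else
  echo "certificate expires within 24h (or has already expired)"
fi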
	I0617 11:34:06.440750  148753 kubeadm.go:391] StartCluster: {Name:multinode-353869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
1 ClusterName:multinode-353869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.46 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.138 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:34:06.440891  148753 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 11:34:06.440957  148753 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 11:34:06.478235  148753 command_runner.go:130] > bb5cdb2e77c18dfb4033f073b3fcc0409800a764db18e2e93eac517885f5dbe4
	I0617 11:34:06.478261  148753 command_runner.go:130] > c1209b62c2e74e6bdf46e660a5944ebf4572603fe1e3f6125bd6533f824858fb
	I0617 11:34:06.478266  148753 command_runner.go:130] > f01b6f8d67c6a06c273316e91a016f1dda9bccd08a3b9f130e3fa18000e3f918
	I0617 11:34:06.478273  148753 command_runner.go:130] > 788f3e95f1389861634b7c167ecc4ed0481a5b23af544e031699d17b73670fc8
	I0617 11:34:06.478278  148753 command_runner.go:130] > e2daedb04756afc271789d6e861aa2906d06a65ced85f3593810d3b7c83242b7
	I0617 11:34:06.478284  148753 command_runner.go:130] > cf374fea65b02f5ed17deacbbfaa890808652f70898fb22613a2aada2d9d182d
	I0617 11:34:06.478289  148753 command_runner.go:130] > 5ab681386325c039d54197059416078c59182aa87b148cd254a9ab95e67be20e
	I0617 11:34:06.478297  148753 command_runner.go:130] > 920ea6bfb6321ca417761a4aacfc34eca33f282901baef10e5ab4e211b318908
	I0617 11:34:06.479768  148753 cri.go:89] found id: "bb5cdb2e77c18dfb4033f073b3fcc0409800a764db18e2e93eac517885f5dbe4"
	I0617 11:34:06.479793  148753 cri.go:89] found id: "c1209b62c2e74e6bdf46e660a5944ebf4572603fe1e3f6125bd6533f824858fb"
	I0617 11:34:06.479799  148753 cri.go:89] found id: "f01b6f8d67c6a06c273316e91a016f1dda9bccd08a3b9f130e3fa18000e3f918"
	I0617 11:34:06.479826  148753 cri.go:89] found id: "788f3e95f1389861634b7c167ecc4ed0481a5b23af544e031699d17b73670fc8"
	I0617 11:34:06.479835  148753 cri.go:89] found id: "e2daedb04756afc271789d6e861aa2906d06a65ced85f3593810d3b7c83242b7"
	I0617 11:34:06.479840  148753 cri.go:89] found id: "cf374fea65b02f5ed17deacbbfaa890808652f70898fb22613a2aada2d9d182d"
	I0617 11:34:06.479844  148753 cri.go:89] found id: "5ab681386325c039d54197059416078c59182aa87b148cd254a9ab95e67be20e"
	I0617 11:34:06.479848  148753 cri.go:89] found id: "920ea6bfb6321ca417761a4aacfc34eca33f282901baef10e5ab4e211b318908"
	I0617 11:34:06.479868  148753 cri.go:89] found id: ""
	I0617 11:34:06.479923  148753 ssh_runner.go:195] Run: sudo runc list -f json
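The eight IDs above come from filtering crictl by the io.kubernetes.pod.namespace label. Dropping --quiet from the same filter (and adding the matching pods query) gives a readable table instead of bare IDs; this is a follow-up one might run by hand, not part of the test:

# Same filter as the log, but human-readable: ID, image, state, name and pod.
sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
# The sandboxes those containers belong to.
sudo crictl pods --label io.kubernetes.pod.namespace=kube-system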
	
	
	==> CRI-O <==
	Jun 17 11:37:50 multinode-353869 crio[2886]: time="2024-06-17 11:37:50.808460935Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:c286b6ab5ba224c15b8257108630c758e3d39676f06578be2a885368f5e5ce11,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-9q9xp,Uid:3b3438b1-3078-4c3d-918d-7ca302c631df,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718624086939364028,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-9q9xp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3b3438b1-3078-4c3d-918d-7ca302c631df,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-17T11:34:12.824570329Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:09449e67a955075eb2ae95ddfe689cf65d001ce3e0d00d9910b566163db37637,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-v7jgc,Uid:6c7ab078-568f-4d93-a744-f6abffe8e025,Namespace:kube-system,Attempt:1,}
,State:SANDBOX_READY,CreatedAt:1718624053217143362,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-v7jgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7ab078-568f-4d93-a744-f6abffe8e025,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-17T11:34:12.824575608Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:79baefacd0e097350fc4e7414620cacf414d39e32cd1f6262192dca1802dec04,Metadata:&PodSandboxMetadata{Name:kube-proxy-lh4bq,Uid:ad51975b-c6bc-4708-8988-004224379e4e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718624053188986858,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-lh4bq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad51975b-c6bc-4708-8988-004224379e4e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{
kubernetes.io/config.seen: 2024-06-17T11:34:12.824601633Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0be7c33d595ba5e06bec4b3dbc4e40b8d1e5971da79c0d0c74096391b17cf465,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:41dea9a1-1f60-4a87-b8c1-9b0ecc3742c7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718624053177444499,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dea9a1-1f60-4a87-b8c1-9b0ecc3742c7,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"
/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-06-17T11:34:12.824604215Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:46330c87989ecc2043660078c62b2d9e6d3e75daaa8bbbd1a068fc5abb4a36cd,Metadata:&PodSandboxMetadata{Name:kindnet-8b72m,Uid:f0e82fc8-8881-4fdd-9f8e-5677e69b8c3b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718624053174810496,Labels:map[string]string{app: kindnet,controller-revision-hash: 84c66bd94d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-8b72m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0e82fc8-8881-4fdd-9f8e-5677e69b8c3b,k8s-app: kindnet,pod-template-generat
ion: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-17T11:34:12.824596963Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c61391d3fb7623c31e783d7faa3c91d1379b9d6bdbd9ce72dcee6600422fb8ac,Metadata:&PodSandboxMetadata{Name:etcd-multinode-353869,Uid:74e7e071f006daa8b88c2389f822775e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718624048318185805,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e7e071f006daa8b88c2389f822775e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.17:2379,kubernetes.io/config.hash: 74e7e071f006daa8b88c2389f822775e,kubernetes.io/config.seen: 2024-06-17T11:34:07.817969501Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0b42b8b7168ef0d5bd5cf5c756f8139d59eb896f84e1efefda6582eefc09b322,Metadat
a:&PodSandboxMetadata{Name:kube-scheduler-multinode-353869,Uid:03547e44edb5abf39796dbbc604ea57d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718624048302812349,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03547e44edb5abf39796dbbc604ea57d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 03547e44edb5abf39796dbbc604ea57d,kubernetes.io/config.seen: 2024-06-17T11:34:07.817975673Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:629147fd888d740082b57e5ebe69088e9078bfd764cf7c73bf14b94a8f0f1667,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-353869,Uid:e4c9fbf3605584b11404e4c74d684666,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718624048297147584,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernete
s.pod.name: kube-controller-manager-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4c9fbf3605584b11404e4c74d684666,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e4c9fbf3605584b11404e4c74d684666,kubernetes.io/config.seen: 2024-06-17T11:34:07.817974774Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:307f2884cb6d3ac7e70a5f2dc45ce3540923e4152f2d914a18827464c407ecf4,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-353869,Uid:ccdb7a72133ec1402678e9ea7bf51f8d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718624048286105396,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccdb7a72133ec1402678e9ea7bf51f8d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.17:8443,kubernete
s.io/config.hash: ccdb7a72133ec1402678e9ea7bf51f8d,kubernetes.io/config.seen: 2024-06-17T11:34:07.817973568Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2879eb5662c5dc9c90ae5eb1b5e4280cb1f5af7eec47cc04f68c4b60065364fd,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-9q9xp,Uid:3b3438b1-3078-4c3d-918d-7ca302c631df,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718623747003009255,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-9q9xp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3b3438b1-3078-4c3d-918d-7ca302c631df,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-17T11:29:06.691185438Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e36a7741cda4f83965573401b89d4d9c35ae2374a327ab7c64df3c89765bc519,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:41dea9a1-1f60-4a87-b8c1-9b0ecc3742c7,Namespace:kube-system,Attempt:0,},S
tate:SANDBOX_NOTREADY,CreatedAt:1718623703847208134,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dea9a1-1f60-4a87-b8c1-9b0ecc3742c7,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\"
:\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-06-17T11:28:23.540510456Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fd7b9c01c57a4c7e59d710d65496d12404568692cb323860531374a8f9576c2a,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-v7jgc,Uid:6c7ab078-568f-4d93-a744-f6abffe8e025,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718623703846279011,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-v7jgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7ab078-568f-4d93-a744-f6abffe8e025,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-17T11:28:23.534916119Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c634101d4334a5e4e7060cc4fa47040a6b8fdbc4a9f0317d22511ddd39517dc7,Metadata:&PodSandboxMetadata{Name:kube-proxy-lh4bq,Uid:ad51975b-c6bc-4708-8988-004224379e4e,Namespace:kube-sy
stem,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718623700760186257,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-lh4bq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad51975b-c6bc-4708-8988-004224379e4e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-17T11:28:19.843609003Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:42e410b99e9861aaf45f17d31f858cb722e68467cb0591c4a84f8b1158560219,Metadata:&PodSandboxMetadata{Name:kindnet-8b72m,Uid:f0e82fc8-8881-4fdd-9f8e-5677e69b8c3b,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718623700721166790,Labels:map[string]string{app: kindnet,controller-revision-hash: 84c66bd94d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-8b72m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0e82fc8-8881-4fdd-9f8e-5677e69b8c3b,k8s-app: kindnet,pod-t
emplate-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-17T11:28:19.812831372Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3e6c167dc3d5a7ec1a743469227888109903a3f7c6396ffbb469dd96425fc126,Metadata:&PodSandboxMetadata{Name:etcd-multinode-353869,Uid:74e7e071f006daa8b88c2389f822775e,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718623679540328631,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e7e071f006daa8b88c2389f822775e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.17:2379,kubernetes.io/config.hash: 74e7e071f006daa8b88c2389f822775e,kubernetes.io/config.seen: 2024-06-17T11:27:58.859443461Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:02ebcc677be0b84fdb13e9bee3c75491f5a9fce544fd304b98324e
7555ca4a39,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-353869,Uid:e4c9fbf3605584b11404e4c74d684666,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718623679539046621,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4c9fbf3605584b11404e4c74d684666,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e4c9fbf3605584b11404e4c74d684666,kubernetes.io/config.seen: 2024-06-17T11:27:58.859445943Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:060b3c7b4cc6c8a64d6da3ecf010956b4746bc8d16b153949312db9fa7aa845b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-353869,Uid:03547e44edb5abf39796dbbc604ea57d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718623679530052041,Labels:map[string]string{component: kube-scheduler,io.kubernetes
.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03547e44edb5abf39796dbbc604ea57d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 03547e44edb5abf39796dbbc604ea57d,kubernetes.io/config.seen: 2024-06-17T11:27:58.859439874Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4a878aa3e3733782505f0a73f31dc066754029efb7cfd43f8dee63152afaed15,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-353869,Uid:ccdb7a72133ec1402678e9ea7bf51f8d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718623679522996655,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccdb7a72133ec1402678e9ea7bf51f8d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 1
92.168.39.17:8443,kubernetes.io/config.hash: ccdb7a72133ec1402678e9ea7bf51f8d,kubernetes.io/config.seen: 2024-06-17T11:27:58.859444621Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=2ff945f2-20bd-44ac-a0da-f50ad1d12486 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 17 11:37:50 multinode-353869 crio[2886]: time="2024-06-17 11:37:50.809310013Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ab2f8cc8-63e5-4b4a-97ce-f42e5f2f6146 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:37:50 multinode-353869 crio[2886]: time="2024-06-17 11:37:50.809362744Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ab2f8cc8-63e5-4b4a-97ce-f42e5f2f6146 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:37:50 multinode-353869 crio[2886]: time="2024-06-17 11:37:50.809941077Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4bf62a6b9c5460fc7170bfcabc3b8873429afdc9358e70ed2c0cfc8e13b2909a,PodSandboxId:c286b6ab5ba224c15b8257108630c758e3d39676f06578be2a885368f5e5ce11,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718624087083315856,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9q9xp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3b3438b1-3078-4c3d-918d-7ca302c631df,},Annotations:map[string]string{io.kubernetes.container.hash: 1f994b5a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c99311a5f2af018094cefffa1d06ab60bd7f9c78720ef0903446410b62777ab1,PodSandboxId:46330c87989ecc2043660078c62b2d9e6d3e75daaa8bbbd1a068fc5abb4a36cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718624053565286943,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8b72m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0e82fc8-8881-4fdd-9f8e-5677e69b8c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 2ede9293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9296d7496cc7bd08e1aae16d5835d95d67a137b8155cc6ba963ea9ecee410394,PodSandboxId:09449e67a955075eb2ae95ddfe689cf65d001ce3e0d00d9910b566163db37637,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718624053461613631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v7jgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7ab078-568f-4d93-a744-f6abffe8e025,},Annotations:map[string]string{io.kubernetes.container.hash: 6fd799b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e4df51e0870da34508fa6131d228646bb0e4b6f39ea875e4cfc0bab53523821,PodSandboxId:79baefacd0e097350fc4e7414620cacf414d39e32cd1f6262192dca1802dec04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718624053376189920,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lh4bq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad51975b-c6bc-4708-8988-004224379e4e,},Annotations:map[string]
string{io.kubernetes.container.hash: da5fef91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8b63dfaed3cb7550393f5b31b562d7204d27fe2679022292b85d9af81fe12da,PodSandboxId:0be7c33d595ba5e06bec4b3dbc4e40b8d1e5971da79c0d0c74096391b17cf465,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718624053413045791,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dea9a1-1f60-4a87-b8c1-9b0ecc3742c7,},Annotations:map[string]string{io.ku
bernetes.container.hash: 647d8c6e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49d345565617207048c355eb2fd02d84dc1e79374c65582908fc5c31efb6ace2,PodSandboxId:c61391d3fb7623c31e783d7faa3c91d1379b9d6bdbd9ce72dcee6600422fb8ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718624048549499708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e7e071f006daa8b88c2389f822775e,},Annotations:map[string]string{io.kubernetes.container.hash: 376600c6,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d96338c1781a120bd164d0cf1ee12bf47c1e4614d990ecef15f2019ec1d01a74,PodSandboxId:0b42b8b7168ef0d5bd5cf5c756f8139d59eb896f84e1efefda6582eefc09b322,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718624048560235961,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03547e44edb5abf39796dbbc604ea57d,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5521b788f9e29eacd3cdb54d74dda1ede012f42edf74635592934f0b5fd94be,PodSandboxId:629147fd888d740082b57e5ebe69088e9078bfd764cf7c73bf14b94a8f0f1667,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718624048555574662,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4c9fbf3605584b11404e4c74d684666,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa049dc2107d59ff0e82cf0a7a6b0a809afe251d9199dc55b0ba7a182e31ea78,PodSandboxId:307f2884cb6d3ac7e70a5f2dc45ce3540923e4152f2d914a18827464c407ecf4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718624048457260321,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccdb7a72133ec1402678e9ea7bf51f8d,},Annotations:map[string]string{io.kubernetes.container.hash: 64cb03a4,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db87229d8ad6756e3a7db9952290dffb752b7dbe5563ef38ce3ec63e639e87b8,PodSandboxId:2879eb5662c5dc9c90ae5eb1b5e4280cb1f5af7eec47cc04f68c4b60065364fd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718623748160187560,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9q9xp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3b3438b1-3078-4c3d-918d-7ca302c631df,},Annotations:map[string]string{io.kubernetes.container.hash: 1f994b5a,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1209b62c2e74e6bdf46e660a5944ebf4572603fe1e3f6125bd6533f824858fb,PodSandboxId:e36a7741cda4f83965573401b89d4d9c35ae2374a327ab7c64df3c89765bc519,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718623704020718026,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dea9a1-1f60-4a87-b8c1-9b0ecc3742c7,},Annotations:map[string]string{io.kubernetes.container.hash: 647d8c6e,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5cdb2e77c18dfb4033f073b3fcc0409800a764db18e2e93eac517885f5dbe4,PodSandboxId:fd7b9c01c57a4c7e59d710d65496d12404568692cb323860531374a8f9576c2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718623704047491037,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v7jgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7ab078-568f-4d93-a744-f6abffe8e025,},Annotations:map[string]string{io.kubernetes.container.hash: 6fd799b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f01b6f8d67c6a06c273316e91a016f1dda9bccd08a3b9f130e3fa18000e3f918,PodSandboxId:42e410b99e9861aaf45f17d31f858cb722e68467cb0591c4a84f8b1158560219,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718623702755119502,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8b72m,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: f0e82fc8-8881-4fdd-9f8e-5677e69b8c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 2ede9293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:788f3e95f1389861634b7c167ecc4ed0481a5b23af544e031699d17b73670fc8,PodSandboxId:c634101d4334a5e4e7060cc4fa47040a6b8fdbc4a9f0317d22511ddd39517dc7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718623700878267082,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lh4bq,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ad51975b-c6bc-4708-8988-004224379e4e,},Annotations:map[string]string{io.kubernetes.container.hash: da5fef91,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2daedb04756afc271789d6e861aa2906d06a65ced85f3593810d3b7c83242b7,PodSandboxId:3e6c167dc3d5a7ec1a743469227888109903a3f7c6396ffbb469dd96425fc126,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718623679799019295,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e7e071f006daa8b88c2389f82277
5e,},Annotations:map[string]string{io.kubernetes.container.hash: 376600c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ab681386325c039d54197059416078c59182aa87b148cd254a9ab95e67be20e,PodSandboxId:060b3c7b4cc6c8a64d6da3ecf010956b4746bc8d16b153949312db9fa7aa845b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718623679768119203,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03547e44edb5abf39796dbbc604ea57d,},Annotation
s:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf374fea65b02f5ed17deacbbfaa890808652f70898fb22613a2aada2d9d182d,PodSandboxId:02ebcc677be0b84fdb13e9bee3c75491f5a9fce544fd304b98324e7555ca4a39,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718623679769989035,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4c9fbf3605584b11404e4c74d684666,
},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920ea6bfb6321ca417761a4aacfc34eca33f282901baef10e5ab4e211b318908,PodSandboxId:4a878aa3e3733782505f0a73f31dc066754029efb7cfd43f8dee63152afaed15,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718623679688156743,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccdb7a72133ec1402678e9ea7bf51f8d,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 64cb03a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ab2f8cc8-63e5-4b4a-97ce-f42e5f2f6146 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:37:50 multinode-353869 crio[2886]: time="2024-06-17 11:37:50.811291935Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=951477d3-e3a2-484c-9eae-c8dcad708376 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:37:50 multinode-353869 crio[2886]: time="2024-06-17 11:37:50.811341636Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=951477d3-e3a2-484c-9eae-c8dcad708376 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:37:50 multinode-353869 crio[2886]: time="2024-06-17 11:37:50.812379725Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=93acb46e-836e-407a-9734-41151a97bd0c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:37:50 multinode-353869 crio[2886]: time="2024-06-17 11:37:50.812979082Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718624270812952016,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=93acb46e-836e-407a-9734-41151a97bd0c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:37:50 multinode-353869 crio[2886]: time="2024-06-17 11:37:50.813412632Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1086deb4-80e5-4466-89f1-e88bd1038abb name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:37:50 multinode-353869 crio[2886]: time="2024-06-17 11:37:50.813457972Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1086deb4-80e5-4466-89f1-e88bd1038abb name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:37:50 multinode-353869 crio[2886]: time="2024-06-17 11:37:50.813782242Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4bf62a6b9c5460fc7170bfcabc3b8873429afdc9358e70ed2c0cfc8e13b2909a,PodSandboxId:c286b6ab5ba224c15b8257108630c758e3d39676f06578be2a885368f5e5ce11,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718624087083315856,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9q9xp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3b3438b1-3078-4c3d-918d-7ca302c631df,},Annotations:map[string]string{io.kubernetes.container.hash: 1f994b5a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c99311a5f2af018094cefffa1d06ab60bd7f9c78720ef0903446410b62777ab1,PodSandboxId:46330c87989ecc2043660078c62b2d9e6d3e75daaa8bbbd1a068fc5abb4a36cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718624053565286943,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8b72m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0e82fc8-8881-4fdd-9f8e-5677e69b8c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 2ede9293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9296d7496cc7bd08e1aae16d5835d95d67a137b8155cc6ba963ea9ecee410394,PodSandboxId:09449e67a955075eb2ae95ddfe689cf65d001ce3e0d00d9910b566163db37637,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718624053461613631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v7jgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7ab078-568f-4d93-a744-f6abffe8e025,},Annotations:map[string]string{io.kubernetes.container.hash: 6fd799b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e4df51e0870da34508fa6131d228646bb0e4b6f39ea875e4cfc0bab53523821,PodSandboxId:79baefacd0e097350fc4e7414620cacf414d39e32cd1f6262192dca1802dec04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718624053376189920,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lh4bq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad51975b-c6bc-4708-8988-004224379e4e,},Annotations:map[string]
string{io.kubernetes.container.hash: da5fef91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8b63dfaed3cb7550393f5b31b562d7204d27fe2679022292b85d9af81fe12da,PodSandboxId:0be7c33d595ba5e06bec4b3dbc4e40b8d1e5971da79c0d0c74096391b17cf465,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718624053413045791,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dea9a1-1f60-4a87-b8c1-9b0ecc3742c7,},Annotations:map[string]string{io.ku
bernetes.container.hash: 647d8c6e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49d345565617207048c355eb2fd02d84dc1e79374c65582908fc5c31efb6ace2,PodSandboxId:c61391d3fb7623c31e783d7faa3c91d1379b9d6bdbd9ce72dcee6600422fb8ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718624048549499708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e7e071f006daa8b88c2389f822775e,},Annotations:map[string]string{io.kubernetes.container.hash: 376600c6,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d96338c1781a120bd164d0cf1ee12bf47c1e4614d990ecef15f2019ec1d01a74,PodSandboxId:0b42b8b7168ef0d5bd5cf5c756f8139d59eb896f84e1efefda6582eefc09b322,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718624048560235961,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03547e44edb5abf39796dbbc604ea57d,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5521b788f9e29eacd3cdb54d74dda1ede012f42edf74635592934f0b5fd94be,PodSandboxId:629147fd888d740082b57e5ebe69088e9078bfd764cf7c73bf14b94a8f0f1667,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718624048555574662,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4c9fbf3605584b11404e4c74d684666,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa049dc2107d59ff0e82cf0a7a6b0a809afe251d9199dc55b0ba7a182e31ea78,PodSandboxId:307f2884cb6d3ac7e70a5f2dc45ce3540923e4152f2d914a18827464c407ecf4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718624048457260321,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccdb7a72133ec1402678e9ea7bf51f8d,},Annotations:map[string]string{io.kubernetes.container.hash: 64cb03a4,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db87229d8ad6756e3a7db9952290dffb752b7dbe5563ef38ce3ec63e639e87b8,PodSandboxId:2879eb5662c5dc9c90ae5eb1b5e4280cb1f5af7eec47cc04f68c4b60065364fd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718623748160187560,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9q9xp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3b3438b1-3078-4c3d-918d-7ca302c631df,},Annotations:map[string]string{io.kubernetes.container.hash: 1f994b5a,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1209b62c2e74e6bdf46e660a5944ebf4572603fe1e3f6125bd6533f824858fb,PodSandboxId:e36a7741cda4f83965573401b89d4d9c35ae2374a327ab7c64df3c89765bc519,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718623704020718026,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dea9a1-1f60-4a87-b8c1-9b0ecc3742c7,},Annotations:map[string]string{io.kubernetes.container.hash: 647d8c6e,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5cdb2e77c18dfb4033f073b3fcc0409800a764db18e2e93eac517885f5dbe4,PodSandboxId:fd7b9c01c57a4c7e59d710d65496d12404568692cb323860531374a8f9576c2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718623704047491037,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v7jgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7ab078-568f-4d93-a744-f6abffe8e025,},Annotations:map[string]string{io.kubernetes.container.hash: 6fd799b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f01b6f8d67c6a06c273316e91a016f1dda9bccd08a3b9f130e3fa18000e3f918,PodSandboxId:42e410b99e9861aaf45f17d31f858cb722e68467cb0591c4a84f8b1158560219,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718623702755119502,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8b72m,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: f0e82fc8-8881-4fdd-9f8e-5677e69b8c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 2ede9293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:788f3e95f1389861634b7c167ecc4ed0481a5b23af544e031699d17b73670fc8,PodSandboxId:c634101d4334a5e4e7060cc4fa47040a6b8fdbc4a9f0317d22511ddd39517dc7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718623700878267082,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lh4bq,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ad51975b-c6bc-4708-8988-004224379e4e,},Annotations:map[string]string{io.kubernetes.container.hash: da5fef91,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2daedb04756afc271789d6e861aa2906d06a65ced85f3593810d3b7c83242b7,PodSandboxId:3e6c167dc3d5a7ec1a743469227888109903a3f7c6396ffbb469dd96425fc126,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718623679799019295,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e7e071f006daa8b88c2389f82277
5e,},Annotations:map[string]string{io.kubernetes.container.hash: 376600c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ab681386325c039d54197059416078c59182aa87b148cd254a9ab95e67be20e,PodSandboxId:060b3c7b4cc6c8a64d6da3ecf010956b4746bc8d16b153949312db9fa7aa845b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718623679768119203,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03547e44edb5abf39796dbbc604ea57d,},Annotation
s:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf374fea65b02f5ed17deacbbfaa890808652f70898fb22613a2aada2d9d182d,PodSandboxId:02ebcc677be0b84fdb13e9bee3c75491f5a9fce544fd304b98324e7555ca4a39,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718623679769989035,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4c9fbf3605584b11404e4c74d684666,
},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920ea6bfb6321ca417761a4aacfc34eca33f282901baef10e5ab4e211b318908,PodSandboxId:4a878aa3e3733782505f0a73f31dc066754029efb7cfd43f8dee63152afaed15,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718623679688156743,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccdb7a72133ec1402678e9ea7bf51f8d,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 64cb03a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1086deb4-80e5-4466-89f1-e88bd1038abb name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:37:50 multinode-353869 crio[2886]: time="2024-06-17 11:37:50.855481481Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7e38495e-f85c-4b1f-a8ef-6ff7ac9fa0a8 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:37:50 multinode-353869 crio[2886]: time="2024-06-17 11:37:50.855567276Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7e38495e-f85c-4b1f-a8ef-6ff7ac9fa0a8 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:37:50 multinode-353869 crio[2886]: time="2024-06-17 11:37:50.856837106Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5f356ff1-dc94-4493-b9d3-18455fb76100 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:37:50 multinode-353869 crio[2886]: time="2024-06-17 11:37:50.857501036Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718624270857477689,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5f356ff1-dc94-4493-b9d3-18455fb76100 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:37:50 multinode-353869 crio[2886]: time="2024-06-17 11:37:50.858227200Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0db28cfe-4e15-432a-9137-cc2bdf8d8a3d name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:37:50 multinode-353869 crio[2886]: time="2024-06-17 11:37:50.858278750Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0db28cfe-4e15-432a-9137-cc2bdf8d8a3d name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:37:50 multinode-353869 crio[2886]: time="2024-06-17 11:37:50.858749804Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4bf62a6b9c5460fc7170bfcabc3b8873429afdc9358e70ed2c0cfc8e13b2909a,PodSandboxId:c286b6ab5ba224c15b8257108630c758e3d39676f06578be2a885368f5e5ce11,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718624087083315856,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9q9xp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3b3438b1-3078-4c3d-918d-7ca302c631df,},Annotations:map[string]string{io.kubernetes.container.hash: 1f994b5a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c99311a5f2af018094cefffa1d06ab60bd7f9c78720ef0903446410b62777ab1,PodSandboxId:46330c87989ecc2043660078c62b2d9e6d3e75daaa8bbbd1a068fc5abb4a36cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718624053565286943,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8b72m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0e82fc8-8881-4fdd-9f8e-5677e69b8c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 2ede9293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9296d7496cc7bd08e1aae16d5835d95d67a137b8155cc6ba963ea9ecee410394,PodSandboxId:09449e67a955075eb2ae95ddfe689cf65d001ce3e0d00d9910b566163db37637,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718624053461613631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v7jgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7ab078-568f-4d93-a744-f6abffe8e025,},Annotations:map[string]string{io.kubernetes.container.hash: 6fd799b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e4df51e0870da34508fa6131d228646bb0e4b6f39ea875e4cfc0bab53523821,PodSandboxId:79baefacd0e097350fc4e7414620cacf414d39e32cd1f6262192dca1802dec04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718624053376189920,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lh4bq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad51975b-c6bc-4708-8988-004224379e4e,},Annotations:map[string]
string{io.kubernetes.container.hash: da5fef91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8b63dfaed3cb7550393f5b31b562d7204d27fe2679022292b85d9af81fe12da,PodSandboxId:0be7c33d595ba5e06bec4b3dbc4e40b8d1e5971da79c0d0c74096391b17cf465,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718624053413045791,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dea9a1-1f60-4a87-b8c1-9b0ecc3742c7,},Annotations:map[string]string{io.ku
bernetes.container.hash: 647d8c6e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49d345565617207048c355eb2fd02d84dc1e79374c65582908fc5c31efb6ace2,PodSandboxId:c61391d3fb7623c31e783d7faa3c91d1379b9d6bdbd9ce72dcee6600422fb8ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718624048549499708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e7e071f006daa8b88c2389f822775e,},Annotations:map[string]string{io.kubernetes.container.hash: 376600c6,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d96338c1781a120bd164d0cf1ee12bf47c1e4614d990ecef15f2019ec1d01a74,PodSandboxId:0b42b8b7168ef0d5bd5cf5c756f8139d59eb896f84e1efefda6582eefc09b322,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718624048560235961,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03547e44edb5abf39796dbbc604ea57d,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5521b788f9e29eacd3cdb54d74dda1ede012f42edf74635592934f0b5fd94be,PodSandboxId:629147fd888d740082b57e5ebe69088e9078bfd764cf7c73bf14b94a8f0f1667,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718624048555574662,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4c9fbf3605584b11404e4c74d684666,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa049dc2107d59ff0e82cf0a7a6b0a809afe251d9199dc55b0ba7a182e31ea78,PodSandboxId:307f2884cb6d3ac7e70a5f2dc45ce3540923e4152f2d914a18827464c407ecf4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718624048457260321,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccdb7a72133ec1402678e9ea7bf51f8d,},Annotations:map[string]string{io.kubernetes.container.hash: 64cb03a4,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db87229d8ad6756e3a7db9952290dffb752b7dbe5563ef38ce3ec63e639e87b8,PodSandboxId:2879eb5662c5dc9c90ae5eb1b5e4280cb1f5af7eec47cc04f68c4b60065364fd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718623748160187560,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9q9xp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3b3438b1-3078-4c3d-918d-7ca302c631df,},Annotations:map[string]string{io.kubernetes.container.hash: 1f994b5a,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1209b62c2e74e6bdf46e660a5944ebf4572603fe1e3f6125bd6533f824858fb,PodSandboxId:e36a7741cda4f83965573401b89d4d9c35ae2374a327ab7c64df3c89765bc519,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718623704020718026,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dea9a1-1f60-4a87-b8c1-9b0ecc3742c7,},Annotations:map[string]string{io.kubernetes.container.hash: 647d8c6e,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5cdb2e77c18dfb4033f073b3fcc0409800a764db18e2e93eac517885f5dbe4,PodSandboxId:fd7b9c01c57a4c7e59d710d65496d12404568692cb323860531374a8f9576c2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718623704047491037,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v7jgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7ab078-568f-4d93-a744-f6abffe8e025,},Annotations:map[string]string{io.kubernetes.container.hash: 6fd799b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f01b6f8d67c6a06c273316e91a016f1dda9bccd08a3b9f130e3fa18000e3f918,PodSandboxId:42e410b99e9861aaf45f17d31f858cb722e68467cb0591c4a84f8b1158560219,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718623702755119502,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8b72m,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: f0e82fc8-8881-4fdd-9f8e-5677e69b8c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 2ede9293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:788f3e95f1389861634b7c167ecc4ed0481a5b23af544e031699d17b73670fc8,PodSandboxId:c634101d4334a5e4e7060cc4fa47040a6b8fdbc4a9f0317d22511ddd39517dc7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718623700878267082,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lh4bq,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ad51975b-c6bc-4708-8988-004224379e4e,},Annotations:map[string]string{io.kubernetes.container.hash: da5fef91,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2daedb04756afc271789d6e861aa2906d06a65ced85f3593810d3b7c83242b7,PodSandboxId:3e6c167dc3d5a7ec1a743469227888109903a3f7c6396ffbb469dd96425fc126,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718623679799019295,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e7e071f006daa8b88c2389f82277
5e,},Annotations:map[string]string{io.kubernetes.container.hash: 376600c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ab681386325c039d54197059416078c59182aa87b148cd254a9ab95e67be20e,PodSandboxId:060b3c7b4cc6c8a64d6da3ecf010956b4746bc8d16b153949312db9fa7aa845b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718623679768119203,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03547e44edb5abf39796dbbc604ea57d,},Annotation
s:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf374fea65b02f5ed17deacbbfaa890808652f70898fb22613a2aada2d9d182d,PodSandboxId:02ebcc677be0b84fdb13e9bee3c75491f5a9fce544fd304b98324e7555ca4a39,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718623679769989035,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4c9fbf3605584b11404e4c74d684666,
},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920ea6bfb6321ca417761a4aacfc34eca33f282901baef10e5ab4e211b318908,PodSandboxId:4a878aa3e3733782505f0a73f31dc066754029efb7cfd43f8dee63152afaed15,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718623679688156743,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccdb7a72133ec1402678e9ea7bf51f8d,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 64cb03a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0db28cfe-4e15-432a-9137-cc2bdf8d8a3d name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:37:50 multinode-353869 crio[2886]: time="2024-06-17 11:37:50.899295725Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e01022c7-cc2d-479c-a21d-7a1b12d1af50 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:37:50 multinode-353869 crio[2886]: time="2024-06-17 11:37:50.899359939Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e01022c7-cc2d-479c-a21d-7a1b12d1af50 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:37:50 multinode-353869 crio[2886]: time="2024-06-17 11:37:50.900235473Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3f348e6a-095a-40d1-8830-15275d66a247 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:37:50 multinode-353869 crio[2886]: time="2024-06-17 11:37:50.900910990Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718624270900887896,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f348e6a-095a-40d1-8830-15275d66a247 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:37:50 multinode-353869 crio[2886]: time="2024-06-17 11:37:50.901400597Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=850d04bc-6765-484d-adce-7fff40e8012e name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:37:50 multinode-353869 crio[2886]: time="2024-06-17 11:37:50.901449996Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=850d04bc-6765-484d-adce-7fff40e8012e name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:37:50 multinode-353869 crio[2886]: time="2024-06-17 11:37:50.901826461Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4bf62a6b9c5460fc7170bfcabc3b8873429afdc9358e70ed2c0cfc8e13b2909a,PodSandboxId:c286b6ab5ba224c15b8257108630c758e3d39676f06578be2a885368f5e5ce11,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718624087083315856,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9q9xp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3b3438b1-3078-4c3d-918d-7ca302c631df,},Annotations:map[string]string{io.kubernetes.container.hash: 1f994b5a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c99311a5f2af018094cefffa1d06ab60bd7f9c78720ef0903446410b62777ab1,PodSandboxId:46330c87989ecc2043660078c62b2d9e6d3e75daaa8bbbd1a068fc5abb4a36cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718624053565286943,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8b72m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0e82fc8-8881-4fdd-9f8e-5677e69b8c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 2ede9293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9296d7496cc7bd08e1aae16d5835d95d67a137b8155cc6ba963ea9ecee410394,PodSandboxId:09449e67a955075eb2ae95ddfe689cf65d001ce3e0d00d9910b566163db37637,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718624053461613631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v7jgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7ab078-568f-4d93-a744-f6abffe8e025,},Annotations:map[string]string{io.kubernetes.container.hash: 6fd799b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e4df51e0870da34508fa6131d228646bb0e4b6f39ea875e4cfc0bab53523821,PodSandboxId:79baefacd0e097350fc4e7414620cacf414d39e32cd1f6262192dca1802dec04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718624053376189920,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lh4bq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad51975b-c6bc-4708-8988-004224379e4e,},Annotations:map[string]
string{io.kubernetes.container.hash: da5fef91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8b63dfaed3cb7550393f5b31b562d7204d27fe2679022292b85d9af81fe12da,PodSandboxId:0be7c33d595ba5e06bec4b3dbc4e40b8d1e5971da79c0d0c74096391b17cf465,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718624053413045791,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dea9a1-1f60-4a87-b8c1-9b0ecc3742c7,},Annotations:map[string]string{io.ku
bernetes.container.hash: 647d8c6e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49d345565617207048c355eb2fd02d84dc1e79374c65582908fc5c31efb6ace2,PodSandboxId:c61391d3fb7623c31e783d7faa3c91d1379b9d6bdbd9ce72dcee6600422fb8ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718624048549499708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e7e071f006daa8b88c2389f822775e,},Annotations:map[string]string{io.kubernetes.container.hash: 376600c6,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d96338c1781a120bd164d0cf1ee12bf47c1e4614d990ecef15f2019ec1d01a74,PodSandboxId:0b42b8b7168ef0d5bd5cf5c756f8139d59eb896f84e1efefda6582eefc09b322,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718624048560235961,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03547e44edb5abf39796dbbc604ea57d,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5521b788f9e29eacd3cdb54d74dda1ede012f42edf74635592934f0b5fd94be,PodSandboxId:629147fd888d740082b57e5ebe69088e9078bfd764cf7c73bf14b94a8f0f1667,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718624048555574662,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4c9fbf3605584b11404e4c74d684666,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa049dc2107d59ff0e82cf0a7a6b0a809afe251d9199dc55b0ba7a182e31ea78,PodSandboxId:307f2884cb6d3ac7e70a5f2dc45ce3540923e4152f2d914a18827464c407ecf4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718624048457260321,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccdb7a72133ec1402678e9ea7bf51f8d,},Annotations:map[string]string{io.kubernetes.container.hash: 64cb03a4,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db87229d8ad6756e3a7db9952290dffb752b7dbe5563ef38ce3ec63e639e87b8,PodSandboxId:2879eb5662c5dc9c90ae5eb1b5e4280cb1f5af7eec47cc04f68c4b60065364fd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718623748160187560,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9q9xp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3b3438b1-3078-4c3d-918d-7ca302c631df,},Annotations:map[string]string{io.kubernetes.container.hash: 1f994b5a,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1209b62c2e74e6bdf46e660a5944ebf4572603fe1e3f6125bd6533f824858fb,PodSandboxId:e36a7741cda4f83965573401b89d4d9c35ae2374a327ab7c64df3c89765bc519,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718623704020718026,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dea9a1-1f60-4a87-b8c1-9b0ecc3742c7,},Annotations:map[string]string{io.kubernetes.container.hash: 647d8c6e,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5cdb2e77c18dfb4033f073b3fcc0409800a764db18e2e93eac517885f5dbe4,PodSandboxId:fd7b9c01c57a4c7e59d710d65496d12404568692cb323860531374a8f9576c2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718623704047491037,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v7jgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7ab078-568f-4d93-a744-f6abffe8e025,},Annotations:map[string]string{io.kubernetes.container.hash: 6fd799b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f01b6f8d67c6a06c273316e91a016f1dda9bccd08a3b9f130e3fa18000e3f918,PodSandboxId:42e410b99e9861aaf45f17d31f858cb722e68467cb0591c4a84f8b1158560219,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718623702755119502,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8b72m,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: f0e82fc8-8881-4fdd-9f8e-5677e69b8c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 2ede9293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:788f3e95f1389861634b7c167ecc4ed0481a5b23af544e031699d17b73670fc8,PodSandboxId:c634101d4334a5e4e7060cc4fa47040a6b8fdbc4a9f0317d22511ddd39517dc7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718623700878267082,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lh4bq,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ad51975b-c6bc-4708-8988-004224379e4e,},Annotations:map[string]string{io.kubernetes.container.hash: da5fef91,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2daedb04756afc271789d6e861aa2906d06a65ced85f3593810d3b7c83242b7,PodSandboxId:3e6c167dc3d5a7ec1a743469227888109903a3f7c6396ffbb469dd96425fc126,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718623679799019295,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e7e071f006daa8b88c2389f82277
5e,},Annotations:map[string]string{io.kubernetes.container.hash: 376600c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ab681386325c039d54197059416078c59182aa87b148cd254a9ab95e67be20e,PodSandboxId:060b3c7b4cc6c8a64d6da3ecf010956b4746bc8d16b153949312db9fa7aa845b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718623679768119203,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03547e44edb5abf39796dbbc604ea57d,},Annotation
s:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf374fea65b02f5ed17deacbbfaa890808652f70898fb22613a2aada2d9d182d,PodSandboxId:02ebcc677be0b84fdb13e9bee3c75491f5a9fce544fd304b98324e7555ca4a39,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718623679769989035,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4c9fbf3605584b11404e4c74d684666,
},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920ea6bfb6321ca417761a4aacfc34eca33f282901baef10e5ab4e211b318908,PodSandboxId:4a878aa3e3733782505f0a73f31dc066754029efb7cfd43f8dee63152afaed15,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718623679688156743,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-353869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccdb7a72133ec1402678e9ea7bf51f8d,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 64cb03a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=850d04bc-6765-484d-adce-7fff40e8012e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4bf62a6b9c546       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   c286b6ab5ba22       busybox-fc5497c4f-9q9xp
	c99311a5f2af0       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      3 minutes ago       Running             kindnet-cni               1                   46330c87989ec       kindnet-8b72m
	9296d7496cc7b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   09449e67a9550       coredns-7db6d8ff4d-v7jgc
	f8b63dfaed3cb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   0be7c33d595ba       storage-provisioner
	8e4df51e0870d       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      3 minutes ago       Running             kube-proxy                1                   79baefacd0e09       kube-proxy-lh4bq
	d96338c1781a1       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      3 minutes ago       Running             kube-scheduler            1                   0b42b8b7168ef       kube-scheduler-multinode-353869
	b5521b788f9e2       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      3 minutes ago       Running             kube-controller-manager   1                   629147fd888d7       kube-controller-manager-multinode-353869
	49d3455656172       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      1                   c61391d3fb762       etcd-multinode-353869
	aa049dc2107d5       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      3 minutes ago       Running             kube-apiserver            1                   307f2884cb6d3       kube-apiserver-multinode-353869
	db87229d8ad67       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago       Exited              busybox                   0                   2879eb5662c5d       busybox-fc5497c4f-9q9xp
	bb5cdb2e77c18       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Exited              coredns                   0                   fd7b9c01c57a4       coredns-7db6d8ff4d-v7jgc
	c1209b62c2e74       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   e36a7741cda4f       storage-provisioner
	f01b6f8d67c6a       docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266    9 minutes ago       Exited              kindnet-cni               0                   42e410b99e986       kindnet-8b72m
	788f3e95f1389       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      9 minutes ago       Exited              kube-proxy                0                   c634101d4334a       kube-proxy-lh4bq
	e2daedb04756a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      9 minutes ago       Exited              etcd                      0                   3e6c167dc3d5a       etcd-multinode-353869
	cf374fea65b02       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      9 minutes ago       Exited              kube-controller-manager   0                   02ebcc677be0b       kube-controller-manager-multinode-353869
	5ab681386325c       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      9 minutes ago       Exited              kube-scheduler            0                   060b3c7b4cc6c       kube-scheduler-multinode-353869
	920ea6bfb6321       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      9 minutes ago       Exited              kube-apiserver            0                   4a878aa3e3733       kube-apiserver-multinode-353869
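	
	The container status table above is CRI-O's view of every container (running and exited) on the control-plane node. A minimal way to regenerate it, assuming the minikube profile name matches the node name seen in these logs, is to run crictl inside the node:
	
	  minikube ssh -p multinode-353869 -- sudo crictl ps -a   # list all containers known to CRI-O
	  minikube ssh -p multinode-353869 -- sudo crictl pods    # list the corresponding pod sandboxes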
	
	
	==> coredns [9296d7496cc7bd08e1aae16d5835d95d67a137b8155cc6ba963ea9ecee410394] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:49578 - 34953 "HINFO IN 6714190310131197315.2879280047215912249. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0124875s
	
	
	==> coredns [bb5cdb2e77c18dfb4033f073b3fcc0409800a764db18e2e93eac517885f5dbe4] <==
	[INFO] 10.244.1.2:54915 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001672642s
	[INFO] 10.244.1.2:54152 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00007146s
	[INFO] 10.244.1.2:51062 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103759s
	[INFO] 10.244.1.2:41957 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001250764s
	[INFO] 10.244.1.2:37232 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000058333s
	[INFO] 10.244.1.2:48361 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000054854s
	[INFO] 10.244.1.2:39552 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057381s
	[INFO] 10.244.0.3:35057 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000074603s
	[INFO] 10.244.0.3:56142 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000052163s
	[INFO] 10.244.0.3:34638 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000034473s
	[INFO] 10.244.0.3:49973 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000403s
	[INFO] 10.244.1.2:48527 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153637s
	[INFO] 10.244.1.2:55209 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106422s
	[INFO] 10.244.1.2:51699 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096153s
	[INFO] 10.244.1.2:37049 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067269s
	[INFO] 10.244.0.3:40716 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139894s
	[INFO] 10.244.0.3:60151 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000228445s
	[INFO] 10.244.0.3:38509 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000062757s
	[INFO] 10.244.0.3:34140 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000058042s
	[INFO] 10.244.1.2:48549 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124055s
	[INFO] 10.244.1.2:50145 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000085629s
	[INFO] 10.244.1.2:33962 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000070477s
	[INFO] 10.244.1.2:44280 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000197604s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
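	
	The two coredns sections above are the current instance (attempt 1) and the previous instance (attempt 0, which exited at shutdown) of the same pod. A sketch of how to pull both logs, assuming the kubeconfig context is named after the profile:
	
	  kubectl --context multinode-353869 -n kube-system logs coredns-7db6d8ff4d-v7jgc              # current instance
	  kubectl --context multinode-353869 -n kube-system logs coredns-7db6d8ff4d-v7jgc --previous   # instance from before the restart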
	
	
	==> describe nodes <==
	Name:               multinode-353869
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-353869
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6
	                    minikube.k8s.io/name=multinode-353869
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_17T11_28_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jun 2024 11:28:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-353869
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jun 2024 11:37:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jun 2024 11:34:11 +0000   Mon, 17 Jun 2024 11:28:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jun 2024 11:34:11 +0000   Mon, 17 Jun 2024 11:28:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jun 2024 11:34:11 +0000   Mon, 17 Jun 2024 11:28:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jun 2024 11:34:11 +0000   Mon, 17 Jun 2024 11:28:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    multinode-353869
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c1260748e9f44f50b943a7c29ebbe615
	  System UUID:                c1260748-e9f4-4f50-b943-a7c29ebbe615
	  Boot ID:                    02106cd4-ca66-467d-b16d-bcee11d84f85
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9q9xp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m45s
	  kube-system                 coredns-7db6d8ff4d-v7jgc                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m31s
	  kube-system                 etcd-multinode-353869                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m46s
	  kube-system                 kindnet-8b72m                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m32s
	  kube-system                 kube-apiserver-multinode-353869             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m47s
	  kube-system                 kube-controller-manager-multinode-353869    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m46s
	  kube-system                 kube-proxy-lh4bq                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m32s
	  kube-system                 kube-scheduler-multinode-353869             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m47s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m30s                  kube-proxy       
	  Normal  Starting                 3m37s                  kube-proxy       
	  Normal  Starting                 9m53s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m52s (x8 over 9m53s)  kubelet          Node multinode-353869 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m52s (x8 over 9m53s)  kubelet          Node multinode-353869 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m52s (x7 over 9m53s)  kubelet          Node multinode-353869 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    9m46s                  kubelet          Node multinode-353869 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  9m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m46s                  kubelet          Node multinode-353869 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     9m46s                  kubelet          Node multinode-353869 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m46s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m32s                  node-controller  Node multinode-353869 event: Registered Node multinode-353869 in Controller
	  Normal  NodeReady                9m28s                  kubelet          Node multinode-353869 status is now: NodeReady
	  Normal  Starting                 3m44s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m44s (x8 over 3m44s)  kubelet          Node multinode-353869 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m44s (x8 over 3m44s)  kubelet          Node multinode-353869 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m44s (x7 over 3m44s)  kubelet          Node multinode-353869 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m27s                  node-controller  Node multinode-353869 event: Registered Node multinode-353869 in Controller
	
	
	Name:               multinode-353869-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-353869-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6
	                    minikube.k8s.io/name=multinode-353869
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_17T11_34_53_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jun 2024 11:34:52 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-353869-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jun 2024 11:35:23 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 17 Jun 2024 11:35:23 +0000   Mon, 17 Jun 2024 11:36:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 17 Jun 2024 11:35:23 +0000   Mon, 17 Jun 2024 11:36:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 17 Jun 2024 11:35:23 +0000   Mon, 17 Jun 2024 11:36:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 17 Jun 2024 11:35:23 +0000   Mon, 17 Jun 2024 11:36:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.46
	  Hostname:    multinode-353869-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 31b0aa4474a1474bb215c773448e1c71
	  System UUID:                31b0aa44-74a1-474b-b215-c773448e1c71
	  Boot ID:                    efd768df-1f7d-4013-9de3-3660cdbd4baf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-gpwz7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	  kube-system                 kindnet-stgvs              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m56s
	  kube-system                 kube-proxy-sxh4c           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m55s                  kube-proxy       
	  Normal  Starting                 8m49s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m56s (x2 over 8m56s)  kubelet          Node multinode-353869-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m56s (x2 over 8m56s)  kubelet          Node multinode-353869-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m56s (x2 over 8m56s)  kubelet          Node multinode-353869-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m47s                  kubelet          Node multinode-353869-m02 status is now: NodeReady
	  Normal  Starting                 2m59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m59s (x2 over 2m59s)  kubelet          Node multinode-353869-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m59s (x2 over 2m59s)  kubelet          Node multinode-353869-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m59s (x2 over 2m59s)  kubelet          Node multinode-353869-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m53s                  kubelet          Node multinode-353869-m02 status is now: NodeReady
	  Normal  NodeNotReady             107s                   node-controller  Node multinode-353869-m02 status is now: NodeNotReady
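	
	The node descriptions above cover the control plane (multinode-353869, Ready) and the worker (multinode-353869-m02, NotReady after its kubelet stopped posting status). A minimal reproduction, assuming the kubeconfig context is named after the profile:
	
	  kubectl --context multinode-353869 describe node multinode-353869 multinode-353869-m02
	  kubectl --context multinode-353869 get nodes -o wide   # quick Ready/NotReady and node IP overview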
	
	
	==> dmesg <==
	[  +0.056587] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061400] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.171263] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.142452] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.269321] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.125245] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +4.050883] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +0.062824] kauditd_printk_skb: 158 callbacks suppressed
	[Jun17 11:28] systemd-fstab-generator[1283]: Ignoring "noauto" option for root device
	[  +0.078042] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.764929] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.811463] systemd-fstab-generator[1483]: Ignoring "noauto" option for root device
	[Jun17 11:29] kauditd_printk_skb: 82 callbacks suppressed
	[Jun17 11:33] systemd-fstab-generator[2799]: Ignoring "noauto" option for root device
	[  +0.139352] systemd-fstab-generator[2811]: Ignoring "noauto" option for root device
	[  +0.157664] systemd-fstab-generator[2825]: Ignoring "noauto" option for root device
	[  +0.141511] systemd-fstab-generator[2837]: Ignoring "noauto" option for root device
	[  +0.289575] systemd-fstab-generator[2865]: Ignoring "noauto" option for root device
	[Jun17 11:34] systemd-fstab-generator[2969]: Ignoring "noauto" option for root device
	[  +0.080243] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.616860] systemd-fstab-generator[3095]: Ignoring "noauto" option for root device
	[  +5.686652] kauditd_printk_skb: 74 callbacks suppressed
	[ +11.833679] kauditd_printk_skb: 32 callbacks suppressed
	[  +2.873293] systemd-fstab-generator[3906]: Ignoring "noauto" option for root device
	[ +19.055507] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [49d345565617207048c355eb2fd02d84dc1e79374c65582908fc5c31efb6ace2] <==
	{"level":"info","ts":"2024-06-17T11:34:09.034106Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-17T11:34:09.034115Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-17T11:34:09.034358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2212c0bfe49c9415 switched to configuration voters=(2455236677277094933)"}
	{"level":"info","ts":"2024-06-17T11:34:09.034435Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3ecd98d5111bce24","local-member-id":"2212c0bfe49c9415","added-peer-id":"2212c0bfe49c9415","added-peer-peer-urls":["https://192.168.39.17:2380"]}
	{"level":"info","ts":"2024-06-17T11:34:09.034577Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3ecd98d5111bce24","local-member-id":"2212c0bfe49c9415","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-17T11:34:09.034618Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-17T11:34:09.047493Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-17T11:34:09.047841Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"2212c0bfe49c9415","initial-advertise-peer-urls":["https://192.168.39.17:2380"],"listen-peer-urls":["https://192.168.39.17:2380"],"advertise-client-urls":["https://192.168.39.17:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.17:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-17T11:34:09.047895Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-17T11:34:09.048047Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.17:2380"}
	{"level":"info","ts":"2024-06-17T11:34:09.048074Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.17:2380"}
	{"level":"info","ts":"2024-06-17T11:34:10.598584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2212c0bfe49c9415 is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-17T11:34:10.598762Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2212c0bfe49c9415 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-17T11:34:10.598832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2212c0bfe49c9415 received MsgPreVoteResp from 2212c0bfe49c9415 at term 2"}
	{"level":"info","ts":"2024-06-17T11:34:10.598887Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2212c0bfe49c9415 became candidate at term 3"}
	{"level":"info","ts":"2024-06-17T11:34:10.598913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2212c0bfe49c9415 received MsgVoteResp from 2212c0bfe49c9415 at term 3"}
	{"level":"info","ts":"2024-06-17T11:34:10.598939Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2212c0bfe49c9415 became leader at term 3"}
	{"level":"info","ts":"2024-06-17T11:34:10.598968Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2212c0bfe49c9415 elected leader 2212c0bfe49c9415 at term 3"}
	{"level":"info","ts":"2024-06-17T11:34:10.607201Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"2212c0bfe49c9415","local-member-attributes":"{Name:multinode-353869 ClientURLs:[https://192.168.39.17:2379]}","request-path":"/0/members/2212c0bfe49c9415/attributes","cluster-id":"3ecd98d5111bce24","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-17T11:34:10.60722Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-17T11:34:10.607465Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-17T11:34:10.607499Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-17T11:34:10.607249Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-17T11:34:10.609797Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.17:2379"}
	{"level":"info","ts":"2024-06-17T11:34:10.609806Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [e2daedb04756afc271789d6e861aa2906d06a65ced85f3593810d3b7c83242b7] <==
	{"level":"info","ts":"2024-06-17T11:28:20.008574Z","caller":"traceutil/trace.go:171","msg":"trace[372394324] transaction","detail":"{read_only:false; response_revision:362; number_of_response:1; }","duration":"161.31718ms","start":"2024-06-17T11:28:19.847252Z","end":"2024-06-17T11:28:20.008569Z","steps":["trace[372394324] 'process raft request'  (duration: 161.021674ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T11:28:20.008681Z","caller":"traceutil/trace.go:171","msg":"trace[497822731] transaction","detail":"{read_only:false; response_revision:363; number_of_response:1; }","duration":"161.361752ms","start":"2024-06-17T11:28:19.847313Z","end":"2024-06-17T11:28:20.008675Z","steps":["trace[497822731] 'process raft request'  (duration: 160.97444ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-17T11:28:20.008754Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.591538ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/root-ca-cert-publisher\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2024-06-17T11:28:20.008871Z","caller":"traceutil/trace.go:171","msg":"trace[1714599716] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/root-ca-cert-publisher; range_end:; response_count:1; response_revision:365; }","duration":"161.752282ms","start":"2024-06-17T11:28:19.84711Z","end":"2024-06-17T11:28:20.008862Z","steps":["trace[1714599716] 'agreement among raft nodes before linearized reading'  (duration: 161.58823ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T11:28:20.009082Z","caller":"traceutil/trace.go:171","msg":"trace[884657230] transaction","detail":"{read_only:false; response_revision:364; number_of_response:1; }","duration":"136.112857ms","start":"2024-06-17T11:28:19.872961Z","end":"2024-06-17T11:28:20.009074Z","steps":["trace[884657230] 'process raft request'  (duration: 135.355776ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-17T11:28:20.00917Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.144241ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:612"}
	{"level":"info","ts":"2024-06-17T11:28:20.00919Z","caller":"traceutil/trace.go:171","msg":"trace[1061461294] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:365; }","duration":"125.179538ms","start":"2024-06-17T11:28:19.884004Z","end":"2024-06-17T11:28:20.009184Z","steps":["trace[1061461294] 'agreement among raft nodes before linearized reading'  (duration: 125.14839ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T11:28:20.0092Z","caller":"traceutil/trace.go:171","msg":"trace[1865179336] transaction","detail":"{read_only:false; response_revision:365; number_of_response:1; }","duration":"125.978459ms","start":"2024-06-17T11:28:19.883217Z","end":"2024-06-17T11:28:20.009195Z","steps":["trace[1865179336] 'process raft request'  (duration: 125.128546ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-17T11:28:20.008845Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.656752ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2024-06-17T11:28:20.009257Z","caller":"traceutil/trace.go:171","msg":"trace[1558316807] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:365; }","duration":"126.164242ms","start":"2024-06-17T11:28:19.883088Z","end":"2024-06-17T11:28:20.009253Z","steps":["trace[1558316807] 'agreement among raft nodes before linearized reading'  (duration: 125.659041ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T11:28:55.973733Z","caller":"traceutil/trace.go:171","msg":"trace[1039546806] transaction","detail":"{read_only:false; response_revision:489; number_of_response:1; }","duration":"187.932764ms","start":"2024-06-17T11:28:55.785787Z","end":"2024-06-17T11:28:55.97372Z","steps":["trace[1039546806] 'process raft request'  (duration: 187.891744ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T11:28:55.973951Z","caller":"traceutil/trace.go:171","msg":"trace[917262151] transaction","detail":"{read_only:false; response_revision:488; number_of_response:1; }","duration":"226.101513ms","start":"2024-06-17T11:28:55.747835Z","end":"2024-06-17T11:28:55.973937Z","steps":["trace[917262151] 'process raft request'  (duration: 186.785982ms)","trace[917262151] 'compare'  (duration: 38.758778ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-17T11:29:36.363285Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.799133ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10670593384809681442 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-353869-m03.17d9c737fbd6718c\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-353869-m03.17d9c737fbd6718c\" value_size:642 lease:1447221347954905411 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-06-17T11:29:36.363984Z","caller":"traceutil/trace.go:171","msg":"trace[1148999579] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"200.243669ms","start":"2024-06-17T11:29:36.163709Z","end":"2024-06-17T11:29:36.363952Z","steps":["trace[1148999579] 'process raft request'  (duration: 200.019907ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T11:29:36.364009Z","caller":"traceutil/trace.go:171","msg":"trace[1395434200] transaction","detail":"{read_only:false; response_revision:609; number_of_response:1; }","duration":"261.258051ms","start":"2024-06-17T11:29:36.102727Z","end":"2024-06-17T11:29:36.363985Z","steps":["trace[1395434200] 'process raft request'  (duration: 77.865863ms)","trace[1395434200] 'compare'  (duration: 181.662985ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-17T11:32:27.192466Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-06-17T11:32:27.192565Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-353869","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.17:2380"],"advertise-client-urls":["https://192.168.39.17:2379"]}
	{"level":"warn","ts":"2024-06-17T11:32:27.192787Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-17T11:32:27.192958Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-17T11:32:27.271586Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.17:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-17T11:32:27.271766Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.17:2379: use of closed network connection"}
	{"level":"info","ts":"2024-06-17T11:32:27.271854Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"2212c0bfe49c9415","current-leader-member-id":"2212c0bfe49c9415"}
	{"level":"info","ts":"2024-06-17T11:32:27.274447Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.17:2380"}
	{"level":"info","ts":"2024-06-17T11:32:27.274607Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.17:2380"}
	{"level":"info","ts":"2024-06-17T11:32:27.274684Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-353869","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.17:2380"],"advertise-client-urls":["https://192.168.39.17:2379"]}
	
	
	==> kernel <==
	 11:37:51 up 10 min,  0 users,  load average: 0.01, 0.13, 0.09
	Linux multinode-353869 5.10.207 #1 SMP Tue Jun 11 00:16:05 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [c99311a5f2af018094cefffa1d06ab60bd7f9c78720ef0903446410b62777ab1] <==
	I0617 11:36:44.535039       1 main.go:250] Node multinode-353869-m02 has CIDR [10.244.1.0/24] 
	I0617 11:36:54.548745       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0617 11:36:54.548868       1 main.go:227] handling current node
	I0617 11:36:54.548912       1 main.go:223] Handling node with IPs: map[192.168.39.46:{}]
	I0617 11:36:54.548962       1 main.go:250] Node multinode-353869-m02 has CIDR [10.244.1.0/24] 
	I0617 11:37:04.554548       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0617 11:37:04.554591       1 main.go:227] handling current node
	I0617 11:37:04.554606       1 main.go:223] Handling node with IPs: map[192.168.39.46:{}]
	I0617 11:37:04.554614       1 main.go:250] Node multinode-353869-m02 has CIDR [10.244.1.0/24] 
	I0617 11:37:14.559012       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0617 11:37:14.559052       1 main.go:227] handling current node
	I0617 11:37:14.559063       1 main.go:223] Handling node with IPs: map[192.168.39.46:{}]
	I0617 11:37:14.559068       1 main.go:250] Node multinode-353869-m02 has CIDR [10.244.1.0/24] 
	I0617 11:37:24.565489       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0617 11:37:24.565530       1 main.go:227] handling current node
	I0617 11:37:24.565550       1 main.go:223] Handling node with IPs: map[192.168.39.46:{}]
	I0617 11:37:24.565555       1 main.go:250] Node multinode-353869-m02 has CIDR [10.244.1.0/24] 
	I0617 11:37:34.576187       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0617 11:37:34.576357       1 main.go:227] handling current node
	I0617 11:37:34.576399       1 main.go:223] Handling node with IPs: map[192.168.39.46:{}]
	I0617 11:37:34.576419       1 main.go:250] Node multinode-353869-m02 has CIDR [10.244.1.0/24] 
	I0617 11:37:44.590858       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0617 11:37:44.591284       1 main.go:227] handling current node
	I0617 11:37:44.591356       1 main.go:223] Handling node with IPs: map[192.168.39.46:{}]
	I0617 11:37:44.591393       1 main.go:250] Node multinode-353869-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [f01b6f8d67c6a06c273316e91a016f1dda9bccd08a3b9f130e3fa18000e3f918] <==
	I0617 11:31:43.709534       1 main.go:250] Node multinode-353869-m03 has CIDR [10.244.3.0/24] 
	I0617 11:31:53.720068       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0617 11:31:53.720182       1 main.go:227] handling current node
	I0617 11:31:53.720207       1 main.go:223] Handling node with IPs: map[192.168.39.46:{}]
	I0617 11:31:53.720224       1 main.go:250] Node multinode-353869-m02 has CIDR [10.244.1.0/24] 
	I0617 11:31:53.720403       1 main.go:223] Handling node with IPs: map[192.168.39.138:{}]
	I0617 11:31:53.720428       1 main.go:250] Node multinode-353869-m03 has CIDR [10.244.3.0/24] 
	I0617 11:32:03.735273       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0617 11:32:03.735459       1 main.go:227] handling current node
	I0617 11:32:03.735492       1 main.go:223] Handling node with IPs: map[192.168.39.46:{}]
	I0617 11:32:03.735574       1 main.go:250] Node multinode-353869-m02 has CIDR [10.244.1.0/24] 
	I0617 11:32:03.735934       1 main.go:223] Handling node with IPs: map[192.168.39.138:{}]
	I0617 11:32:03.736026       1 main.go:250] Node multinode-353869-m03 has CIDR [10.244.3.0/24] 
	I0617 11:32:13.747500       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0617 11:32:13.747584       1 main.go:227] handling current node
	I0617 11:32:13.747609       1 main.go:223] Handling node with IPs: map[192.168.39.46:{}]
	I0617 11:32:13.747680       1 main.go:250] Node multinode-353869-m02 has CIDR [10.244.1.0/24] 
	I0617 11:32:13.747833       1 main.go:223] Handling node with IPs: map[192.168.39.138:{}]
	I0617 11:32:13.747857       1 main.go:250] Node multinode-353869-m03 has CIDR [10.244.3.0/24] 
	I0617 11:32:23.753222       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0617 11:32:23.753518       1 main.go:227] handling current node
	I0617 11:32:23.753681       1 main.go:223] Handling node with IPs: map[192.168.39.46:{}]
	I0617 11:32:23.753723       1 main.go:250] Node multinode-353869-m02 has CIDR [10.244.1.0/24] 
	I0617 11:32:23.753868       1 main.go:223] Handling node with IPs: map[192.168.39.138:{}]
	I0617 11:32:23.753899       1 main.go:250] Node multinode-353869-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [920ea6bfb6321ca417761a4aacfc34eca33f282901baef10e5ab4e211b318908] <==
	I0617 11:32:27.201895       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0617 11:32:27.201903       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0617 11:32:27.201908       1 apf_controller.go:386] Shutting down API Priority and Fairness config worker
	I0617 11:32:27.201914       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0617 11:32:27.201922       1 controller.go:129] Ending legacy_token_tracking_controller
	I0617 11:32:27.220138       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0617 11:32:27.201927       1 available_controller.go:439] Shutting down AvailableConditionController
	I0617 11:32:27.201937       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I0617 11:32:27.201943       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	W0617 11:32:27.220518       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0617 11:32:27.220580       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0617 11:32:27.220674       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0617 11:32:27.220705       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0617 11:32:27.220822       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0617 11:32:27.221095       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0617 11:32:27.223065       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0617 11:32:27.223180       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0617 11:32:27.223253       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0617 11:32:27.223304       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0617 11:32:27.223357       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0617 11:32:27.223403       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0617 11:32:27.223437       1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0617 11:32:27.223484       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0617 11:32:27.223538       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0617 11:32:27.224121       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [aa049dc2107d59ff0e82cf0a7a6b0a809afe251d9199dc55b0ba7a182e31ea78] <==
	I0617 11:34:11.872595       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0617 11:34:11.872830       1 aggregator.go:165] initial CRD sync complete...
	I0617 11:34:11.872861       1 autoregister_controller.go:141] Starting autoregister controller
	I0617 11:34:11.872883       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0617 11:34:11.824232       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0617 11:34:11.932024       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0617 11:34:11.932951       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0617 11:34:11.933426       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0617 11:34:11.933933       1 shared_informer.go:320] Caches are synced for configmaps
	I0617 11:34:11.934269       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0617 11:34:11.934299       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0617 11:34:11.939086       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0617 11:34:11.939238       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0617 11:34:11.943611       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0617 11:34:11.943688       1 policy_source.go:224] refreshing policies
	I0617 11:34:11.969500       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0617 11:34:11.974355       1 cache.go:39] Caches are synced for autoregister controller
	I0617 11:34:12.838339       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0617 11:34:14.236565       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0617 11:34:14.421016       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0617 11:34:14.444105       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0617 11:34:14.545552       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0617 11:34:14.558889       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0617 11:34:24.934527       1 controller.go:615] quota admission added evaluator for: endpoints
	I0617 11:34:25.084791       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [b5521b788f9e29eacd3cdb54d74dda1ede012f42edf74635592934f0b5fd94be] <==
	I0617 11:34:52.646030       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-353869-m02\" does not exist"
	I0617 11:34:52.657688       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-353869-m02" podCIDRs=["10.244.1.0/24"]
	I0617 11:34:53.528418       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.81µs"
	I0617 11:34:53.576429       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.293µs"
	I0617 11:34:53.588407       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.968µs"
	I0617 11:34:53.614905       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.269µs"
	I0617 11:34:53.622506       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.392µs"
	I0617 11:34:53.626359       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.032µs"
	I0617 11:34:58.678258       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-353869-m02"
	I0617 11:34:58.696009       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.918µs"
	I0617 11:34:58.707945       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.893µs"
	I0617 11:35:00.078145       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.347617ms"
	I0617 11:35:00.078318       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.749µs"
	I0617 11:35:17.046994       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-353869-m02"
	I0617 11:35:18.111060       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-353869-m02"
	I0617 11:35:18.112069       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-353869-m03\" does not exist"
	I0617 11:35:18.123136       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-353869-m03" podCIDRs=["10.244.2.0/24"]
	I0617 11:35:24.335519       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-353869-m02"
	I0617 11:35:29.793395       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-353869-m02"
	I0617 11:36:04.947925       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.606926ms"
	I0617 11:36:04.948031       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.664µs"
	I0617 11:36:44.796813       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-h9qzc"
	I0617 11:36:44.818535       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-h9qzc"
	I0617 11:36:44.818575       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-wjcx6"
	I0617 11:36:44.841349       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-wjcx6"
	
	
	==> kube-controller-manager [cf374fea65b02f5ed17deacbbfaa890808652f70898fb22613a2aada2d9d182d] <==
	I0617 11:28:24.448239       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.998µs"
	I0617 11:28:55.977716       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-353869-m02\" does not exist"
	I0617 11:28:55.999192       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-353869-m02" podCIDRs=["10.244.1.0/24"]
	I0617 11:28:59.189863       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-353869-m02"
	I0617 11:29:04.468855       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-353869-m02"
	I0617 11:29:06.687428       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.498756ms"
	I0617 11:29:06.714708       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.206472ms"
	I0617 11:29:06.714809       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.598µs"
	I0617 11:29:08.575226       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.784737ms"
	I0617 11:29:08.575319       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.576µs"
	I0617 11:29:08.782594       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.624527ms"
	I0617 11:29:08.783251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.134µs"
	I0617 11:29:36.368737       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-353869-m03\" does not exist"
	I0617 11:29:36.369410       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-353869-m02"
	I0617 11:29:36.381361       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-353869-m03" podCIDRs=["10.244.2.0/24"]
	I0617 11:29:39.208245       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-353869-m03"
	I0617 11:29:44.348893       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-353869-m03"
	I0617 11:30:12.916288       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-353869-m02"
	I0617 11:30:14.166187       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-353869-m03\" does not exist"
	I0617 11:30:14.167441       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-353869-m02"
	I0617 11:30:14.189594       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-353869-m03" podCIDRs=["10.244.3.0/24"]
	I0617 11:30:21.232407       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-353869-m02"
	I0617 11:30:59.260621       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-353869-m03"
	I0617 11:30:59.276725       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.629703ms"
	I0617 11:30:59.276950       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.624µs"
	
	
	==> kube-proxy [788f3e95f1389861634b7c167ecc4ed0481a5b23af544e031699d17b73670fc8] <==
	I0617 11:28:21.015327       1 server_linux.go:69] "Using iptables proxy"
	I0617 11:28:21.023537       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.17"]
	I0617 11:28:21.083716       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0617 11:28:21.083802       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0617 11:28:21.083821       1 server_linux.go:165] "Using iptables Proxier"
	I0617 11:28:21.088571       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0617 11:28:21.089068       1 server.go:872] "Version info" version="v1.30.1"
	I0617 11:28:21.089113       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0617 11:28:21.091156       1 config.go:192] "Starting service config controller"
	I0617 11:28:21.091205       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0617 11:28:21.091468       1 config.go:101] "Starting endpoint slice config controller"
	I0617 11:28:21.091497       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0617 11:28:21.092816       1 config.go:319] "Starting node config controller"
	I0617 11:28:21.092845       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0617 11:28:21.192154       1 shared_informer.go:320] Caches are synced for service config
	I0617 11:28:21.192154       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0617 11:28:21.193491       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [8e4df51e0870da34508fa6131d228646bb0e4b6f39ea875e4cfc0bab53523821] <==
	I0617 11:34:13.762808       1 server_linux.go:69] "Using iptables proxy"
	I0617 11:34:13.784880       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.17"]
	I0617 11:34:13.926267       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0617 11:34:13.927828       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0617 11:34:13.927956       1 server_linux.go:165] "Using iptables Proxier"
	I0617 11:34:13.933520       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0617 11:34:13.933782       1 server.go:872] "Version info" version="v1.30.1"
	I0617 11:34:13.933835       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0617 11:34:13.935165       1 config.go:192] "Starting service config controller"
	I0617 11:34:13.935203       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0617 11:34:13.935231       1 config.go:101] "Starting endpoint slice config controller"
	I0617 11:34:13.935252       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0617 11:34:13.935967       1 config.go:319] "Starting node config controller"
	I0617 11:34:13.935995       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0617 11:34:14.035724       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0617 11:34:14.035874       1 shared_informer.go:320] Caches are synced for service config
	I0617 11:34:14.036131       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5ab681386325c039d54197059416078c59182aa87b148cd254a9ab95e67be20e] <==
	W0617 11:28:02.623948       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0617 11:28:02.626216       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0617 11:28:03.442122       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0617 11:28:03.442173       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0617 11:28:03.453335       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0617 11:28:03.453378       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0617 11:28:03.526691       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0617 11:28:03.526744       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0617 11:28:03.594831       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0617 11:28:03.595826       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0617 11:28:03.617863       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0617 11:28:03.618598       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0617 11:28:03.630195       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0617 11:28:03.630308       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0617 11:28:03.675187       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0617 11:28:03.675357       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0617 11:28:03.696083       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0617 11:28:03.697089       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0617 11:28:03.819274       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0617 11:28:03.819361       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0617 11:28:03.874483       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0617 11:28:03.874529       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0617 11:28:06.013449       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0617 11:32:27.186961       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0617 11:32:27.187598       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d96338c1781a120bd164d0cf1ee12bf47c1e4614d990ecef15f2019ec1d01a74] <==
	I0617 11:34:09.188327       1 serving.go:380] Generated self-signed cert in-memory
	W0617 11:34:11.897125       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0617 11:34:11.897222       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0617 11:34:11.897232       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0617 11:34:11.897239       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0617 11:34:11.922277       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0617 11:34:11.922324       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0617 11:34:11.924139       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0617 11:34:11.924238       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0617 11:34:11.924213       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0617 11:34:11.924238       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0617 11:34:12.024805       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 17 11:34:12 multinode-353869 kubelet[3102]: I0617 11:34:12.825431    3102 topology_manager.go:215] "Topology Admit Handler" podUID="41dea9a1-1f60-4a87-b8c1-9b0ecc3742c7" podNamespace="kube-system" podName="storage-provisioner"
	Jun 17 11:34:12 multinode-353869 kubelet[3102]: I0617 11:34:12.825612    3102 topology_manager.go:215] "Topology Admit Handler" podUID="3b3438b1-3078-4c3d-918d-7ca302c631df" podNamespace="default" podName="busybox-fc5497c4f-9q9xp"
	Jun 17 11:34:12 multinode-353869 kubelet[3102]: I0617 11:34:12.840605    3102 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jun 17 11:34:12 multinode-353869 kubelet[3102]: I0617 11:34:12.910024    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad51975b-c6bc-4708-8988-004224379e4e-lib-modules\") pod \"kube-proxy-lh4bq\" (UID: \"ad51975b-c6bc-4708-8988-004224379e4e\") " pod="kube-system/kube-proxy-lh4bq"
	Jun 17 11:34:12 multinode-353869 kubelet[3102]: I0617 11:34:12.910204    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/41dea9a1-1f60-4a87-b8c1-9b0ecc3742c7-tmp\") pod \"storage-provisioner\" (UID: \"41dea9a1-1f60-4a87-b8c1-9b0ecc3742c7\") " pod="kube-system/storage-provisioner"
	Jun 17 11:34:12 multinode-353869 kubelet[3102]: I0617 11:34:12.910279    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f0e82fc8-8881-4fdd-9f8e-5677e69b8c3b-cni-cfg\") pod \"kindnet-8b72m\" (UID: \"f0e82fc8-8881-4fdd-9f8e-5677e69b8c3b\") " pod="kube-system/kindnet-8b72m"
	Jun 17 11:34:12 multinode-353869 kubelet[3102]: I0617 11:34:12.910346    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0e82fc8-8881-4fdd-9f8e-5677e69b8c3b-lib-modules\") pod \"kindnet-8b72m\" (UID: \"f0e82fc8-8881-4fdd-9f8e-5677e69b8c3b\") " pod="kube-system/kindnet-8b72m"
	Jun 17 11:34:12 multinode-353869 kubelet[3102]: I0617 11:34:12.910389    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f0e82fc8-8881-4fdd-9f8e-5677e69b8c3b-xtables-lock\") pod \"kindnet-8b72m\" (UID: \"f0e82fc8-8881-4fdd-9f8e-5677e69b8c3b\") " pod="kube-system/kindnet-8b72m"
	Jun 17 11:34:12 multinode-353869 kubelet[3102]: I0617 11:34:12.910429    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ad51975b-c6bc-4708-8988-004224379e4e-xtables-lock\") pod \"kube-proxy-lh4bq\" (UID: \"ad51975b-c6bc-4708-8988-004224379e4e\") " pod="kube-system/kube-proxy-lh4bq"
	Jun 17 11:34:21 multinode-353869 kubelet[3102]: I0617 11:34:21.943017    3102 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jun 17 11:35:07 multinode-353869 kubelet[3102]: E0617 11:35:07.875011    3102 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 17 11:35:07 multinode-353869 kubelet[3102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 17 11:35:07 multinode-353869 kubelet[3102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 17 11:35:07 multinode-353869 kubelet[3102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 11:35:07 multinode-353869 kubelet[3102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 17 11:36:07 multinode-353869 kubelet[3102]: E0617 11:36:07.875677    3102 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 17 11:36:07 multinode-353869 kubelet[3102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 17 11:36:07 multinode-353869 kubelet[3102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 17 11:36:07 multinode-353869 kubelet[3102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 11:36:07 multinode-353869 kubelet[3102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 17 11:37:07 multinode-353869 kubelet[3102]: E0617 11:37:07.877933    3102 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 17 11:37:07 multinode-353869 kubelet[3102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 17 11:37:07 multinode-353869 kubelet[3102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 17 11:37:07 multinode-353869 kubelet[3102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 11:37:07 multinode-353869 kubelet[3102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0617 11:37:50.487102  150586 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19084-112967/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-353869 -n multinode-353869
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-353869 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.35s)
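The post-mortem above could not include the last-start log because of the "bufio.Scanner: token too long" error shown in the stderr block: Go's bufio.Scanner rejects any single line longer than its buffer limit, which defaults to 64 KiB. A minimal sketch of the usual workaround, assuming a standalone reader for a file like lastStart.txt (not minikube's actual logs code), is to enlarge the scanner buffer before scanning:

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Hypothetical path; the failing run read
		// /home/jenkins/minikube-integration/19084-112967/.minikube/logs/lastStart.txt.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Raise the per-line limit from the 64 KiB default to 10 MiB so an
		// overlong line no longer fails with "token too long".
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err)
		}
	}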

                                                
                                    
x
+
TestPreload (168.47s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-392702 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0617 11:41:51.169513  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-392702 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m37.397291061s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-392702 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-392702 image pull gcr.io/k8s-minikube/busybox: (1.037670204s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-392702
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-392702: (7.282727392s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-392702 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0617 11:43:57.398039  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-392702 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (59.703537064s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-392702 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
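The assertion that fails here reduces to a substring check over the `image list` output: after the cluster is created with --preload=false, busybox is pulled, and the node is stopped and restarted, the pulled image should still appear in the list. A hypothetical, simplified sketch of that kind of check (not the actual preload_test.go code) could be:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Re-run the listing the test inspected; binary and profile name are
		// taken from the commands recorded above.
		out, err := exec.Command("out/minikube-linux-amd64",
			"-p", "test-preload-392702", "image", "list").CombinedOutput()
		if err != nil {
			fmt.Println("image list failed:", err)
			return
		}
		if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
			fmt.Printf("expected gcr.io/k8s-minikube/busybox in image list, got:\n%s\n", out)
			return
		}
		fmt.Println("busybox image survived the restart")
	}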
panic.go:626: *** TestPreload FAILED at 2024-06-17 11:44:07.36255744 +0000 UTC m=+3593.206115593
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-392702 -n test-preload-392702
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-392702 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-392702 logs -n 25: (1.087130906s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-353869 ssh -n                                                                 | multinode-353869     | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | multinode-353869-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-353869 ssh -n multinode-353869 sudo cat                                       | multinode-353869     | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | /home/docker/cp-test_multinode-353869-m03_multinode-353869.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-353869 cp multinode-353869-m03:/home/docker/cp-test.txt                       | multinode-353869     | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | multinode-353869-m02:/home/docker/cp-test_multinode-353869-m03_multinode-353869-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-353869 ssh -n                                                                 | multinode-353869     | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | multinode-353869-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-353869 ssh -n multinode-353869-m02 sudo cat                                   | multinode-353869     | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	|         | /home/docker/cp-test_multinode-353869-m03_multinode-353869-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-353869 node stop m03                                                          | multinode-353869     | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:29 UTC |
	| node    | multinode-353869 node start                                                             | multinode-353869     | jenkins | v1.33.1 | 17 Jun 24 11:29 UTC | 17 Jun 24 11:30 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-353869                                                                | multinode-353869     | jenkins | v1.33.1 | 17 Jun 24 11:30 UTC |                     |
	| stop    | -p multinode-353869                                                                     | multinode-353869     | jenkins | v1.33.1 | 17 Jun 24 11:30 UTC |                     |
	| start   | -p multinode-353869                                                                     | multinode-353869     | jenkins | v1.33.1 | 17 Jun 24 11:32 UTC | 17 Jun 24 11:35 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-353869                                                                | multinode-353869     | jenkins | v1.33.1 | 17 Jun 24 11:35 UTC |                     |
	| node    | multinode-353869 node delete                                                            | multinode-353869     | jenkins | v1.33.1 | 17 Jun 24 11:35 UTC | 17 Jun 24 11:35 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-353869 stop                                                                   | multinode-353869     | jenkins | v1.33.1 | 17 Jun 24 11:35 UTC |                     |
	| start   | -p multinode-353869                                                                     | multinode-353869     | jenkins | v1.33.1 | 17 Jun 24 11:37 UTC | 17 Jun 24 11:40 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-353869                                                                | multinode-353869     | jenkins | v1.33.1 | 17 Jun 24 11:40 UTC |                     |
	| start   | -p multinode-353869-m02                                                                 | multinode-353869-m02 | jenkins | v1.33.1 | 17 Jun 24 11:40 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-353869-m03                                                                 | multinode-353869-m03 | jenkins | v1.33.1 | 17 Jun 24 11:40 UTC | 17 Jun 24 11:41 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-353869                                                                 | multinode-353869     | jenkins | v1.33.1 | 17 Jun 24 11:41 UTC |                     |
	| delete  | -p multinode-353869-m03                                                                 | multinode-353869-m03 | jenkins | v1.33.1 | 17 Jun 24 11:41 UTC | 17 Jun 24 11:41 UTC |
	| delete  | -p multinode-353869                                                                     | multinode-353869     | jenkins | v1.33.1 | 17 Jun 24 11:41 UTC | 17 Jun 24 11:41 UTC |
	| start   | -p test-preload-392702                                                                  | test-preload-392702  | jenkins | v1.33.1 | 17 Jun 24 11:41 UTC | 17 Jun 24 11:42 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-392702 image pull                                                          | test-preload-392702  | jenkins | v1.33.1 | 17 Jun 24 11:42 UTC | 17 Jun 24 11:43 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-392702                                                                  | test-preload-392702  | jenkins | v1.33.1 | 17 Jun 24 11:43 UTC | 17 Jun 24 11:43 UTC |
	| start   | -p test-preload-392702                                                                  | test-preload-392702  | jenkins | v1.33.1 | 17 Jun 24 11:43 UTC | 17 Jun 24 11:44 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-392702 image list                                                          | test-preload-392702  | jenkins | v1.33.1 | 17 Jun 24 11:44 UTC | 17 Jun 24 11:44 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
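	# Sketch (not part of the captured audit log): the rows above record the TestPreload scenario;
	# assuming the profile name and flags exactly as shown in the table, the sequence can be
	# replayed by hand as:
	out/minikube-linux-amd64 start -p test-preload-392702 --memory=2200 --alsologtostderr \
	  --wait=true --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.24.4
	out/minikube-linux-amd64 -p test-preload-392702 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-392702
	out/minikube-linux-amd64 start -p test-preload-392702 --memory=2200 --alsologtostderr -v=1 \
	  --wait=true --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p test-preload-392702 image list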
	
	
	==> Last Start <==
	Log file created at: 2024/06/17 11:43:07
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0617 11:43:07.485564  152958 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:43:07.485838  152958 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:43:07.485851  152958 out.go:304] Setting ErrFile to fd 2...
	I0617 11:43:07.485856  152958 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:43:07.486014  152958 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:43:07.486497  152958 out.go:298] Setting JSON to false
	I0617 11:43:07.487364  152958 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":5134,"bootTime":1718619453,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0617 11:43:07.487415  152958 start.go:139] virtualization: kvm guest
	I0617 11:43:07.489733  152958 out.go:177] * [test-preload-392702] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0617 11:43:07.491129  152958 out.go:177]   - MINIKUBE_LOCATION=19084
	I0617 11:43:07.492199  152958 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 11:43:07.491165  152958 notify.go:220] Checking for updates...
	I0617 11:43:07.494392  152958 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 11:43:07.495561  152958 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 11:43:07.496761  152958 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0617 11:43:07.498122  152958 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 11:43:07.499785  152958 config.go:182] Loaded profile config "test-preload-392702": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0617 11:43:07.500432  152958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:43:07.500494  152958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:43:07.514957  152958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36039
	I0617 11:43:07.515371  152958 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:43:07.515920  152958 main.go:141] libmachine: Using API Version  1
	I0617 11:43:07.515960  152958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:43:07.516246  152958 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:43:07.516422  152958 main.go:141] libmachine: (test-preload-392702) Calling .DriverName
	I0617 11:43:07.518107  152958 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0617 11:43:07.519034  152958 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 11:43:07.519301  152958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:43:07.519337  152958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:43:07.533250  152958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37855
	I0617 11:43:07.533600  152958 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:43:07.533944  152958 main.go:141] libmachine: Using API Version  1
	I0617 11:43:07.533964  152958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:43:07.534270  152958 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:43:07.534445  152958 main.go:141] libmachine: (test-preload-392702) Calling .DriverName
	I0617 11:43:07.566540  152958 out.go:177] * Using the kvm2 driver based on existing profile
	I0617 11:43:07.567735  152958 start.go:297] selected driver: kvm2
	I0617 11:43:07.567745  152958 start.go:901] validating driver "kvm2" against &{Name:test-preload-392702 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.24.4 ClusterName:test-preload-392702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:43:07.567841  152958 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 11:43:07.568472  152958 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:43:07.568531  152958 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19084-112967/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0617 11:43:07.582022  152958 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0617 11:43:07.582324  152958 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 11:43:07.582376  152958 cni.go:84] Creating CNI manager for ""
	I0617 11:43:07.582389  152958 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 11:43:07.582436  152958 start.go:340] cluster config:
	{Name:test-preload-392702 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-392702 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:43:07.582520  152958 iso.go:125] acquiring lock: {Name:mk4a199ad46ed9ee04de7b54caf7cc64218fe80c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:43:07.584710  152958 out.go:177] * Starting "test-preload-392702" primary control-plane node in "test-preload-392702" cluster
	I0617 11:43:07.585815  152958 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0617 11:43:07.614751  152958 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0617 11:43:07.614773  152958 cache.go:56] Caching tarball of preloaded images
	I0617 11:43:07.614900  152958 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0617 11:43:07.616489  152958 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0617 11:43:07.617692  152958 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0617 11:43:07.641880  152958 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0617 11:43:10.785581  152958 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0617 11:43:10.785706  152958 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0617 11:43:11.634325  152958 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
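	# Sketch (not captured output): the preload tarball fetched above can be re-downloaded and
	# checked by hand, using the URL and md5 checksum taken from the download line above.
	curl -fLo preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 \
	  https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	echo "b2ee0ab83ed99f9e7ff71cb0cf27e8f9  preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4" | md5sum -c -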
	I0617 11:43:11.634476  152958 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/test-preload-392702/config.json ...
	I0617 11:43:11.634718  152958 start.go:360] acquireMachinesLock for test-preload-392702: {Name:mk519b8956d160a9d2b042f25b899a5ee0efa72e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 11:43:11.634811  152958 start.go:364] duration metric: took 62.496µs to acquireMachinesLock for "test-preload-392702"
	I0617 11:43:11.634830  152958 start.go:96] Skipping create...Using existing machine configuration
	I0617 11:43:11.634839  152958 fix.go:54] fixHost starting: 
	I0617 11:43:11.635183  152958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:43:11.635233  152958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:43:11.649763  152958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33615
	I0617 11:43:11.650203  152958 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:43:11.650615  152958 main.go:141] libmachine: Using API Version  1
	I0617 11:43:11.650638  152958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:43:11.650955  152958 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:43:11.651161  152958 main.go:141] libmachine: (test-preload-392702) Calling .DriverName
	I0617 11:43:11.651309  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetState
	I0617 11:43:11.652933  152958 fix.go:112] recreateIfNeeded on test-preload-392702: state=Stopped err=<nil>
	I0617 11:43:11.652966  152958 main.go:141] libmachine: (test-preload-392702) Calling .DriverName
	W0617 11:43:11.653134  152958 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 11:43:11.655904  152958 out.go:177] * Restarting existing kvm2 VM for "test-preload-392702" ...
	I0617 11:43:11.657104  152958 main.go:141] libmachine: (test-preload-392702) Calling .Start
	I0617 11:43:11.657262  152958 main.go:141] libmachine: (test-preload-392702) Ensuring networks are active...
	I0617 11:43:11.658013  152958 main.go:141] libmachine: (test-preload-392702) Ensuring network default is active
	I0617 11:43:11.658317  152958 main.go:141] libmachine: (test-preload-392702) Ensuring network mk-test-preload-392702 is active
	I0617 11:43:11.658639  152958 main.go:141] libmachine: (test-preload-392702) Getting domain xml...
	I0617 11:43:11.659313  152958 main.go:141] libmachine: (test-preload-392702) Creating domain...
	I0617 11:43:12.826026  152958 main.go:141] libmachine: (test-preload-392702) Waiting to get IP...
	I0617 11:43:12.826836  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:12.827195  152958 main.go:141] libmachine: (test-preload-392702) DBG | unable to find current IP address of domain test-preload-392702 in network mk-test-preload-392702
	I0617 11:43:12.827282  152958 main.go:141] libmachine: (test-preload-392702) DBG | I0617 11:43:12.827176  153009 retry.go:31] will retry after 229.151553ms: waiting for machine to come up
	I0617 11:43:13.057709  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:13.058147  152958 main.go:141] libmachine: (test-preload-392702) DBG | unable to find current IP address of domain test-preload-392702 in network mk-test-preload-392702
	I0617 11:43:13.058177  152958 main.go:141] libmachine: (test-preload-392702) DBG | I0617 11:43:13.058110  153009 retry.go:31] will retry after 356.430557ms: waiting for machine to come up
	I0617 11:43:13.415688  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:13.416113  152958 main.go:141] libmachine: (test-preload-392702) DBG | unable to find current IP address of domain test-preload-392702 in network mk-test-preload-392702
	I0617 11:43:13.416142  152958 main.go:141] libmachine: (test-preload-392702) DBG | I0617 11:43:13.416051  153009 retry.go:31] will retry after 301.931879ms: waiting for machine to come up
	I0617 11:43:13.719585  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:13.720014  152958 main.go:141] libmachine: (test-preload-392702) DBG | unable to find current IP address of domain test-preload-392702 in network mk-test-preload-392702
	I0617 11:43:13.720038  152958 main.go:141] libmachine: (test-preload-392702) DBG | I0617 11:43:13.719961  153009 retry.go:31] will retry after 503.506541ms: waiting for machine to come up
	I0617 11:43:14.224497  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:14.224891  152958 main.go:141] libmachine: (test-preload-392702) DBG | unable to find current IP address of domain test-preload-392702 in network mk-test-preload-392702
	I0617 11:43:14.224919  152958 main.go:141] libmachine: (test-preload-392702) DBG | I0617 11:43:14.224840  153009 retry.go:31] will retry after 539.440218ms: waiting for machine to come up
	I0617 11:43:14.765488  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:14.765932  152958 main.go:141] libmachine: (test-preload-392702) DBG | unable to find current IP address of domain test-preload-392702 in network mk-test-preload-392702
	I0617 11:43:14.765963  152958 main.go:141] libmachine: (test-preload-392702) DBG | I0617 11:43:14.765881  153009 retry.go:31] will retry after 793.987496ms: waiting for machine to come up
	I0617 11:43:15.561643  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:15.562055  152958 main.go:141] libmachine: (test-preload-392702) DBG | unable to find current IP address of domain test-preload-392702 in network mk-test-preload-392702
	I0617 11:43:15.562093  152958 main.go:141] libmachine: (test-preload-392702) DBG | I0617 11:43:15.562009  153009 retry.go:31] will retry after 1.025195914s: waiting for machine to come up
	I0617 11:43:16.589209  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:16.589621  152958 main.go:141] libmachine: (test-preload-392702) DBG | unable to find current IP address of domain test-preload-392702 in network mk-test-preload-392702
	I0617 11:43:16.589648  152958 main.go:141] libmachine: (test-preload-392702) DBG | I0617 11:43:16.589572  153009 retry.go:31] will retry after 1.216853114s: waiting for machine to come up
	I0617 11:43:17.808274  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:17.808720  152958 main.go:141] libmachine: (test-preload-392702) DBG | unable to find current IP address of domain test-preload-392702 in network mk-test-preload-392702
	I0617 11:43:17.808748  152958 main.go:141] libmachine: (test-preload-392702) DBG | I0617 11:43:17.808677  153009 retry.go:31] will retry after 1.267688321s: waiting for machine to come up
	I0617 11:43:19.078031  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:19.078399  152958 main.go:141] libmachine: (test-preload-392702) DBG | unable to find current IP address of domain test-preload-392702 in network mk-test-preload-392702
	I0617 11:43:19.078425  152958 main.go:141] libmachine: (test-preload-392702) DBG | I0617 11:43:19.078351  153009 retry.go:31] will retry after 2.083357824s: waiting for machine to come up
	I0617 11:43:21.164669  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:21.165192  152958 main.go:141] libmachine: (test-preload-392702) DBG | unable to find current IP address of domain test-preload-392702 in network mk-test-preload-392702
	I0617 11:43:21.165219  152958 main.go:141] libmachine: (test-preload-392702) DBG | I0617 11:43:21.165157  153009 retry.go:31] will retry after 2.84660292s: waiting for machine to come up
	I0617 11:43:24.014348  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:24.014800  152958 main.go:141] libmachine: (test-preload-392702) DBG | unable to find current IP address of domain test-preload-392702 in network mk-test-preload-392702
	I0617 11:43:24.014832  152958 main.go:141] libmachine: (test-preload-392702) DBG | I0617 11:43:24.014765  153009 retry.go:31] will retry after 2.523986254s: waiting for machine to come up
	I0617 11:43:26.541383  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:26.541741  152958 main.go:141] libmachine: (test-preload-392702) DBG | unable to find current IP address of domain test-preload-392702 in network mk-test-preload-392702
	I0617 11:43:26.541809  152958 main.go:141] libmachine: (test-preload-392702) DBG | I0617 11:43:26.541720  153009 retry.go:31] will retry after 2.765018266s: waiting for machine to come up
	I0617 11:43:29.309887  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:29.310432  152958 main.go:141] libmachine: (test-preload-392702) Found IP for machine: 192.168.39.217
	I0617 11:43:29.310463  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has current primary IP address 192.168.39.217 and MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:29.310474  152958 main.go:141] libmachine: (test-preload-392702) Reserving static IP address...
	I0617 11:43:29.310837  152958 main.go:141] libmachine: (test-preload-392702) Reserved static IP address: 192.168.39.217
	I0617 11:43:29.310866  152958 main.go:141] libmachine: (test-preload-392702) DBG | found host DHCP lease matching {name: "test-preload-392702", mac: "52:54:00:ba:ab:36", ip: "192.168.39.217"} in network mk-test-preload-392702: {Iface:virbr1 ExpiryTime:2024-06-17 12:41:36 +0000 UTC Type:0 Mac:52:54:00:ba:ab:36 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-392702 Clientid:01:52:54:00:ba:ab:36}
	I0617 11:43:29.310875  152958 main.go:141] libmachine: (test-preload-392702) Waiting for SSH to be available...
	I0617 11:43:29.310894  152958 main.go:141] libmachine: (test-preload-392702) DBG | skip adding static IP to network mk-test-preload-392702 - found existing host DHCP lease matching {name: "test-preload-392702", mac: "52:54:00:ba:ab:36", ip: "192.168.39.217"}
	I0617 11:43:29.310908  152958 main.go:141] libmachine: (test-preload-392702) DBG | Getting to WaitForSSH function...
	I0617 11:43:29.312910  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:29.313257  152958 main.go:141] libmachine: (test-preload-392702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:ab:36", ip: ""} in network mk-test-preload-392702: {Iface:virbr1 ExpiryTime:2024-06-17 12:41:36 +0000 UTC Type:0 Mac:52:54:00:ba:ab:36 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-392702 Clientid:01:52:54:00:ba:ab:36}
	I0617 11:43:29.313292  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined IP address 192.168.39.217 and MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:29.313377  152958 main.go:141] libmachine: (test-preload-392702) DBG | Using SSH client type: external
	I0617 11:43:29.313394  152958 main.go:141] libmachine: (test-preload-392702) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/test-preload-392702/id_rsa (-rw-------)
	I0617 11:43:29.313426  152958 main.go:141] libmachine: (test-preload-392702) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.217 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/test-preload-392702/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 11:43:29.313440  152958 main.go:141] libmachine: (test-preload-392702) DBG | About to run SSH command:
	I0617 11:43:29.313455  152958 main.go:141] libmachine: (test-preload-392702) DBG | exit 0
	I0617 11:43:29.435229  152958 main.go:141] libmachine: (test-preload-392702) DBG | SSH cmd err, output: <nil>: 
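	# Sketch (not captured output): SSH reachability of the restarted VM can be rechecked manually
	# with the key and a subset of the options shown in the external SSH invocation above.
	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=10 \
	  -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/test-preload-392702/id_rsa \
	  docker@192.168.39.217 'exit 0' && echo 'ssh ok'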
	I0617 11:43:29.435571  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetConfigRaw
	I0617 11:43:29.436396  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetIP
	I0617 11:43:29.438889  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:29.439201  152958 main.go:141] libmachine: (test-preload-392702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:ab:36", ip: ""} in network mk-test-preload-392702: {Iface:virbr1 ExpiryTime:2024-06-17 12:41:36 +0000 UTC Type:0 Mac:52:54:00:ba:ab:36 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-392702 Clientid:01:52:54:00:ba:ab:36}
	I0617 11:43:29.439239  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined IP address 192.168.39.217 and MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:29.439598  152958 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/test-preload-392702/config.json ...
	I0617 11:43:29.439837  152958 machine.go:94] provisionDockerMachine start ...
	I0617 11:43:29.439859  152958 main.go:141] libmachine: (test-preload-392702) Calling .DriverName
	I0617 11:43:29.440086  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHHostname
	I0617 11:43:29.442393  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:29.442686  152958 main.go:141] libmachine: (test-preload-392702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:ab:36", ip: ""} in network mk-test-preload-392702: {Iface:virbr1 ExpiryTime:2024-06-17 12:41:36 +0000 UTC Type:0 Mac:52:54:00:ba:ab:36 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-392702 Clientid:01:52:54:00:ba:ab:36}
	I0617 11:43:29.442710  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined IP address 192.168.39.217 and MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:29.442888  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHPort
	I0617 11:43:29.443093  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHKeyPath
	I0617 11:43:29.443222  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHKeyPath
	I0617 11:43:29.443361  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHUsername
	I0617 11:43:29.443524  152958 main.go:141] libmachine: Using SSH client type: native
	I0617 11:43:29.443747  152958 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0617 11:43:29.443760  152958 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 11:43:29.548104  152958 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0617 11:43:29.548138  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetMachineName
	I0617 11:43:29.548416  152958 buildroot.go:166] provisioning hostname "test-preload-392702"
	I0617 11:43:29.548466  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetMachineName
	I0617 11:43:29.548652  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHHostname
	I0617 11:43:29.551122  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:29.551484  152958 main.go:141] libmachine: (test-preload-392702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:ab:36", ip: ""} in network mk-test-preload-392702: {Iface:virbr1 ExpiryTime:2024-06-17 12:41:36 +0000 UTC Type:0 Mac:52:54:00:ba:ab:36 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-392702 Clientid:01:52:54:00:ba:ab:36}
	I0617 11:43:29.551516  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined IP address 192.168.39.217 and MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:29.551667  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHPort
	I0617 11:43:29.551861  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHKeyPath
	I0617 11:43:29.552024  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHKeyPath
	I0617 11:43:29.552190  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHUsername
	I0617 11:43:29.552464  152958 main.go:141] libmachine: Using SSH client type: native
	I0617 11:43:29.552675  152958 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0617 11:43:29.552689  152958 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-392702 && echo "test-preload-392702" | sudo tee /etc/hostname
	I0617 11:43:29.674519  152958 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-392702
	
	I0617 11:43:29.674561  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHHostname
	I0617 11:43:29.677120  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:29.677477  152958 main.go:141] libmachine: (test-preload-392702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:ab:36", ip: ""} in network mk-test-preload-392702: {Iface:virbr1 ExpiryTime:2024-06-17 12:41:36 +0000 UTC Type:0 Mac:52:54:00:ba:ab:36 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-392702 Clientid:01:52:54:00:ba:ab:36}
	I0617 11:43:29.677514  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined IP address 192.168.39.217 and MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:29.677636  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHPort
	I0617 11:43:29.677940  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHKeyPath
	I0617 11:43:29.678126  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHKeyPath
	I0617 11:43:29.678260  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHUsername
	I0617 11:43:29.678437  152958 main.go:141] libmachine: Using SSH client type: native
	I0617 11:43:29.678605  152958 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0617 11:43:29.678621  152958 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-392702' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-392702/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-392702' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 11:43:29.793115  152958 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 11:43:29.793157  152958 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 11:43:29.793187  152958 buildroot.go:174] setting up certificates
	I0617 11:43:29.793201  152958 provision.go:84] configureAuth start
	I0617 11:43:29.793215  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetMachineName
	I0617 11:43:29.793516  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetIP
	I0617 11:43:29.796086  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:29.796454  152958 main.go:141] libmachine: (test-preload-392702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:ab:36", ip: ""} in network mk-test-preload-392702: {Iface:virbr1 ExpiryTime:2024-06-17 12:41:36 +0000 UTC Type:0 Mac:52:54:00:ba:ab:36 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-392702 Clientid:01:52:54:00:ba:ab:36}
	I0617 11:43:29.796488  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined IP address 192.168.39.217 and MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:29.796603  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHHostname
	I0617 11:43:29.798737  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:29.799138  152958 main.go:141] libmachine: (test-preload-392702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:ab:36", ip: ""} in network mk-test-preload-392702: {Iface:virbr1 ExpiryTime:2024-06-17 12:41:36 +0000 UTC Type:0 Mac:52:54:00:ba:ab:36 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-392702 Clientid:01:52:54:00:ba:ab:36}
	I0617 11:43:29.799166  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined IP address 192.168.39.217 and MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:29.799299  152958 provision.go:143] copyHostCerts
	I0617 11:43:29.799375  152958 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 11:43:29.799390  152958 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 11:43:29.799494  152958 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 11:43:29.799605  152958 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 11:43:29.799617  152958 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 11:43:29.799651  152958 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 11:43:29.799723  152958 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 11:43:29.799732  152958 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 11:43:29.799763  152958 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 11:43:29.799825  152958 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.test-preload-392702 san=[127.0.0.1 192.168.39.217 localhost minikube test-preload-392702]
	I0617 11:43:29.971511  152958 provision.go:177] copyRemoteCerts
	I0617 11:43:29.971568  152958 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 11:43:29.971597  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHHostname
	I0617 11:43:29.974059  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:29.974366  152958 main.go:141] libmachine: (test-preload-392702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:ab:36", ip: ""} in network mk-test-preload-392702: {Iface:virbr1 ExpiryTime:2024-06-17 12:41:36 +0000 UTC Type:0 Mac:52:54:00:ba:ab:36 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-392702 Clientid:01:52:54:00:ba:ab:36}
	I0617 11:43:29.974402  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined IP address 192.168.39.217 and MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:29.974611  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHPort
	I0617 11:43:29.974777  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHKeyPath
	I0617 11:43:29.974893  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHUsername
	I0617 11:43:29.974993  152958 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/test-preload-392702/id_rsa Username:docker}
	I0617 11:43:30.057616  152958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0617 11:43:30.082080  152958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 11:43:30.105144  152958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
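	# Sketch (not captured output): the certificates copied above land under /etc/docker on the VM
	# and are generated from the host-side server.pem; they can be spot-checked with e.g.:
	out/minikube-linux-amd64 -p test-preload-392702 ssh "sudo ls -l /etc/docker/"
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'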
	I0617 11:43:30.128333  152958 provision.go:87] duration metric: took 335.111428ms to configureAuth
	I0617 11:43:30.128363  152958 buildroot.go:189] setting minikube options for container-runtime
	I0617 11:43:30.128529  152958 config.go:182] Loaded profile config "test-preload-392702": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0617 11:43:30.128598  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHHostname
	I0617 11:43:30.131284  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:30.131651  152958 main.go:141] libmachine: (test-preload-392702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:ab:36", ip: ""} in network mk-test-preload-392702: {Iface:virbr1 ExpiryTime:2024-06-17 12:41:36 +0000 UTC Type:0 Mac:52:54:00:ba:ab:36 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-392702 Clientid:01:52:54:00:ba:ab:36}
	I0617 11:43:30.131675  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined IP address 192.168.39.217 and MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:30.131891  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHPort
	I0617 11:43:30.132217  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHKeyPath
	I0617 11:43:30.132370  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHKeyPath
	I0617 11:43:30.132474  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHUsername
	I0617 11:43:30.132598  152958 main.go:141] libmachine: Using SSH client type: native
	I0617 11:43:30.132774  152958 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0617 11:43:30.132796  152958 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 11:43:30.419243  152958 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 11:43:30.419270  152958 machine.go:97] duration metric: took 979.418253ms to provisionDockerMachine
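	# Note (not captured output): the %!s(MISSING) tokens in the provisioning command above are Go
	# fmt artifacts (a %s verb printed without its argument); judging by the file content echoed
	# back in the SSH output, that step effectively ran:
	sudo mkdir -p /etc/sysconfig && \
	  printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube && \
	  sudo systemctl restart crio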
	I0617 11:43:30.419282  152958 start.go:293] postStartSetup for "test-preload-392702" (driver="kvm2")
	I0617 11:43:30.419292  152958 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 11:43:30.419309  152958 main.go:141] libmachine: (test-preload-392702) Calling .DriverName
	I0617 11:43:30.419645  152958 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 11:43:30.419669  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHHostname
	I0617 11:43:30.422149  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:30.422594  152958 main.go:141] libmachine: (test-preload-392702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:ab:36", ip: ""} in network mk-test-preload-392702: {Iface:virbr1 ExpiryTime:2024-06-17 12:41:36 +0000 UTC Type:0 Mac:52:54:00:ba:ab:36 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-392702 Clientid:01:52:54:00:ba:ab:36}
	I0617 11:43:30.422626  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined IP address 192.168.39.217 and MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:30.422729  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHPort
	I0617 11:43:30.422932  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHKeyPath
	I0617 11:43:30.423104  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHUsername
	I0617 11:43:30.423260  152958 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/test-preload-392702/id_rsa Username:docker}
	I0617 11:43:30.505952  152958 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 11:43:30.510382  152958 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 11:43:30.510404  152958 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 11:43:30.510465  152958 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 11:43:30.510533  152958 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 11:43:30.510615  152958 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 11:43:30.519974  152958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 11:43:30.546269  152958 start.go:296] duration metric: took 126.972504ms for postStartSetup
	I0617 11:43:30.546310  152958 fix.go:56] duration metric: took 18.911472922s for fixHost
	I0617 11:43:30.546331  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHHostname
	I0617 11:43:30.548656  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:30.548991  152958 main.go:141] libmachine: (test-preload-392702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:ab:36", ip: ""} in network mk-test-preload-392702: {Iface:virbr1 ExpiryTime:2024-06-17 12:41:36 +0000 UTC Type:0 Mac:52:54:00:ba:ab:36 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-392702 Clientid:01:52:54:00:ba:ab:36}
	I0617 11:43:30.549030  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined IP address 192.168.39.217 and MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:30.549138  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHPort
	I0617 11:43:30.549342  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHKeyPath
	I0617 11:43:30.549480  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHKeyPath
	I0617 11:43:30.549608  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHUsername
	I0617 11:43:30.549731  152958 main.go:141] libmachine: Using SSH client type: native
	I0617 11:43:30.549912  152958 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0617 11:43:30.549924  152958 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0617 11:43:30.656283  152958 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718624610.631029516
	
	I0617 11:43:30.656307  152958 fix.go:216] guest clock: 1718624610.631029516
	I0617 11:43:30.656317  152958 fix.go:229] Guest: 2024-06-17 11:43:30.631029516 +0000 UTC Remote: 2024-06-17 11:43:30.546313902 +0000 UTC m=+23.094882355 (delta=84.715614ms)
	I0617 11:43:30.656360  152958 fix.go:200] guest clock delta is within tolerance: 84.715614ms
	I0617 11:43:30.656370  152958 start.go:83] releasing machines lock for "test-preload-392702", held for 19.021548082s
	I0617 11:43:30.656399  152958 main.go:141] libmachine: (test-preload-392702) Calling .DriverName
	I0617 11:43:30.656696  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetIP
	I0617 11:43:30.659202  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:30.659595  152958 main.go:141] libmachine: (test-preload-392702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:ab:36", ip: ""} in network mk-test-preload-392702: {Iface:virbr1 ExpiryTime:2024-06-17 12:41:36 +0000 UTC Type:0 Mac:52:54:00:ba:ab:36 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-392702 Clientid:01:52:54:00:ba:ab:36}
	I0617 11:43:30.659630  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined IP address 192.168.39.217 and MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:30.659745  152958 main.go:141] libmachine: (test-preload-392702) Calling .DriverName
	I0617 11:43:30.660208  152958 main.go:141] libmachine: (test-preload-392702) Calling .DriverName
	I0617 11:43:30.660368  152958 main.go:141] libmachine: (test-preload-392702) Calling .DriverName
	I0617 11:43:30.660468  152958 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 11:43:30.660523  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHHostname
	I0617 11:43:30.660598  152958 ssh_runner.go:195] Run: cat /version.json
	I0617 11:43:30.660623  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHHostname
	I0617 11:43:30.663012  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:30.663313  152958 main.go:141] libmachine: (test-preload-392702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:ab:36", ip: ""} in network mk-test-preload-392702: {Iface:virbr1 ExpiryTime:2024-06-17 12:41:36 +0000 UTC Type:0 Mac:52:54:00:ba:ab:36 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-392702 Clientid:01:52:54:00:ba:ab:36}
	I0617 11:43:30.663343  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined IP address 192.168.39.217 and MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:30.663361  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:30.663527  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHPort
	I0617 11:43:30.663712  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHKeyPath
	I0617 11:43:30.663780  152958 main.go:141] libmachine: (test-preload-392702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:ab:36", ip: ""} in network mk-test-preload-392702: {Iface:virbr1 ExpiryTime:2024-06-17 12:41:36 +0000 UTC Type:0 Mac:52:54:00:ba:ab:36 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-392702 Clientid:01:52:54:00:ba:ab:36}
	I0617 11:43:30.663806  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined IP address 192.168.39.217 and MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:30.663886  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHUsername
	I0617 11:43:30.663946  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHPort
	I0617 11:43:30.664045  152958 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/test-preload-392702/id_rsa Username:docker}
	I0617 11:43:30.664072  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHKeyPath
	I0617 11:43:30.664171  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHUsername
	I0617 11:43:30.664308  152958 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/test-preload-392702/id_rsa Username:docker}
	I0617 11:43:30.741148  152958 ssh_runner.go:195] Run: systemctl --version
	I0617 11:43:30.766127  152958 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 11:43:30.909462  152958 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 11:43:30.916425  152958 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 11:43:30.916494  152958 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 11:43:30.932404  152958 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 11:43:30.932433  152958 start.go:494] detecting cgroup driver to use...
	I0617 11:43:30.932504  152958 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 11:43:30.948305  152958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 11:43:30.961692  152958 docker.go:217] disabling cri-docker service (if available) ...
	I0617 11:43:30.961772  152958 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 11:43:30.974881  152958 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 11:43:30.987901  152958 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 11:43:31.111886  152958 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 11:43:31.246400  152958 docker.go:233] disabling docker service ...
	I0617 11:43:31.246464  152958 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 11:43:31.260682  152958 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 11:43:31.273117  152958 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 11:43:31.412737  152958 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 11:43:31.527570  152958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
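
The cri-docker and docker units are taken out of the picture with the same stop/disable/mask sequence before CRI-O is configured. A rough sketch of that pattern, assuming plain local exec calls in place of minikube's ssh_runner:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run executes a command and only reports the error; "stop" on an already
    // inactive unit is expected to fail and is fine to ignore.
    func run(name string, args ...string) {
        if err := exec.Command(name, args...).Run(); err != nil {
            fmt.Println(name, args, "->", err)
        }
    }

    // disableRuntime mirrors the sequence in the log: stop the socket and service,
    // disable the socket, and mask the service so systemd cannot restart it.
    func disableRuntime(name string) {
        run("sudo", "systemctl", "stop", "-f", name+".socket")
        run("sudo", "systemctl", "stop", "-f", name+".service")
        run("sudo", "systemctl", "disable", name+".socket")
        run("sudo", "systemctl", "mask", name+".service")
    }

    func main() {
        disableRuntime("cri-docker")
        disableRuntime("docker")
    }
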
	I0617 11:43:31.540589  152958 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 11:43:31.558228  152958 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0617 11:43:31.558295  152958 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:43:31.568058  152958 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 11:43:31.568110  152958 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:43:31.577884  152958 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:43:31.587471  152958 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:43:31.597093  152958 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 11:43:31.606960  152958 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:43:31.616797  152958 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:43:31.633251  152958 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
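
The sed invocations above pin the pause image and force the cgroupfs cgroup manager by rewriting matching lines in /etc/crio/crio.conf.d/02-crio.conf. A small in-memory sketch of those two line rewrites using regexp; reading and writing the real file, and the conmon_cgroup/default_sysctls handling, are deliberately left out:

    package main

    import (
        "fmt"
        "regexp"
    )

    // applyCrioOverrides rewrites the pause_image and cgroup_manager lines the
    // same way the sed commands above do, but on a string for illustration.
    func applyCrioOverrides(conf string) string {
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.7"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        return conf
    }

    func main() {
        in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
        fmt.Print(applyCrioOverrides(in))
    }
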
	I0617 11:43:31.642887  152958 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 11:43:31.651850  152958 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 11:43:31.651895  152958 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 11:43:31.664704  152958 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 11:43:31.673579  152958 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:43:31.783508  152958 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0617 11:43:31.914700  152958 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 11:43:31.914768  152958 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 11:43:31.919545  152958 start.go:562] Will wait 60s for crictl version
	I0617 11:43:31.919590  152958 ssh_runner.go:195] Run: which crictl
	I0617 11:43:31.923229  152958 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 11:43:31.963439  152958 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 11:43:31.963529  152958 ssh_runner.go:195] Run: crio --version
	I0617 11:43:31.993481  152958 ssh_runner.go:195] Run: crio --version
	I0617 11:43:32.024544  152958 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0617 11:43:32.025835  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetIP
	I0617 11:43:32.028722  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:32.029098  152958 main.go:141] libmachine: (test-preload-392702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:ab:36", ip: ""} in network mk-test-preload-392702: {Iface:virbr1 ExpiryTime:2024-06-17 12:41:36 +0000 UTC Type:0 Mac:52:54:00:ba:ab:36 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-392702 Clientid:01:52:54:00:ba:ab:36}
	I0617 11:43:32.029124  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined IP address 192.168.39.217 and MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:32.029280  152958 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0617 11:43:32.033547  152958 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
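
The /etc/hosts rewrite above is an idempotent "drop any old line for this hostname, append a fresh ip<TAB>hostname mapping" step. A sketch of the same logic on the file contents as a string; writing the result back via sudo cp is omitted, and ensureHostsEntry is an illustrative helper, not minikube's:

    package main

    import (
        "fmt"
        "strings"
    )

    // ensureHostsEntry removes any existing line ending in "\thostname" and
    // appends a fresh "ip\thostname" mapping, matching the grep -v / echo pipeline.
    func ensureHostsEntry(hosts, ip, hostname string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+hostname) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+hostname)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        hosts := "127.0.0.1\tlocalhost\n"
        fmt.Print(ensureHostsEntry(hosts, "192.168.39.1", "host.minikube.internal"))
    }
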
	I0617 11:43:32.046958  152958 kubeadm.go:877] updating cluster {Name:test-preload-392702 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-392702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 11:43:32.047089  152958 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0617 11:43:32.047131  152958 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 11:43:32.087741  152958 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0617 11:43:32.087809  152958 ssh_runner.go:195] Run: which lz4
	I0617 11:43:32.091921  152958 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0617 11:43:32.096248  152958 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0617 11:43:32.096281  152958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0617 11:43:33.640679  152958 crio.go:462] duration metric: took 1.548784741s to copy over tarball
	I0617 11:43:33.640773  152958 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0617 11:43:35.960918  152958 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.320104118s)
	I0617 11:43:35.960955  152958 crio.go:469] duration metric: took 2.320233339s to extract the tarball
	I0617 11:43:35.960966  152958 ssh_runner.go:146] rm: /preloaded.tar.lz4
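
The preload flow above is: check whether /preloaded.tar.lz4 already exists on the guest, copy the ~460MB tarball over if not, unpack it into /var with lz4-aware tar, then delete it. A local-filesystem sketch of that flow; the plain os/exec calls and paths stand in for minikube's ssh_runner, so treat them as illustrative assumptions:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const tarball = "/preloaded.tar.lz4"

        // Existence check: only copy the tarball when it is not already present.
        if _, err := os.Stat(tarball); err != nil {
            fmt.Println("tarball missing, would copy it from the local cache:", err)
        }

        // Extract with xattrs preserved, using lz4 as the decompressor.
        cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("extract failed: %v\n%s", err, out)
            return
        }

        // Free the space once the images are unpacked (requires privileges in practice).
        _ = os.Remove(tarball)
    }
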
	I0617 11:43:36.002245  152958 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 11:43:36.048326  152958 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0617 11:43:36.048366  152958 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0617 11:43:36.048453  152958 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 11:43:36.048477  152958 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0617 11:43:36.048507  152958 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0617 11:43:36.048521  152958 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0617 11:43:36.048542  152958 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0617 11:43:36.048577  152958 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0617 11:43:36.048486  152958 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0617 11:43:36.048610  152958 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0617 11:43:36.050100  152958 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0617 11:43:36.050113  152958 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0617 11:43:36.050102  152958 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0617 11:43:36.050194  152958 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0617 11:43:36.050206  152958 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0617 11:43:36.050214  152958 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0617 11:43:36.050100  152958 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0617 11:43:36.050203  152958 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 11:43:36.201645  152958 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0617 11:43:36.207120  152958 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0617 11:43:36.212938  152958 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0617 11:43:36.229024  152958 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0617 11:43:36.235462  152958 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0617 11:43:36.253857  152958 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0617 11:43:36.265392  152958 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0617 11:43:36.272017  152958 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0617 11:43:36.272061  152958 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0617 11:43:36.272103  152958 ssh_runner.go:195] Run: which crictl
	I0617 11:43:36.297700  152958 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 11:43:36.312942  152958 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0617 11:43:36.312981  152958 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0617 11:43:36.313015  152958 ssh_runner.go:195] Run: which crictl
	I0617 11:43:36.375678  152958 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0617 11:43:36.375722  152958 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0617 11:43:36.375769  152958 ssh_runner.go:195] Run: which crictl
	I0617 11:43:36.375784  152958 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0617 11:43:36.375822  152958 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0617 11:43:36.375866  152958 ssh_runner.go:195] Run: which crictl
	I0617 11:43:36.403397  152958 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0617 11:43:36.403436  152958 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0617 11:43:36.403441  152958 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0617 11:43:36.403453  152958 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0617 11:43:36.403497  152958 ssh_runner.go:195] Run: which crictl
	I0617 11:43:36.403497  152958 ssh_runner.go:195] Run: which crictl
	I0617 11:43:36.425971  152958 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0617 11:43:36.426021  152958 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0617 11:43:36.426050  152958 ssh_runner.go:195] Run: which crictl
	I0617 11:43:36.426046  152958 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0617 11:43:36.437360  152958 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0617 11:43:36.437404  152958 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0617 11:43:36.437441  152958 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0617 11:43:36.437495  152958 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0617 11:43:36.437552  152958 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0617 11:43:36.526058  152958 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0617 11:43:36.526155  152958 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0617 11:43:36.526157  152958 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0617 11:43:36.583310  152958 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0617 11:43:36.583442  152958 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0617 11:43:36.591942  152958 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0617 11:43:36.592010  152958 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0617 11:43:36.592032  152958 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0617 11:43:36.592073  152958 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0617 11:43:36.592116  152958 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0617 11:43:36.592148  152958 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0617 11:43:36.592155  152958 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0617 11:43:36.592206  152958 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0617 11:43:36.592211  152958 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0617 11:43:36.592250  152958 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0617 11:43:36.592261  152958 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0617 11:43:36.592263  152958 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0617 11:43:36.592295  152958 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0617 11:43:36.594938  152958 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0617 11:43:36.606072  152958 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0617 11:43:36.606137  152958 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0617 11:43:36.606190  152958 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0617 11:43:36.606322  152958 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0617 11:43:39.263739  152958 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.7: (2.671411699s)
	I0617 11:43:39.263780  152958 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0617 11:43:39.263810  152958 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0617 11:43:39.263883  152958 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0617 11:43:39.263810  152958 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4: (2.671530408s)
	I0617 11:43:39.263937  152958 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0617 11:43:40.107341  152958 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0617 11:43:40.107400  152958 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0617 11:43:40.107501  152958 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0617 11:43:40.448701  152958 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0617 11:43:40.448751  152958 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0617 11:43:40.448820  152958 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0617 11:43:42.700875  152958 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.252025565s)
	I0617 11:43:42.700915  152958 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0617 11:43:42.700951  152958 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0617 11:43:42.701040  152958 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0617 11:43:43.451613  152958 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0617 11:43:43.451665  152958 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0617 11:43:43.451728  152958 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0617 11:43:44.206725  152958 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0617 11:43:44.206779  152958 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0617 11:43:44.206832  152958 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0617 11:43:44.651359  152958 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0617 11:43:44.651416  152958 cache_images.go:123] Successfully loaded all cached images
	I0617 11:43:44.651423  152958 cache_images.go:92] duration metric: took 8.603041964s to LoadCachedImages
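
The cached-image phase above follows one pattern per image: skip the copy if the tarball already sits under /var/lib/minikube/images/, then load it into the CRI-O image store with sudo podman load -i. A compressed sketch of that loop; the image list is taken from the log, while the local os/exec runner is an assumption standing in for ssh_runner:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
    )

    func main() {
        images := []string{
            "pause_3.7", "kube-proxy_v1.24.4", "coredns_v1.8.6", "etcd_3.5.3-0",
            "kube-apiserver_v1.24.4", "kube-controller-manager_v1.24.4", "kube-scheduler_v1.24.4",
        }
        for _, img := range images {
            dst := filepath.Join("/var/lib/minikube/images", img)
            if _, err := os.Stat(dst); err == nil {
                fmt.Println("copy: skipping", dst, "(exists)")
            } else {
                fmt.Println("would copy", img, "from the local cache to", dst)
            }
            // Load the tarball into CRI-O's storage through podman.
            if out, err := exec.Command("sudo", "podman", "load", "-i", dst).CombinedOutput(); err != nil {
                fmt.Printf("load %s failed: %v\n%s", img, err, out)
                continue
            }
            fmt.Println("transferred and loaded", img)
        }
    }
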
	I0617 11:43:44.651437  152958 kubeadm.go:928] updating node { 192.168.39.217 8443 v1.24.4 crio true true} ...
	I0617 11:43:44.651624  152958 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-392702 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-392702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 11:43:44.651694  152958 ssh_runner.go:195] Run: crio config
	I0617 11:43:44.697035  152958 cni.go:84] Creating CNI manager for ""
	I0617 11:43:44.697061  152958 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 11:43:44.697086  152958 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 11:43:44.697114  152958 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.217 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-392702 NodeName:test-preload-392702 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0617 11:43:44.697282  152958 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-392702"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.217
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0617 11:43:44.697358  152958 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0617 11:43:44.707641  152958 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 11:43:44.707712  152958 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 11:43:44.717657  152958 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0617 11:43:44.734413  152958 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 11:43:44.751475  152958 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0617 11:43:44.768771  152958 ssh_runner.go:195] Run: grep 192.168.39.217	control-plane.minikube.internal$ /etc/hosts
	I0617 11:43:44.772633  152958 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.217	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 11:43:44.785693  152958 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:43:44.913587  152958 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 11:43:44.930340  152958 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/test-preload-392702 for IP: 192.168.39.217
	I0617 11:43:44.930360  152958 certs.go:194] generating shared ca certs ...
	I0617 11:43:44.930376  152958 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:43:44.930520  152958 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 11:43:44.930556  152958 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 11:43:44.930565  152958 certs.go:256] generating profile certs ...
	I0617 11:43:44.930664  152958 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/test-preload-392702/client.key
	I0617 11:43:44.930741  152958 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/test-preload-392702/apiserver.key.28ca495c
	I0617 11:43:44.930811  152958 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/test-preload-392702/proxy-client.key
	I0617 11:43:44.930945  152958 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 11:43:44.930978  152958 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 11:43:44.930989  152958 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 11:43:44.931012  152958 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 11:43:44.931032  152958 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 11:43:44.931058  152958 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 11:43:44.931094  152958 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 11:43:44.931867  152958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 11:43:44.964891  152958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 11:43:45.015734  152958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 11:43:45.066878  152958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 11:43:45.110365  152958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/test-preload-392702/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0617 11:43:45.147865  152958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/test-preload-392702/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0617 11:43:45.172237  152958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/test-preload-392702/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 11:43:45.195841  152958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/test-preload-392702/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0617 11:43:45.219720  152958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 11:43:45.242800  152958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 11:43:45.266096  152958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 11:43:45.289010  152958 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 11:43:45.305528  152958 ssh_runner.go:195] Run: openssl version
	I0617 11:43:45.311169  152958 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 11:43:45.321766  152958 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 11:43:45.326092  152958 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 11:43:45.326141  152958 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 11:43:45.332031  152958 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
	I0617 11:43:45.342536  152958 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 11:43:45.353284  152958 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 11:43:45.357987  152958 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 11:43:45.358042  152958 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 11:43:45.363605  152958 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 11:43:45.374258  152958 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 11:43:45.384734  152958 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:43:45.389153  152958 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:43:45.389203  152958 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:43:45.394751  152958 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
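
Each certificate installed under /usr/share/ca-certificates gets a companion symlink /etc/ssl/certs/<subject-hash>.0 so TLS clients can find it by hash lookup, which is what the openssl x509 -hash / ln -fs pairs above do. A small sketch of that step; linkByHash and the example path are illustrative, and root privileges are assumed for the symlink:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkByHash computes the OpenSSL subject hash of a CA certificate and
    // symlinks the certificate as /etc/ssl/certs/<hash>.0 if no link exists yet.
    func linkByHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        if _, err := os.Lstat(link); err == nil {
            return nil // already linked
        }
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println("link failed:", err)
        }
    }
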
	I0617 11:43:45.405349  152958 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 11:43:45.409965  152958 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 11:43:45.415717  152958 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 11:43:45.421535  152958 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 11:43:45.427341  152958 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 11:43:45.433085  152958 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 11:43:45.438792  152958 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
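
Each openssl x509 -checkend 86400 call above asks whether the certificate will still be valid 24 hours from now. The same check can be expressed directly against the certificate's NotAfter field; a small sketch, where expiresWithin and the example path are illustrative only:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Println("check failed:", err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }
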
	I0617 11:43:45.444380  152958 kubeadm.go:391] StartCluster: {Name:test-preload-392702 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-392702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:43:45.444453  152958 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 11:43:45.444494  152958 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 11:43:45.483293  152958 cri.go:89] found id: ""
	I0617 11:43:45.483357  152958 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0617 11:43:45.493657  152958 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0617 11:43:45.493682  152958 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0617 11:43:45.493688  152958 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0617 11:43:45.493738  152958 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0617 11:43:45.503478  152958 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0617 11:43:45.504008  152958 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-392702" does not appear in /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 11:43:45.504160  152958 kubeconfig.go:62] /home/jenkins/minikube-integration/19084-112967/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-392702" cluster setting kubeconfig missing "test-preload-392702" context setting]
	I0617 11:43:45.504599  152958 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/kubeconfig: {Name:mkf81bd1831c0194f784e5c176b265c5061bea5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:43:45.505353  152958 kapi.go:59] client config for test-preload-392702: &rest.Config{Host:"https://192.168.39.217:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19084-112967/.minikube/profiles/test-preload-392702/client.crt", KeyFile:"/home/jenkins/minikube-integration/19084-112967/.minikube/profiles/test-preload-392702/client.key", CAFile:"/home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfaf80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0617 11:43:45.506161  152958 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0617 11:43:45.516712  152958 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.217
	I0617 11:43:45.516746  152958 kubeadm.go:1154] stopping kube-system containers ...
	I0617 11:43:45.516760  152958 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0617 11:43:45.516805  152958 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 11:43:45.555609  152958 cri.go:89] found id: ""
	I0617 11:43:45.555699  152958 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0617 11:43:45.572303  152958 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 11:43:45.582242  152958 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 11:43:45.582258  152958 kubeadm.go:156] found existing configuration files:
	
	I0617 11:43:45.582304  152958 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 11:43:45.591518  152958 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 11:43:45.591572  152958 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 11:43:45.601017  152958 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 11:43:45.610279  152958 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 11:43:45.610334  152958 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 11:43:45.619804  152958 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 11:43:45.628599  152958 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 11:43:45.628650  152958 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 11:43:45.638060  152958 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 11:43:45.647076  152958 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 11:43:45.647135  152958 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
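
The four grep/rm pairs above implement one rule: if a kubeconfig under /etc/kubernetes does not point at https://control-plane.minikube.internal:8443 (or does not exist), remove it so the kubeadm phases below regenerate it. A sketch of that rule as a loop; direct local file access stands in for the ssh_runner calls:

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
        for _, f := range files {
            path := "/etc/kubernetes/" + f
            data, err := os.ReadFile(path)
            if err != nil || !bytes.Contains(data, []byte(endpoint)) {
                // Missing file or wrong endpoint: drop it and let kubeadm rewrite it.
                fmt.Println("removing stale config:", path)
                _ = os.Remove(path)
                continue
            }
            fmt.Println("keeping", path)
        }
    }
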
	I0617 11:43:45.656210  152958 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 11:43:45.665510  152958 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 11:43:45.759701  152958 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 11:43:46.276723  152958 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0617 11:43:46.535245  152958 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 11:43:46.606355  152958 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
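
The restart avoids a full kubeadm init by running the individual phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd), all against the same /var/tmp/minikube/kubeadm.yaml. A minimal sketch of that sequence; the PATH handling and the ssh transport from the log are simplified away:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"kubeadm", "init", "phase"}, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                fmt.Printf("phase %v failed: %v\n%s", p, err, out)
                return
            }
        }
        fmt.Println("control plane components regenerated")
    }
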
	I0617 11:43:46.716250  152958 api_server.go:52] waiting for apiserver process to appear ...
	I0617 11:43:46.716326  152958 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 11:43:47.217277  152958 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 11:43:47.717113  152958 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 11:43:47.746024  152958 api_server.go:72] duration metric: took 1.029777174s to wait for apiserver process to appear ...
	I0617 11:43:47.746055  152958 api_server.go:88] waiting for apiserver healthz status ...
	I0617 11:43:47.746073  152958 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0617 11:43:47.746524  152958 api_server.go:269] stopped: https://192.168.39.217:8443/healthz: Get "https://192.168.39.217:8443/healthz": dial tcp 192.168.39.217:8443: connect: connection refused
	I0617 11:43:48.247209  152958 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0617 11:43:52.219843  152958 api_server.go:279] https://192.168.39.217:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 11:43:52.219871  152958 api_server.go:103] status: https://192.168.39.217:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 11:43:52.219884  152958 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0617 11:43:52.238877  152958 api_server.go:279] https://192.168.39.217:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 11:43:52.238904  152958 api_server.go:103] status: https://192.168.39.217:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 11:43:52.247061  152958 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0617 11:43:52.255339  152958 api_server.go:279] https://192.168.39.217:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0617 11:43:52.255365  152958 api_server.go:103] status: https://192.168.39.217:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0617 11:43:52.746206  152958 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0617 11:43:52.753932  152958 api_server.go:279] https://192.168.39.217:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 11:43:52.753963  152958 api_server.go:103] status: https://192.168.39.217:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 11:43:53.246517  152958 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0617 11:43:53.252746  152958 api_server.go:279] https://192.168.39.217:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 11:43:53.252780  152958 api_server.go:103] status: https://192.168.39.217:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 11:43:53.746376  152958 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0617 11:43:53.751717  152958 api_server.go:279] https://192.168.39.217:8443/healthz returned 200:
	ok
	I0617 11:43:53.760403  152958 api_server.go:141] control plane version: v1.24.4
	I0617 11:43:53.760438  152958 api_server.go:131] duration metric: took 6.014375788s to wait for apiserver health ...
	I0617 11:43:53.760450  152958 cni.go:84] Creating CNI manager for ""
	I0617 11:43:53.760459  152958 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 11:43:53.762113  152958 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0617 11:43:53.763341  152958 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0617 11:43:53.777345  152958 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0617 11:43:53.824394  152958 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 11:43:53.833636  152958 system_pods.go:59] 8 kube-system pods found
	I0617 11:43:53.833662  152958 system_pods.go:61] "coredns-6d4b75cb6d-5r8lz" [5abe4afb-bdb7-46ba-bf06-44bb41590c98] Running
	I0617 11:43:53.833666  152958 system_pods.go:61] "coredns-6d4b75cb6d-lsnwr" [8f33b34a-5d01-427b-bafb-186fd7b858df] Running
	I0617 11:43:53.833675  152958 system_pods.go:61] "etcd-test-preload-392702" [885c35fd-d93e-4399-b479-8fb09f6871ee] Running
	I0617 11:43:53.833681  152958 system_pods.go:61] "kube-apiserver-test-preload-392702" [12f2ab42-8a27-46eb-a6a1-465d4f6b46a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0617 11:43:53.833689  152958 system_pods.go:61] "kube-controller-manager-test-preload-392702" [3ac6949f-c518-4778-b92a-ab0dea4d33c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0617 11:43:53.833697  152958 system_pods.go:61] "kube-proxy-gtw27" [a5335459-09fc-4ca8-baf5-1176a373b395] Running
	I0617 11:43:53.833702  152958 system_pods.go:61] "kube-scheduler-test-preload-392702" [8e423d28-6bda-40df-9c09-c7d15b1b7218] Running
	I0617 11:43:53.833720  152958 system_pods.go:61] "storage-provisioner" [6d9dbe95-91d8-44c6-9616-10f9b4b3e9c0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0617 11:43:53.833729  152958 system_pods.go:74] duration metric: took 9.312617ms to wait for pod list to return data ...
	I0617 11:43:53.833739  152958 node_conditions.go:102] verifying NodePressure condition ...
	I0617 11:43:53.837095  152958 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 11:43:53.837119  152958 node_conditions.go:123] node cpu capacity is 2
	I0617 11:43:53.837130  152958 node_conditions.go:105] duration metric: took 3.386359ms to run NodePressure ...
	I0617 11:43:53.837150  152958 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 11:43:54.071098  152958 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0617 11:43:54.086434  152958 kubeadm.go:733] kubelet initialised
	I0617 11:43:54.086457  152958 kubeadm.go:734] duration metric: took 15.332034ms waiting for restarted kubelet to initialise ...
	I0617 11:43:54.086465  152958 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 11:43:54.099287  152958 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-5r8lz" in "kube-system" namespace to be "Ready" ...
	I0617 11:43:54.109977  152958 pod_ready.go:97] node "test-preload-392702" hosting pod "coredns-6d4b75cb6d-5r8lz" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-392702" has status "Ready":"False"
	I0617 11:43:54.110002  152958 pod_ready.go:81] duration metric: took 10.687399ms for pod "coredns-6d4b75cb6d-5r8lz" in "kube-system" namespace to be "Ready" ...
	E0617 11:43:54.110011  152958 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-392702" hosting pod "coredns-6d4b75cb6d-5r8lz" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-392702" has status "Ready":"False"
	I0617 11:43:54.110017  152958 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-lsnwr" in "kube-system" namespace to be "Ready" ...
	I0617 11:43:54.117308  152958 pod_ready.go:97] node "test-preload-392702" hosting pod "coredns-6d4b75cb6d-lsnwr" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-392702" has status "Ready":"False"
	I0617 11:43:54.117328  152958 pod_ready.go:81] duration metric: took 7.303238ms for pod "coredns-6d4b75cb6d-lsnwr" in "kube-system" namespace to be "Ready" ...
	E0617 11:43:54.117337  152958 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-392702" hosting pod "coredns-6d4b75cb6d-lsnwr" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-392702" has status "Ready":"False"
	I0617 11:43:54.117347  152958 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-392702" in "kube-system" namespace to be "Ready" ...
	I0617 11:43:54.135038  152958 pod_ready.go:97] node "test-preload-392702" hosting pod "etcd-test-preload-392702" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-392702" has status "Ready":"False"
	I0617 11:43:54.135061  152958 pod_ready.go:81] duration metric: took 17.703908ms for pod "etcd-test-preload-392702" in "kube-system" namespace to be "Ready" ...
	E0617 11:43:54.135070  152958 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-392702" hosting pod "etcd-test-preload-392702" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-392702" has status "Ready":"False"
	I0617 11:43:54.135076  152958 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-392702" in "kube-system" namespace to be "Ready" ...
	I0617 11:43:54.230973  152958 pod_ready.go:97] node "test-preload-392702" hosting pod "kube-apiserver-test-preload-392702" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-392702" has status "Ready":"False"
	I0617 11:43:54.231001  152958 pod_ready.go:81] duration metric: took 95.916284ms for pod "kube-apiserver-test-preload-392702" in "kube-system" namespace to be "Ready" ...
	E0617 11:43:54.231011  152958 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-392702" hosting pod "kube-apiserver-test-preload-392702" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-392702" has status "Ready":"False"
	I0617 11:43:54.231017  152958 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-392702" in "kube-system" namespace to be "Ready" ...
	I0617 11:43:54.627374  152958 pod_ready.go:97] node "test-preload-392702" hosting pod "kube-controller-manager-test-preload-392702" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-392702" has status "Ready":"False"
	I0617 11:43:54.627403  152958 pod_ready.go:81] duration metric: took 396.376193ms for pod "kube-controller-manager-test-preload-392702" in "kube-system" namespace to be "Ready" ...
	E0617 11:43:54.627413  152958 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-392702" hosting pod "kube-controller-manager-test-preload-392702" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-392702" has status "Ready":"False"
	I0617 11:43:54.627419  152958 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gtw27" in "kube-system" namespace to be "Ready" ...
	I0617 11:43:55.028875  152958 pod_ready.go:97] node "test-preload-392702" hosting pod "kube-proxy-gtw27" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-392702" has status "Ready":"False"
	I0617 11:43:55.028904  152958 pod_ready.go:81] duration metric: took 401.476733ms for pod "kube-proxy-gtw27" in "kube-system" namespace to be "Ready" ...
	E0617 11:43:55.028913  152958 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-392702" hosting pod "kube-proxy-gtw27" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-392702" has status "Ready":"False"
	I0617 11:43:55.028919  152958 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-392702" in "kube-system" namespace to be "Ready" ...
	I0617 11:43:55.428962  152958 pod_ready.go:97] node "test-preload-392702" hosting pod "kube-scheduler-test-preload-392702" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-392702" has status "Ready":"False"
	I0617 11:43:55.428989  152958 pod_ready.go:81] duration metric: took 400.064521ms for pod "kube-scheduler-test-preload-392702" in "kube-system" namespace to be "Ready" ...
	E0617 11:43:55.428998  152958 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-392702" hosting pod "kube-scheduler-test-preload-392702" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-392702" has status "Ready":"False"
	I0617 11:43:55.429005  152958 pod_ready.go:38] duration metric: took 1.34252486s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 11:43:55.429023  152958 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0617 11:43:55.449806  152958 ops.go:34] apiserver oom_adj: -16
	I0617 11:43:55.449828  152958 kubeadm.go:591] duration metric: took 9.956133504s to restartPrimaryControlPlane
	I0617 11:43:55.449847  152958 kubeadm.go:393] duration metric: took 10.005466918s to StartCluster
	I0617 11:43:55.449871  152958 settings.go:142] acquiring lock: {Name:mkf6da6d5dcdf32cef469c2b75da17d11fa1e39e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:43:55.449955  152958 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 11:43:55.450599  152958 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/kubeconfig: {Name:mkf81bd1831c0194f784e5c176b265c5061bea5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:43:55.450853  152958 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 11:43:55.452494  152958 out.go:177] * Verifying Kubernetes components...
	I0617 11:43:55.450948  152958 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0617 11:43:55.451087  152958 config.go:182] Loaded profile config "test-preload-392702": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0617 11:43:55.453796  152958 addons.go:69] Setting storage-provisioner=true in profile "test-preload-392702"
	I0617 11:43:55.453809  152958 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:43:55.453827  152958 addons.go:69] Setting default-storageclass=true in profile "test-preload-392702"
	I0617 11:43:55.453838  152958 addons.go:234] Setting addon storage-provisioner=true in "test-preload-392702"
	W0617 11:43:55.453852  152958 addons.go:243] addon storage-provisioner should already be in state true
	I0617 11:43:55.453857  152958 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-392702"
	I0617 11:43:55.453882  152958 host.go:66] Checking if "test-preload-392702" exists ...
	I0617 11:43:55.454206  152958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:43:55.454256  152958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:43:55.454274  152958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:43:55.454314  152958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:43:55.469572  152958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42473
	I0617 11:43:55.469599  152958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33499
	I0617 11:43:55.470007  152958 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:43:55.470025  152958 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:43:55.470467  152958 main.go:141] libmachine: Using API Version  1
	I0617 11:43:55.470485  152958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:43:55.470597  152958 main.go:141] libmachine: Using API Version  1
	I0617 11:43:55.470625  152958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:43:55.470809  152958 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:43:55.470954  152958 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:43:55.471000  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetState
	I0617 11:43:55.471548  152958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:43:55.471597  152958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:43:55.473375  152958 kapi.go:59] client config for test-preload-392702: &rest.Config{Host:"https://192.168.39.217:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19084-112967/.minikube/profiles/test-preload-392702/client.crt", KeyFile:"/home/jenkins/minikube-integration/19084-112967/.minikube/profiles/test-preload-392702/client.key", CAFile:"/home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfaf80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0617 11:43:55.473651  152958 addons.go:234] Setting addon default-storageclass=true in "test-preload-392702"
	W0617 11:43:55.473667  152958 addons.go:243] addon default-storageclass should already be in state true
	I0617 11:43:55.473697  152958 host.go:66] Checking if "test-preload-392702" exists ...
	I0617 11:43:55.474059  152958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:43:55.474098  152958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:43:55.486052  152958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40311
	I0617 11:43:55.486449  152958 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:43:55.486929  152958 main.go:141] libmachine: Using API Version  1
	I0617 11:43:55.486952  152958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:43:55.487299  152958 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:43:55.487540  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetState
	I0617 11:43:55.488421  152958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43809
	I0617 11:43:55.488823  152958 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:43:55.489289  152958 main.go:141] libmachine: Using API Version  1
	I0617 11:43:55.489307  152958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:43:55.489341  152958 main.go:141] libmachine: (test-preload-392702) Calling .DriverName
	I0617 11:43:55.491519  152958 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 11:43:55.489620  152958 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:43:55.492975  152958 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 11:43:55.492993  152958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0617 11:43:55.493019  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHHostname
	I0617 11:43:55.493380  152958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:43:55.493423  152958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:43:55.495702  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:55.496160  152958 main.go:141] libmachine: (test-preload-392702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:ab:36", ip: ""} in network mk-test-preload-392702: {Iface:virbr1 ExpiryTime:2024-06-17 12:41:36 +0000 UTC Type:0 Mac:52:54:00:ba:ab:36 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-392702 Clientid:01:52:54:00:ba:ab:36}
	I0617 11:43:55.496184  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined IP address 192.168.39.217 and MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:55.496331  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHPort
	I0617 11:43:55.496507  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHKeyPath
	I0617 11:43:55.496655  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHUsername
	I0617 11:43:55.496800  152958 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/test-preload-392702/id_rsa Username:docker}
	I0617 11:43:55.509047  152958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44817
	I0617 11:43:55.509393  152958 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:43:55.509851  152958 main.go:141] libmachine: Using API Version  1
	I0617 11:43:55.509873  152958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:43:55.510164  152958 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:43:55.510322  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetState
	I0617 11:43:55.511555  152958 main.go:141] libmachine: (test-preload-392702) Calling .DriverName
	I0617 11:43:55.511784  152958 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0617 11:43:55.511802  152958 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0617 11:43:55.511818  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHHostname
	I0617 11:43:55.514128  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:55.514498  152958 main.go:141] libmachine: (test-preload-392702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:ab:36", ip: ""} in network mk-test-preload-392702: {Iface:virbr1 ExpiryTime:2024-06-17 12:41:36 +0000 UTC Type:0 Mac:52:54:00:ba:ab:36 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-392702 Clientid:01:52:54:00:ba:ab:36}
	I0617 11:43:55.514526  152958 main.go:141] libmachine: (test-preload-392702) DBG | domain test-preload-392702 has defined IP address 192.168.39.217 and MAC address 52:54:00:ba:ab:36 in network mk-test-preload-392702
	I0617 11:43:55.514720  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHPort
	I0617 11:43:55.514917  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHKeyPath
	I0617 11:43:55.515075  152958 main.go:141] libmachine: (test-preload-392702) Calling .GetSSHUsername
	I0617 11:43:55.515219  152958 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/test-preload-392702/id_rsa Username:docker}
	I0617 11:43:55.646352  152958 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 11:43:55.666464  152958 node_ready.go:35] waiting up to 6m0s for node "test-preload-392702" to be "Ready" ...
	I0617 11:43:55.730277  152958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0617 11:43:55.732181  152958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 11:43:56.592680  152958 main.go:141] libmachine: Making call to close driver server
	I0617 11:43:56.592711  152958 main.go:141] libmachine: (test-preload-392702) Calling .Close
	I0617 11:43:56.593060  152958 main.go:141] libmachine: Successfully made call to close driver server
	I0617 11:43:56.593096  152958 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 11:43:56.593111  152958 main.go:141] libmachine: Making call to close driver server
	I0617 11:43:56.593122  152958 main.go:141] libmachine: (test-preload-392702) Calling .Close
	I0617 11:43:56.593413  152958 main.go:141] libmachine: Successfully made call to close driver server
	I0617 11:43:56.593434  152958 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 11:43:56.593439  152958 main.go:141] libmachine: (test-preload-392702) DBG | Closing plugin on server side
	I0617 11:43:56.604301  152958 main.go:141] libmachine: Making call to close driver server
	I0617 11:43:56.604323  152958 main.go:141] libmachine: (test-preload-392702) Calling .Close
	I0617 11:43:56.604583  152958 main.go:141] libmachine: Successfully made call to close driver server
	I0617 11:43:56.604597  152958 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 11:43:56.604597  152958 main.go:141] libmachine: (test-preload-392702) DBG | Closing plugin on server side
	I0617 11:43:56.638262  152958 main.go:141] libmachine: Making call to close driver server
	I0617 11:43:56.638283  152958 main.go:141] libmachine: (test-preload-392702) Calling .Close
	I0617 11:43:56.638585  152958 main.go:141] libmachine: Successfully made call to close driver server
	I0617 11:43:56.638626  152958 main.go:141] libmachine: (test-preload-392702) DBG | Closing plugin on server side
	I0617 11:43:56.638648  152958 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 11:43:56.638660  152958 main.go:141] libmachine: Making call to close driver server
	I0617 11:43:56.638666  152958 main.go:141] libmachine: (test-preload-392702) Calling .Close
	I0617 11:43:56.638904  152958 main.go:141] libmachine: Successfully made call to close driver server
	I0617 11:43:56.638918  152958 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 11:43:56.638944  152958 main.go:141] libmachine: (test-preload-392702) DBG | Closing plugin on server side
	I0617 11:43:56.641511  152958 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0617 11:43:56.642633  152958 addons.go:510] duration metric: took 1.191700446s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0617 11:43:57.673720  152958 node_ready.go:53] node "test-preload-392702" has status "Ready":"False"
	I0617 11:44:00.171527  152958 node_ready.go:53] node "test-preload-392702" has status "Ready":"False"
	I0617 11:44:02.670477  152958 node_ready.go:49] node "test-preload-392702" has status "Ready":"True"
	I0617 11:44:02.670504  152958 node_ready.go:38] duration metric: took 7.004006549s for node "test-preload-392702" to be "Ready" ...
	I0617 11:44:02.670518  152958 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 11:44:02.677318  152958 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-lsnwr" in "kube-system" namespace to be "Ready" ...
	I0617 11:44:02.683042  152958 pod_ready.go:92] pod "coredns-6d4b75cb6d-lsnwr" in "kube-system" namespace has status "Ready":"True"
	I0617 11:44:02.683065  152958 pod_ready.go:81] duration metric: took 5.718885ms for pod "coredns-6d4b75cb6d-lsnwr" in "kube-system" namespace to be "Ready" ...
	I0617 11:44:02.683077  152958 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-392702" in "kube-system" namespace to be "Ready" ...
	I0617 11:44:02.688402  152958 pod_ready.go:92] pod "etcd-test-preload-392702" in "kube-system" namespace has status "Ready":"True"
	I0617 11:44:02.688424  152958 pod_ready.go:81] duration metric: took 5.338936ms for pod "etcd-test-preload-392702" in "kube-system" namespace to be "Ready" ...
	I0617 11:44:02.688434  152958 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-392702" in "kube-system" namespace to be "Ready" ...
	I0617 11:44:02.692634  152958 pod_ready.go:92] pod "kube-apiserver-test-preload-392702" in "kube-system" namespace has status "Ready":"True"
	I0617 11:44:02.692655  152958 pod_ready.go:81] duration metric: took 4.213514ms for pod "kube-apiserver-test-preload-392702" in "kube-system" namespace to be "Ready" ...
	I0617 11:44:02.692666  152958 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-392702" in "kube-system" namespace to be "Ready" ...
	I0617 11:44:02.697003  152958 pod_ready.go:92] pod "kube-controller-manager-test-preload-392702" in "kube-system" namespace has status "Ready":"True"
	I0617 11:44:02.697022  152958 pod_ready.go:81] duration metric: took 4.349378ms for pod "kube-controller-manager-test-preload-392702" in "kube-system" namespace to be "Ready" ...
	I0617 11:44:02.697030  152958 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gtw27" in "kube-system" namespace to be "Ready" ...
	I0617 11:44:03.071124  152958 pod_ready.go:92] pod "kube-proxy-gtw27" in "kube-system" namespace has status "Ready":"True"
	I0617 11:44:03.071148  152958 pod_ready.go:81] duration metric: took 374.111333ms for pod "kube-proxy-gtw27" in "kube-system" namespace to be "Ready" ...
	I0617 11:44:03.071157  152958 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-392702" in "kube-system" namespace to be "Ready" ...
	I0617 11:44:05.076917  152958 pod_ready.go:102] pod "kube-scheduler-test-preload-392702" in "kube-system" namespace has status "Ready":"False"
	I0617 11:44:06.577974  152958 pod_ready.go:92] pod "kube-scheduler-test-preload-392702" in "kube-system" namespace has status "Ready":"True"
	I0617 11:44:06.578001  152958 pod_ready.go:81] duration metric: took 3.506836439s for pod "kube-scheduler-test-preload-392702" in "kube-system" namespace to be "Ready" ...
	I0617 11:44:06.578015  152958 pod_ready.go:38] duration metric: took 3.907485997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 11:44:06.578033  152958 api_server.go:52] waiting for apiserver process to appear ...
	I0617 11:44:06.578092  152958 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 11:44:06.594358  152958 api_server.go:72] duration metric: took 11.143468289s to wait for apiserver process to appear ...
	I0617 11:44:06.594388  152958 api_server.go:88] waiting for apiserver healthz status ...
	I0617 11:44:06.594413  152958 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0617 11:44:06.601040  152958 api_server.go:279] https://192.168.39.217:8443/healthz returned 200:
	ok
	I0617 11:44:06.601937  152958 api_server.go:141] control plane version: v1.24.4
	I0617 11:44:06.601962  152958 api_server.go:131] duration metric: took 7.567027ms to wait for apiserver health ...
	I0617 11:44:06.601970  152958 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 11:44:06.607397  152958 system_pods.go:59] 7 kube-system pods found
	I0617 11:44:06.607427  152958 system_pods.go:61] "coredns-6d4b75cb6d-lsnwr" [8f33b34a-5d01-427b-bafb-186fd7b858df] Running
	I0617 11:44:06.607433  152958 system_pods.go:61] "etcd-test-preload-392702" [885c35fd-d93e-4399-b479-8fb09f6871ee] Running
	I0617 11:44:06.607437  152958 system_pods.go:61] "kube-apiserver-test-preload-392702" [12f2ab42-8a27-46eb-a6a1-465d4f6b46a5] Running
	I0617 11:44:06.607440  152958 system_pods.go:61] "kube-controller-manager-test-preload-392702" [3ac6949f-c518-4778-b92a-ab0dea4d33c7] Running
	I0617 11:44:06.607444  152958 system_pods.go:61] "kube-proxy-gtw27" [a5335459-09fc-4ca8-baf5-1176a373b395] Running
	I0617 11:44:06.607481  152958 system_pods.go:61] "kube-scheduler-test-preload-392702" [8e423d28-6bda-40df-9c09-c7d15b1b7218] Running
	I0617 11:44:06.607491  152958 system_pods.go:61] "storage-provisioner" [6d9dbe95-91d8-44c6-9616-10f9b4b3e9c0] Running
	I0617 11:44:06.607498  152958 system_pods.go:74] duration metric: took 5.521733ms to wait for pod list to return data ...
	I0617 11:44:06.607506  152958 default_sa.go:34] waiting for default service account to be created ...
	I0617 11:44:06.670461  152958 default_sa.go:45] found service account: "default"
	I0617 11:44:06.670488  152958 default_sa.go:55] duration metric: took 62.97161ms for default service account to be created ...
	I0617 11:44:06.670496  152958 system_pods.go:116] waiting for k8s-apps to be running ...
	I0617 11:44:06.872924  152958 system_pods.go:86] 7 kube-system pods found
	I0617 11:44:06.872952  152958 system_pods.go:89] "coredns-6d4b75cb6d-lsnwr" [8f33b34a-5d01-427b-bafb-186fd7b858df] Running
	I0617 11:44:06.872960  152958 system_pods.go:89] "etcd-test-preload-392702" [885c35fd-d93e-4399-b479-8fb09f6871ee] Running
	I0617 11:44:06.872964  152958 system_pods.go:89] "kube-apiserver-test-preload-392702" [12f2ab42-8a27-46eb-a6a1-465d4f6b46a5] Running
	I0617 11:44:06.872968  152958 system_pods.go:89] "kube-controller-manager-test-preload-392702" [3ac6949f-c518-4778-b92a-ab0dea4d33c7] Running
	I0617 11:44:06.872972  152958 system_pods.go:89] "kube-proxy-gtw27" [a5335459-09fc-4ca8-baf5-1176a373b395] Running
	I0617 11:44:06.872976  152958 system_pods.go:89] "kube-scheduler-test-preload-392702" [8e423d28-6bda-40df-9c09-c7d15b1b7218] Running
	I0617 11:44:06.872979  152958 system_pods.go:89] "storage-provisioner" [6d9dbe95-91d8-44c6-9616-10f9b4b3e9c0] Running
	I0617 11:44:06.872986  152958 system_pods.go:126] duration metric: took 202.485079ms to wait for k8s-apps to be running ...
	I0617 11:44:06.872992  152958 system_svc.go:44] waiting for kubelet service to be running ....
	I0617 11:44:06.873045  152958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:44:06.886927  152958 system_svc.go:56] duration metric: took 13.923908ms WaitForService to wait for kubelet
	I0617 11:44:06.886956  152958 kubeadm.go:576] duration metric: took 11.436071962s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 11:44:06.886980  152958 node_conditions.go:102] verifying NodePressure condition ...
	I0617 11:44:07.071114  152958 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 11:44:07.071143  152958 node_conditions.go:123] node cpu capacity is 2
	I0617 11:44:07.071156  152958 node_conditions.go:105] duration metric: took 184.170042ms to run NodePressure ...
	I0617 11:44:07.071167  152958 start.go:240] waiting for startup goroutines ...
	I0617 11:44:07.071174  152958 start.go:245] waiting for cluster config update ...
	I0617 11:44:07.071184  152958 start.go:254] writing updated cluster config ...
	I0617 11:44:07.071491  152958 ssh_runner.go:195] Run: rm -f paused
	I0617 11:44:07.118628  152958 start.go:600] kubectl: 1.30.2, cluster: 1.24.4 (minor skew: 6)
	I0617 11:44:07.120557  152958 out.go:177] 
	W0617 11:44:07.121893  152958 out.go:239] ! /usr/local/bin/kubectl is version 1.30.2, which may have incompatibilities with Kubernetes 1.24.4.
	I0617 11:44:07.123193  152958 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0617 11:44:07.124422  152958 out.go:177] * Done! kubectl is now configured to use "test-preload-392702" cluster and "default" namespace by default
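
	The api_server.go entries above show the usual restart progression for the apiserver /healthz endpoint: 403 while anonymous access is rejected and the RBAC bootstrap roles have not yet been recreated, 500 while the rbac/bootstrap-roles and scheduling post-start hooks are still failing, then 200 "ok". The following is a minimal Go sketch of that polling pattern only, not minikube's implementation; the address is taken from the log, while the 2-minute deadline, 500ms interval, and the TLS-verification skip are assumptions made to keep the sketch self-contained.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// Assumption: the cluster uses a self-signed CA, so verification is skipped
    	// here purely to keep the example runnable without the CA file.
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	url := "https://192.168.39.217:8443/healthz" // apiserver address from the log above
    	deadline := time.Now().Add(2 * time.Minute)  // assumed overall timeout
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				// "ok" once all post-start hooks have completed.
    				fmt.Printf("healthz: %s\n", body)
    				return
    			}
    			// 403 (RBAC bootstrap roles missing) and 500 (failing post-start
    			// hooks) are the transient states visible in the log above.
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver health")
    }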
	
	
	==> CRI-O <==
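	The debug entries below are CRI-O serving Version, ImageFsInfo, and ListContainers RPCs over its CRI gRPC API. The sketch that follows issues the same Version and ListContainers calls directly against the runtime socket; it is for illustration only (not how the kubelet or this test queries the runtime) and assumes CRI-O's default /var/run/crio/crio.sock endpoint and the k8s.io/cri-api v1 client.

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Assumption: CRI-O's default socket path; run this on the node itself.
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	client := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// Same RPC as the Version requests below: runtime name, version, CRI API version.
    	ver, err := client.Version(ctx, &runtimeapi.VersionRequest{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

    	// Same RPC as the ListContainers requests below; an empty filter returns all containers.
    	list, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, c := range list.Containers {
    		fmt.Printf("%s\t%s\t%s\n", c.Id[:12], c.Metadata.Name, c.State)
    	}
    }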
	Jun 17 11:44:07 test-preload-392702 crio[705]: time="2024-06-17 11:44:07.999477926Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718624647999452465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aee8854e-459c-4122-a8b3-4619e9338a12 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:44:08 test-preload-392702 crio[705]: time="2024-06-17 11:44:08.000057789Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d52a3774-5094-4f47-9f2f-be94bc9ebbb2 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:44:08 test-preload-392702 crio[705]: time="2024-06-17 11:44:08.000132100Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d52a3774-5094-4f47-9f2f-be94bc9ebbb2 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:44:08 test-preload-392702 crio[705]: time="2024-06-17 11:44:08.000474798Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef6214ac8ecf806c9f62ac17b8ab13c89d804ab3d5b39529b8135e93919ae988,PodSandboxId:432e4dcfe50ee558c857e3a7dcefc4caa0c00239d4dc4f8aef4b6809e2cfb6bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1718624641037122775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-lsnwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f33b34a-5d01-427b-bafb-186fd7b858df,},Annotations:map[string]string{io.kubernetes.container.hash: 53d2c043,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a5d0c65b22c1ee4e51250258ff2ff26b5bf71e60e5ff9b6492c4444a1f63d40,PodSandboxId:004d8b75dc8e1455e71ab61fb03890bf27f583863b7b0b1d528f252ac7094ab1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718624634845868412,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 6d9dbe95-91d8-44c6-9616-10f9b4b3e9c0,},Annotations:map[string]string{io.kubernetes.container.hash: 3da99a12,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:672d627734199d93b5edcc5ca78ab31154738527dcd7b2a5bff08ef68c20cbd6,PodSandboxId:004d8b75dc8e1455e71ab61fb03890bf27f583863b7b0b1d528f252ac7094ab1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718624634082260615,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 6d9dbe95-91d8-44c6-9616-10f9b4b3e9c0,},Annotations:map[string]string{io.kubernetes.container.hash: 3da99a12,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5cc03a2142f4c39ca295181b49fa4b1ba4e70f29482413728b604ddd89d0ee7,PodSandboxId:3e61e28a6ddfe9edf9a11593c335ba2a39a54a23c6be5b68d5d000e98ba815ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1718624633685285894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gtw27,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5335459-09fc-4
ca8-baf5-1176a373b395,},Annotations:map[string]string{io.kubernetes.container.hash: 2b366c1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c503ae6b9e279b3a171c3fcd9012a3785ea91a36eb6aadeab1af306e96e4ccbf,PodSandboxId:21bcc1ef532594f4f3e5109925861a9bceb292150622f5c405e4076dced83357,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1718624627511688102,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-392702,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f538564e336879bc24e917b4b9274822,},Annotations:map[s
tring]string{io.kubernetes.container.hash: bf293e22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61fa45701c43470f5e7fe6d5381511bfb9459f7b029858a4f27c564e3c079373,PodSandboxId:2738a0f9ae8c0d9030883ae4605ff9d22cbf6483e622399ea1c6ff4a8730fbc6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1718624627486384972,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-392702,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4004981b62e5d72cb474a527031b6444,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 91a9570b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2aaeedb364ba59b40ca58da949d3ed16d4b5da5085b81b4f83e1afc79b8af8,PodSandboxId:0977dcd00227433da6d591e61675ecb7112e2120f1d0e767fe6f1aefd8e4ec76,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1718624627447491120,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-392702,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ced92c9f5fd025f58eec59a007408490,},Annotations:map[string]string{io.kubern
etes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:623bec91b6a5f84c397e58dd8ca9f4c99c659cd8175863d2a125382a15795ba3,PodSandboxId:89c96bbd445517da57f20035caa59e51bc4f6bbfe29e33f50b55765a9ee7a7e3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1718624627391023383,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-392702,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4acead1c0caaf12730580db3e71d7107,},Annotations:map[string]
string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d52a3774-5094-4f47-9f2f-be94bc9ebbb2 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:44:08 test-preload-392702 crio[705]: time="2024-06-17 11:44:08.038592335Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=174f68b5-78bd-4f34-a439-f046ddf1820d name=/runtime.v1.RuntimeService/Version
	Jun 17 11:44:08 test-preload-392702 crio[705]: time="2024-06-17 11:44:08.038672861Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=174f68b5-78bd-4f34-a439-f046ddf1820d name=/runtime.v1.RuntimeService/Version
	Jun 17 11:44:08 test-preload-392702 crio[705]: time="2024-06-17 11:44:08.039881485Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cd870254-de5a-475c-b3af-262c22b6d282 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:44:08 test-preload-392702 crio[705]: time="2024-06-17 11:44:08.040308122Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718624648040287756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cd870254-de5a-475c-b3af-262c22b6d282 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:44:08 test-preload-392702 crio[705]: time="2024-06-17 11:44:08.041014482Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=978c2f00-c025-4e78-97a8-2fcfb0ec126e name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:44:08 test-preload-392702 crio[705]: time="2024-06-17 11:44:08.041067139Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=978c2f00-c025-4e78-97a8-2fcfb0ec126e name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:44:08 test-preload-392702 crio[705]: time="2024-06-17 11:44:08.041249211Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef6214ac8ecf806c9f62ac17b8ab13c89d804ab3d5b39529b8135e93919ae988,PodSandboxId:432e4dcfe50ee558c857e3a7dcefc4caa0c00239d4dc4f8aef4b6809e2cfb6bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1718624641037122775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-lsnwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f33b34a-5d01-427b-bafb-186fd7b858df,},Annotations:map[string]string{io.kubernetes.container.hash: 53d2c043,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a5d0c65b22c1ee4e51250258ff2ff26b5bf71e60e5ff9b6492c4444a1f63d40,PodSandboxId:004d8b75dc8e1455e71ab61fb03890bf27f583863b7b0b1d528f252ac7094ab1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718624634845868412,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 6d9dbe95-91d8-44c6-9616-10f9b4b3e9c0,},Annotations:map[string]string{io.kubernetes.container.hash: 3da99a12,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:672d627734199d93b5edcc5ca78ab31154738527dcd7b2a5bff08ef68c20cbd6,PodSandboxId:004d8b75dc8e1455e71ab61fb03890bf27f583863b7b0b1d528f252ac7094ab1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718624634082260615,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 6d9dbe95-91d8-44c6-9616-10f9b4b3e9c0,},Annotations:map[string]string{io.kubernetes.container.hash: 3da99a12,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5cc03a2142f4c39ca295181b49fa4b1ba4e70f29482413728b604ddd89d0ee7,PodSandboxId:3e61e28a6ddfe9edf9a11593c335ba2a39a54a23c6be5b68d5d000e98ba815ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1718624633685285894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gtw27,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5335459-09fc-4
ca8-baf5-1176a373b395,},Annotations:map[string]string{io.kubernetes.container.hash: 2b366c1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c503ae6b9e279b3a171c3fcd9012a3785ea91a36eb6aadeab1af306e96e4ccbf,PodSandboxId:21bcc1ef532594f4f3e5109925861a9bceb292150622f5c405e4076dced83357,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1718624627511688102,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-392702,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f538564e336879bc24e917b4b9274822,},Annotations:map[s
tring]string{io.kubernetes.container.hash: bf293e22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61fa45701c43470f5e7fe6d5381511bfb9459f7b029858a4f27c564e3c079373,PodSandboxId:2738a0f9ae8c0d9030883ae4605ff9d22cbf6483e622399ea1c6ff4a8730fbc6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1718624627486384972,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-392702,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4004981b62e5d72cb474a527031b6444,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 91a9570b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2aaeedb364ba59b40ca58da949d3ed16d4b5da5085b81b4f83e1afc79b8af8,PodSandboxId:0977dcd00227433da6d591e61675ecb7112e2120f1d0e767fe6f1aefd8e4ec76,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1718624627447491120,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-392702,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ced92c9f5fd025f58eec59a007408490,},Annotations:map[string]string{io.kubern
etes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:623bec91b6a5f84c397e58dd8ca9f4c99c659cd8175863d2a125382a15795ba3,PodSandboxId:89c96bbd445517da57f20035caa59e51bc4f6bbfe29e33f50b55765a9ee7a7e3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1718624627391023383,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-392702,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4acead1c0caaf12730580db3e71d7107,},Annotations:map[string]
string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=978c2f00-c025-4e78-97a8-2fcfb0ec126e name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:44:08 test-preload-392702 crio[705]: time="2024-06-17 11:44:08.078576842Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8e085e25-6aa7-402e-bc6a-2da3b798aa63 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:44:08 test-preload-392702 crio[705]: time="2024-06-17 11:44:08.078647890Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8e085e25-6aa7-402e-bc6a-2da3b798aa63 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:44:08 test-preload-392702 crio[705]: time="2024-06-17 11:44:08.080146213Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3d36c3ac-f909-4b93-8f86-8866d9bfc19d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:44:08 test-preload-392702 crio[705]: time="2024-06-17 11:44:08.080670831Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718624648080647113,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3d36c3ac-f909-4b93-8f86-8866d9bfc19d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:44:08 test-preload-392702 crio[705]: time="2024-06-17 11:44:08.081153181Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a8cd162-6f1b-4d6c-8ff4-c7b7bf15cdfa name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:44:08 test-preload-392702 crio[705]: time="2024-06-17 11:44:08.081208690Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a8cd162-6f1b-4d6c-8ff4-c7b7bf15cdfa name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:44:08 test-preload-392702 crio[705]: time="2024-06-17 11:44:08.081403157Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef6214ac8ecf806c9f62ac17b8ab13c89d804ab3d5b39529b8135e93919ae988,PodSandboxId:432e4dcfe50ee558c857e3a7dcefc4caa0c00239d4dc4f8aef4b6809e2cfb6bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1718624641037122775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-lsnwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f33b34a-5d01-427b-bafb-186fd7b858df,},Annotations:map[string]string{io.kubernetes.container.hash: 53d2c043,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a5d0c65b22c1ee4e51250258ff2ff26b5bf71e60e5ff9b6492c4444a1f63d40,PodSandboxId:004d8b75dc8e1455e71ab61fb03890bf27f583863b7b0b1d528f252ac7094ab1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718624634845868412,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 6d9dbe95-91d8-44c6-9616-10f9b4b3e9c0,},Annotations:map[string]string{io.kubernetes.container.hash: 3da99a12,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:672d627734199d93b5edcc5ca78ab31154738527dcd7b2a5bff08ef68c20cbd6,PodSandboxId:004d8b75dc8e1455e71ab61fb03890bf27f583863b7b0b1d528f252ac7094ab1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718624634082260615,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 6d9dbe95-91d8-44c6-9616-10f9b4b3e9c0,},Annotations:map[string]string{io.kubernetes.container.hash: 3da99a12,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5cc03a2142f4c39ca295181b49fa4b1ba4e70f29482413728b604ddd89d0ee7,PodSandboxId:3e61e28a6ddfe9edf9a11593c335ba2a39a54a23c6be5b68d5d000e98ba815ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1718624633685285894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gtw27,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5335459-09fc-4
ca8-baf5-1176a373b395,},Annotations:map[string]string{io.kubernetes.container.hash: 2b366c1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c503ae6b9e279b3a171c3fcd9012a3785ea91a36eb6aadeab1af306e96e4ccbf,PodSandboxId:21bcc1ef532594f4f3e5109925861a9bceb292150622f5c405e4076dced83357,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1718624627511688102,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-392702,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f538564e336879bc24e917b4b9274822,},Annotations:map[s
tring]string{io.kubernetes.container.hash: bf293e22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61fa45701c43470f5e7fe6d5381511bfb9459f7b029858a4f27c564e3c079373,PodSandboxId:2738a0f9ae8c0d9030883ae4605ff9d22cbf6483e622399ea1c6ff4a8730fbc6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1718624627486384972,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-392702,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4004981b62e5d72cb474a527031b6444,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 91a9570b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2aaeedb364ba59b40ca58da949d3ed16d4b5da5085b81b4f83e1afc79b8af8,PodSandboxId:0977dcd00227433da6d591e61675ecb7112e2120f1d0e767fe6f1aefd8e4ec76,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1718624627447491120,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-392702,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ced92c9f5fd025f58eec59a007408490,},Annotations:map[string]string{io.kubern
etes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:623bec91b6a5f84c397e58dd8ca9f4c99c659cd8175863d2a125382a15795ba3,PodSandboxId:89c96bbd445517da57f20035caa59e51bc4f6bbfe29e33f50b55765a9ee7a7e3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1718624627391023383,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-392702,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4acead1c0caaf12730580db3e71d7107,},Annotations:map[string]
string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1a8cd162-6f1b-4d6c-8ff4-c7b7bf15cdfa name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:44:08 test-preload-392702 crio[705]: time="2024-06-17 11:44:08.116203487Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b3038762-7d2a-4908-86df-f478d0d46062 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:44:08 test-preload-392702 crio[705]: time="2024-06-17 11:44:08.116297384Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b3038762-7d2a-4908-86df-f478d0d46062 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:44:08 test-preload-392702 crio[705]: time="2024-06-17 11:44:08.117219032Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0916f2d1-97fc-4840-93d1-53964c45e9dc name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:44:08 test-preload-392702 crio[705]: time="2024-06-17 11:44:08.117831111Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718624648117807030,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0916f2d1-97fc-4840-93d1-53964c45e9dc name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:44:08 test-preload-392702 crio[705]: time="2024-06-17 11:44:08.118267325Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ee6bf581-7a5e-471d-a8e0-71d4a6f3dde2 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:44:08 test-preload-392702 crio[705]: time="2024-06-17 11:44:08.118317258Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ee6bf581-7a5e-471d-a8e0-71d4a6f3dde2 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:44:08 test-preload-392702 crio[705]: time="2024-06-17 11:44:08.118476656Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef6214ac8ecf806c9f62ac17b8ab13c89d804ab3d5b39529b8135e93919ae988,PodSandboxId:432e4dcfe50ee558c857e3a7dcefc4caa0c00239d4dc4f8aef4b6809e2cfb6bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1718624641037122775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-lsnwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f33b34a-5d01-427b-bafb-186fd7b858df,},Annotations:map[string]string{io.kubernetes.container.hash: 53d2c043,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a5d0c65b22c1ee4e51250258ff2ff26b5bf71e60e5ff9b6492c4444a1f63d40,PodSandboxId:004d8b75dc8e1455e71ab61fb03890bf27f583863b7b0b1d528f252ac7094ab1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718624634845868412,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 6d9dbe95-91d8-44c6-9616-10f9b4b3e9c0,},Annotations:map[string]string{io.kubernetes.container.hash: 3da99a12,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:672d627734199d93b5edcc5ca78ab31154738527dcd7b2a5bff08ef68c20cbd6,PodSandboxId:004d8b75dc8e1455e71ab61fb03890bf27f583863b7b0b1d528f252ac7094ab1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718624634082260615,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 6d9dbe95-91d8-44c6-9616-10f9b4b3e9c0,},Annotations:map[string]string{io.kubernetes.container.hash: 3da99a12,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5cc03a2142f4c39ca295181b49fa4b1ba4e70f29482413728b604ddd89d0ee7,PodSandboxId:3e61e28a6ddfe9edf9a11593c335ba2a39a54a23c6be5b68d5d000e98ba815ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1718624633685285894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gtw27,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5335459-09fc-4
ca8-baf5-1176a373b395,},Annotations:map[string]string{io.kubernetes.container.hash: 2b366c1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c503ae6b9e279b3a171c3fcd9012a3785ea91a36eb6aadeab1af306e96e4ccbf,PodSandboxId:21bcc1ef532594f4f3e5109925861a9bceb292150622f5c405e4076dced83357,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1718624627511688102,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-392702,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f538564e336879bc24e917b4b9274822,},Annotations:map[s
tring]string{io.kubernetes.container.hash: bf293e22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61fa45701c43470f5e7fe6d5381511bfb9459f7b029858a4f27c564e3c079373,PodSandboxId:2738a0f9ae8c0d9030883ae4605ff9d22cbf6483e622399ea1c6ff4a8730fbc6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1718624627486384972,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-392702,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4004981b62e5d72cb474a527031b6444,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 91a9570b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2aaeedb364ba59b40ca58da949d3ed16d4b5da5085b81b4f83e1afc79b8af8,PodSandboxId:0977dcd00227433da6d591e61675ecb7112e2120f1d0e767fe6f1aefd8e4ec76,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1718624627447491120,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-392702,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ced92c9f5fd025f58eec59a007408490,},Annotations:map[string]string{io.kubern
etes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:623bec91b6a5f84c397e58dd8ca9f4c99c659cd8175863d2a125382a15795ba3,PodSandboxId:89c96bbd445517da57f20035caa59e51bc4f6bbfe29e33f50b55765a9ee7a7e3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1718624627391023383,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-392702,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4acead1c0caaf12730580db3e71d7107,},Annotations:map[string]
string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ee6bf581-7a5e-471d-a8e0-71d4a6f3dde2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ef6214ac8ecf8       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   7 seconds ago       Running             coredns                   1                   432e4dcfe50ee       coredns-6d4b75cb6d-lsnwr
	6a5d0c65b22c1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Running             storage-provisioner       2                   004d8b75dc8e1       storage-provisioner
	672d627734199       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Exited              storage-provisioner       1                   004d8b75dc8e1       storage-provisioner
	e5cc03a2142f4       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago      Running             kube-proxy                1                   3e61e28a6ddfe       kube-proxy-gtw27
	c503ae6b9e279       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   20 seconds ago      Running             etcd                      1                   21bcc1ef53259       etcd-test-preload-392702
	61fa45701c434       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   20 seconds ago      Running             kube-apiserver            1                   2738a0f9ae8c0       kube-apiserver-test-preload-392702
	ca2aaeedb364b       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   20 seconds ago      Running             kube-scheduler            1                   0977dcd002274       kube-scheduler-test-preload-392702
	623bec91b6a5f       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   20 seconds ago      Running             kube-controller-manager   1                   89c96bbd44551       kube-controller-manager-test-preload-392702
	
	
	==> coredns [ef6214ac8ecf806c9f62ac17b8ab13c89d804ab3d5b39529b8135e93919ae988] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:60401 - 47656 "HINFO IN 3042054076335401424.266216633246806782. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010752055s
	
	
	==> describe nodes <==
	Name:               test-preload-392702
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-392702
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6
	                    minikube.k8s.io/name=test-preload-392702
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_17T11_42_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jun 2024 11:42:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-392702
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jun 2024 11:44:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jun 2024 11:44:02 +0000   Mon, 17 Jun 2024 11:42:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jun 2024 11:44:02 +0000   Mon, 17 Jun 2024 11:42:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jun 2024 11:44:02 +0000   Mon, 17 Jun 2024 11:42:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jun 2024 11:44:02 +0000   Mon, 17 Jun 2024 11:44:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.217
	  Hostname:    test-preload-392702
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7a44e39d7d6044e68ad3f12c037ca0b1
	  System UUID:                7a44e39d-7d60-44e6-8ad3-f12c037ca0b1
	  Boot ID:                    578e91ba-8cab-4696-86e9-9964749e14ca
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-lsnwr                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     74s
	  kube-system                 etcd-test-preload-392702                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         87s
	  kube-system                 kube-apiserver-test-preload-392702             250m (12%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-controller-manager-test-preload-392702    200m (10%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-proxy-gtw27                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-scheduler-test-preload-392702             100m (5%)     0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14s                kube-proxy       
	  Normal  Starting                 72s                kube-proxy       
	  Normal  Starting                 87s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  87s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  87s                kubelet          Node test-preload-392702 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    87s                kubelet          Node test-preload-392702 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     87s                kubelet          Node test-preload-392702 status is now: NodeHasSufficientPID
	  Normal  NodeReady                77s                kubelet          Node test-preload-392702 status is now: NodeReady
	  Normal  RegisteredNode           74s                node-controller  Node test-preload-392702 event: Registered Node test-preload-392702 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node test-preload-392702 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node test-preload-392702 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node test-preload-392702 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                 node-controller  Node test-preload-392702 event: Registered Node test-preload-392702 in Controller
	
	
	==> dmesg <==
	[Jun17 11:43] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052423] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039695] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.514045] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.437992] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.603710] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.509319] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.069779] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059664] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.169009] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.131769] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.249193] systemd-fstab-generator[690]: Ignoring "noauto" option for root device
	[ +13.132592] systemd-fstab-generator[968]: Ignoring "noauto" option for root device
	[  +0.061140] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.559490] systemd-fstab-generator[1098]: Ignoring "noauto" option for root device
	[  +6.311357] kauditd_printk_skb: 105 callbacks suppressed
	[  +2.764449] systemd-fstab-generator[1761]: Ignoring "noauto" option for root device
	[  +5.286431] kauditd_printk_skb: 58 callbacks suppressed
	
	
	==> etcd [c503ae6b9e279b3a171c3fcd9012a3785ea91a36eb6aadeab1af306e96e4ccbf] <==
	{"level":"info","ts":"2024-06-17T11:43:47.993Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"a09c9983ac28f1fd","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-06-17T11:43:47.994Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-06-17T11:43:47.998Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd switched to configuration voters=(11573293933243462141)"}
	{"level":"info","ts":"2024-06-17T11:43:47.998Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8f39477865362797","local-member-id":"a09c9983ac28f1fd","added-peer-id":"a09c9983ac28f1fd","added-peer-peer-urls":["https://192.168.39.217:2380"]}
	{"level":"info","ts":"2024-06-17T11:43:47.998Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8f39477865362797","local-member-id":"a09c9983ac28f1fd","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-17T11:43:47.998Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-17T11:43:48.010Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-17T11:43:48.010Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"a09c9983ac28f1fd","initial-advertise-peer-urls":["https://192.168.39.217:2380"],"listen-peer-urls":["https://192.168.39.217:2380"],"advertise-client-urls":["https://192.168.39.217:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.217:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-17T11:43:48.010Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-17T11:43:48.010Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.217:2380"}
	{"level":"info","ts":"2024-06-17T11:43:48.010Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.217:2380"}
	{"level":"info","ts":"2024-06-17T11:43:49.736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-17T11:43:49.736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-17T11:43:49.736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd received MsgPreVoteResp from a09c9983ac28f1fd at term 2"}
	{"level":"info","ts":"2024-06-17T11:43:49.736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd became candidate at term 3"}
	{"level":"info","ts":"2024-06-17T11:43:49.736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd received MsgVoteResp from a09c9983ac28f1fd at term 3"}
	{"level":"info","ts":"2024-06-17T11:43:49.736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd became leader at term 3"}
	{"level":"info","ts":"2024-06-17T11:43:49.736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a09c9983ac28f1fd elected leader a09c9983ac28f1fd at term 3"}
	{"level":"info","ts":"2024-06-17T11:43:49.738Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"a09c9983ac28f1fd","local-member-attributes":"{Name:test-preload-392702 ClientURLs:[https://192.168.39.217:2379]}","request-path":"/0/members/a09c9983ac28f1fd/attributes","cluster-id":"8f39477865362797","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-17T11:43:49.739Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-17T11:43:49.739Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-17T11:43:49.741Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-17T11:43:49.741Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.217:2379"}
	{"level":"info","ts":"2024-06-17T11:43:49.741Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-17T11:43:49.742Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 11:44:08 up 0 min,  0 users,  load average: 0.40, 0.12, 0.04
	Linux test-preload-392702 5.10.207 #1 SMP Tue Jun 11 00:16:05 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [61fa45701c43470f5e7fe6d5381511bfb9459f7b029858a4f27c564e3c079373] <==
	I0617 11:43:52.132032       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0617 11:43:52.138499       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0617 11:43:52.138588       1 shared_informer.go:255] Waiting for caches to sync for cluster_authentication_trust_controller
	I0617 11:43:52.147437       1 apf_controller.go:317] Starting API Priority and Fairness config controller
	I0617 11:43:52.192309       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0617 11:43:52.204498       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0617 11:43:52.258833       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0617 11:43:52.329668       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0617 11:43:52.329690       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0617 11:43:52.330363       1 cache.go:39] Caches are synced for autoregister controller
	I0617 11:43:52.332419       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0617 11:43:52.335152       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0617 11:43:52.338652       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0617 11:43:52.348080       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0617 11:43:52.352824       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0617 11:43:52.828119       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0617 11:43:53.135488       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0617 11:43:53.960046       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0617 11:43:53.969482       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0617 11:43:54.013474       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0617 11:43:54.045033       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0617 11:43:54.055821       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0617 11:43:54.146350       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0617 11:44:05.465604       1 controller.go:611] quota admission added evaluator for: endpoints
	I0617 11:44:05.514169       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [623bec91b6a5f84c397e58dd8ca9f4c99c659cd8175863d2a125382a15795ba3] <==
	I0617 11:44:05.376958       1 shared_informer.go:262] Caches are synced for persistent volume
	I0617 11:44:05.379338       1 shared_informer.go:262] Caches are synced for expand
	I0617 11:44:05.380541       1 shared_informer.go:262] Caches are synced for crt configmap
	I0617 11:44:05.397171       1 shared_informer.go:262] Caches are synced for HPA
	I0617 11:44:05.399751       1 shared_informer.go:262] Caches are synced for ephemeral
	I0617 11:44:05.406354       1 shared_informer.go:262] Caches are synced for PV protection
	I0617 11:44:05.406385       1 shared_informer.go:262] Caches are synced for job
	I0617 11:44:05.406421       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0617 11:44:05.406423       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0617 11:44:05.408075       1 shared_informer.go:262] Caches are synced for PVC protection
	I0617 11:44:05.418275       1 shared_informer.go:262] Caches are synced for daemon sets
	I0617 11:44:05.430345       1 shared_informer.go:262] Caches are synced for stateful set
	I0617 11:44:05.456204       1 shared_informer.go:262] Caches are synced for endpoint
	I0617 11:44:05.456227       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0617 11:44:05.504712       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0617 11:44:05.556160       1 shared_informer.go:262] Caches are synced for attach detach
	I0617 11:44:05.606324       1 shared_informer.go:262] Caches are synced for resource quota
	I0617 11:44:05.611846       1 shared_informer.go:262] Caches are synced for resource quota
	I0617 11:44:05.619919       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0617 11:44:05.625480       1 shared_informer.go:262] Caches are synced for disruption
	I0617 11:44:05.625565       1 disruption.go:371] Sending events to api server.
	I0617 11:44:05.650443       1 shared_informer.go:262] Caches are synced for deployment
	I0617 11:44:06.056497       1 shared_informer.go:262] Caches are synced for garbage collector
	I0617 11:44:06.081144       1 shared_informer.go:262] Caches are synced for garbage collector
	I0617 11:44:06.081159       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [e5cc03a2142f4c39ca295181b49fa4b1ba4e70f29482413728b604ddd89d0ee7] <==
	I0617 11:43:54.031616       1 node.go:163] Successfully retrieved node IP: 192.168.39.217
	I0617 11:43:54.032010       1 server_others.go:138] "Detected node IP" address="192.168.39.217"
	I0617 11:43:54.032149       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0617 11:43:54.130234       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0617 11:43:54.130305       1 server_others.go:206] "Using iptables Proxier"
	I0617 11:43:54.130355       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0617 11:43:54.130789       1 server.go:661] "Version info" version="v1.24.4"
	I0617 11:43:54.132158       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0617 11:43:54.136480       1 config.go:444] "Starting node config controller"
	I0617 11:43:54.136562       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0617 11:43:54.139062       1 config.go:317] "Starting service config controller"
	I0617 11:43:54.139134       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0617 11:43:54.139231       1 config.go:226] "Starting endpoint slice config controller"
	I0617 11:43:54.139276       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0617 11:43:54.237214       1 shared_informer.go:262] Caches are synced for node config
	I0617 11:43:54.239691       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0617 11:43:54.239745       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [ca2aaeedb364ba59b40ca58da949d3ed16d4b5da5085b81b4f83e1afc79b8af8] <==
	I0617 11:43:48.459152       1 serving.go:348] Generated self-signed cert in-memory
	W0617 11:43:52.225627       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0617 11:43:52.225742       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0617 11:43:52.225771       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0617 11:43:52.225835       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0617 11:43:52.283864       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0617 11:43:52.283940       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0617 11:43:52.290036       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0617 11:43:52.290249       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0617 11:43:52.290303       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0617 11:43:52.290363       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0617 11:43:52.390719       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 17 11:43:52 test-preload-392702 kubelet[1105]: I0617 11:43:52.728144    1105 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a5335459-09fc-4ca8-baf5-1176a373b395-kube-proxy\") pod \"kube-proxy-gtw27\" (UID: \"a5335459-09fc-4ca8-baf5-1176a373b395\") " pod="kube-system/kube-proxy-gtw27"
	Jun 17 11:43:52 test-preload-392702 kubelet[1105]: I0617 11:43:52.728169    1105 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5w8r\" (UniqueName: \"kubernetes.io/projected/a5335459-09fc-4ca8-baf5-1176a373b395-kube-api-access-n5w8r\") pod \"kube-proxy-gtw27\" (UID: \"a5335459-09fc-4ca8-baf5-1176a373b395\") " pod="kube-system/kube-proxy-gtw27"
	Jun 17 11:43:52 test-preload-392702 kubelet[1105]: I0617 11:43:52.728188    1105 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6d9dbe95-91d8-44c6-9616-10f9b4b3e9c0-tmp\") pod \"storage-provisioner\" (UID: \"6d9dbe95-91d8-44c6-9616-10f9b4b3e9c0\") " pod="kube-system/storage-provisioner"
	Jun 17 11:43:52 test-preload-392702 kubelet[1105]: I0617 11:43:52.728206    1105 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a5335459-09fc-4ca8-baf5-1176a373b395-xtables-lock\") pod \"kube-proxy-gtw27\" (UID: \"a5335459-09fc-4ca8-baf5-1176a373b395\") " pod="kube-system/kube-proxy-gtw27"
	Jun 17 11:43:52 test-preload-392702 kubelet[1105]: I0617 11:43:52.728236    1105 reconciler.go:159] "Reconciler: start to sync state"
	Jun 17 11:43:53 test-preload-392702 kubelet[1105]: I0617 11:43:53.136154    1105 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5abe4afb-bdb7-46ba-bf06-44bb41590c98-config-volume\") pod \"5abe4afb-bdb7-46ba-bf06-44bb41590c98\" (UID: \"5abe4afb-bdb7-46ba-bf06-44bb41590c98\") "
	Jun 17 11:43:53 test-preload-392702 kubelet[1105]: I0617 11:43:53.136306    1105 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdgtb\" (UniqueName: \"kubernetes.io/projected/5abe4afb-bdb7-46ba-bf06-44bb41590c98-kube-api-access-hdgtb\") pod \"5abe4afb-bdb7-46ba-bf06-44bb41590c98\" (UID: \"5abe4afb-bdb7-46ba-bf06-44bb41590c98\") "
	Jun 17 11:43:53 test-preload-392702 kubelet[1105]: E0617 11:43:53.137132    1105 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jun 17 11:43:53 test-preload-392702 kubelet[1105]: E0617 11:43:53.137312    1105 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/8f33b34a-5d01-427b-bafb-186fd7b858df-config-volume podName:8f33b34a-5d01-427b-bafb-186fd7b858df nodeName:}" failed. No retries permitted until 2024-06-17 11:43:53.637181649 +0000 UTC m=+7.109863704 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f33b34a-5d01-427b-bafb-186fd7b858df-config-volume") pod "coredns-6d4b75cb6d-lsnwr" (UID: "8f33b34a-5d01-427b-bafb-186fd7b858df") : object "kube-system"/"coredns" not registered
	Jun 17 11:43:53 test-preload-392702 kubelet[1105]: W0617 11:43:53.138611    1105 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/5abe4afb-bdb7-46ba-bf06-44bb41590c98/volumes/kubernetes.io~projected/kube-api-access-hdgtb: clearQuota called, but quotas disabled
	Jun 17 11:43:53 test-preload-392702 kubelet[1105]: W0617 11:43:53.138627    1105 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/5abe4afb-bdb7-46ba-bf06-44bb41590c98/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Jun 17 11:43:53 test-preload-392702 kubelet[1105]: I0617 11:43:53.138957    1105 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5abe4afb-bdb7-46ba-bf06-44bb41590c98-kube-api-access-hdgtb" (OuterVolumeSpecName: "kube-api-access-hdgtb") pod "5abe4afb-bdb7-46ba-bf06-44bb41590c98" (UID: "5abe4afb-bdb7-46ba-bf06-44bb41590c98"). InnerVolumeSpecName "kube-api-access-hdgtb". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 17 11:43:53 test-preload-392702 kubelet[1105]: I0617 11:43:53.139270    1105 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5abe4afb-bdb7-46ba-bf06-44bb41590c98-config-volume" (OuterVolumeSpecName: "config-volume") pod "5abe4afb-bdb7-46ba-bf06-44bb41590c98" (UID: "5abe4afb-bdb7-46ba-bf06-44bb41590c98"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Jun 17 11:43:53 test-preload-392702 kubelet[1105]: I0617 11:43:53.237358    1105 reconciler.go:384] "Volume detached for volume \"kube-api-access-hdgtb\" (UniqueName: \"kubernetes.io/projected/5abe4afb-bdb7-46ba-bf06-44bb41590c98-kube-api-access-hdgtb\") on node \"test-preload-392702\" DevicePath \"\""
	Jun 17 11:43:53 test-preload-392702 kubelet[1105]: I0617 11:43:53.237383    1105 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5abe4afb-bdb7-46ba-bf06-44bb41590c98-config-volume\") on node \"test-preload-392702\" DevicePath \"\""
	Jun 17 11:43:53 test-preload-392702 kubelet[1105]: E0617 11:43:53.640280    1105 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jun 17 11:43:53 test-preload-392702 kubelet[1105]: E0617 11:43:53.640388    1105 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/8f33b34a-5d01-427b-bafb-186fd7b858df-config-volume podName:8f33b34a-5d01-427b-bafb-186fd7b858df nodeName:}" failed. No retries permitted until 2024-06-17 11:43:54.640372661 +0000 UTC m=+8.113054713 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f33b34a-5d01-427b-bafb-186fd7b858df-config-volume") pod "coredns-6d4b75cb6d-lsnwr" (UID: "8f33b34a-5d01-427b-bafb-186fd7b858df") : object "kube-system"/"coredns" not registered
	Jun 17 11:43:53 test-preload-392702 kubelet[1105]: E0617 11:43:53.795323    1105 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-lsnwr" podUID=8f33b34a-5d01-427b-bafb-186fd7b858df
	Jun 17 11:43:54 test-preload-392702 kubelet[1105]: E0617 11:43:54.652165    1105 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jun 17 11:43:54 test-preload-392702 kubelet[1105]: E0617 11:43:54.652245    1105 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/8f33b34a-5d01-427b-bafb-186fd7b858df-config-volume podName:8f33b34a-5d01-427b-bafb-186fd7b858df nodeName:}" failed. No retries permitted until 2024-06-17 11:43:56.652231342 +0000 UTC m=+10.124913383 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f33b34a-5d01-427b-bafb-186fd7b858df-config-volume") pod "coredns-6d4b75cb6d-lsnwr" (UID: "8f33b34a-5d01-427b-bafb-186fd7b858df") : object "kube-system"/"coredns" not registered
	Jun 17 11:43:54 test-preload-392702 kubelet[1105]: I0617 11:43:54.799388    1105 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=5abe4afb-bdb7-46ba-bf06-44bb41590c98 path="/var/lib/kubelet/pods/5abe4afb-bdb7-46ba-bf06-44bb41590c98/volumes"
	Jun 17 11:43:54 test-preload-392702 kubelet[1105]: I0617 11:43:54.836844    1105 scope.go:110] "RemoveContainer" containerID="672d627734199d93b5edcc5ca78ab31154738527dcd7b2a5bff08ef68c20cbd6"
	Jun 17 11:43:55 test-preload-392702 kubelet[1105]: E0617 11:43:55.794853    1105 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-lsnwr" podUID=8f33b34a-5d01-427b-bafb-186fd7b858df
	Jun 17 11:43:56 test-preload-392702 kubelet[1105]: E0617 11:43:56.664790    1105 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jun 17 11:43:56 test-preload-392702 kubelet[1105]: E0617 11:43:56.664865    1105 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/8f33b34a-5d01-427b-bafb-186fd7b858df-config-volume podName:8f33b34a-5d01-427b-bafb-186fd7b858df nodeName:}" failed. No retries permitted until 2024-06-17 11:44:00.664850867 +0000 UTC m=+14.137532907 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f33b34a-5d01-427b-bafb-186fd7b858df-config-volume") pod "coredns-6d4b75cb6d-lsnwr" (UID: "8f33b34a-5d01-427b-bafb-186fd7b858df") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [672d627734199d93b5edcc5ca78ab31154738527dcd7b2a5bff08ef68c20cbd6] <==
	I0617 11:43:54.212246       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0617 11:43:54.214734       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [6a5d0c65b22c1ee4e51250258ff2ff26b5bf71e60e5ff9b6492c4444a1f63d40] <==
	I0617 11:43:54.917580       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0617 11:43:54.925647       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0617 11:43:54.925749       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-392702 -n test-preload-392702
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-392702 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-392702" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-392702
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-392702: (1.135280895s)
--- FAIL: TestPreload (168.47s)

                                                
                                    
x
+
TestKubernetesUpgrade (371.15s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-717156 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-717156 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m6.667850327s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-717156] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19084
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-717156" primary control-plane node in "kubernetes-upgrade-717156" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 11:49:48.540511  159952 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:49:48.540604  159952 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:49:48.540612  159952 out.go:304] Setting ErrFile to fd 2...
	I0617 11:49:48.540616  159952 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:49:48.540779  159952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:49:48.541327  159952 out.go:298] Setting JSON to false
	I0617 11:49:48.542251  159952 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":5536,"bootTime":1718619453,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0617 11:49:48.542311  159952 start.go:139] virtualization: kvm guest
	I0617 11:49:48.544523  159952 out.go:177] * [kubernetes-upgrade-717156] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0617 11:49:48.545954  159952 out.go:177]   - MINIKUBE_LOCATION=19084
	I0617 11:49:48.545964  159952 notify.go:220] Checking for updates...
	I0617 11:49:48.547084  159952 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 11:49:48.548390  159952 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 11:49:48.549597  159952 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 11:49:48.550825  159952 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0617 11:49:48.552012  159952 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 11:49:48.553507  159952 config.go:182] Loaded profile config "NoKubernetes-846787": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0617 11:49:48.553612  159952 config.go:182] Loaded profile config "cert-expiration-514753": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:49:48.553702  159952 config.go:182] Loaded profile config "force-systemd-flag-855883": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:49:48.553814  159952 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 11:49:48.594370  159952 out.go:177] * Using the kvm2 driver based on user configuration
	I0617 11:49:48.595584  159952 start.go:297] selected driver: kvm2
	I0617 11:49:48.595608  159952 start.go:901] validating driver "kvm2" against <nil>
	I0617 11:49:48.595631  159952 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 11:49:48.596346  159952 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:49:48.596438  159952 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19084-112967/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0617 11:49:48.611121  159952 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0617 11:49:48.611165  159952 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 11:49:48.611351  159952 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0617 11:49:48.611373  159952 cni.go:84] Creating CNI manager for ""
	I0617 11:49:48.611381  159952 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 11:49:48.611390  159952 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0617 11:49:48.611436  159952 start.go:340] cluster config:
	{Name:kubernetes-upgrade-717156 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-717156 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:49:48.611555  159952 iso.go:125] acquiring lock: {Name:mk4a199ad46ed9ee04de7b54caf7cc64218fe80c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:49:48.613139  159952 out.go:177] * Starting "kubernetes-upgrade-717156" primary control-plane node in "kubernetes-upgrade-717156" cluster
	I0617 11:49:48.614223  159952 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0617 11:49:48.614253  159952 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0617 11:49:48.614270  159952 cache.go:56] Caching tarball of preloaded images
	I0617 11:49:48.614343  159952 preload.go:173] Found /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0617 11:49:48.614353  159952 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0617 11:49:48.614427  159952 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/config.json ...
	I0617 11:49:48.614442  159952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/config.json: {Name:mk823c3db7387d0b3b1962b053f3965b454edc21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:49:48.614554  159952 start.go:360] acquireMachinesLock for kubernetes-upgrade-717156: {Name:mk519b8956d160a9d2b042f25b899a5ee0efa72e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 11:50:24.956863  159952 start.go:364] duration metric: took 36.34225309s to acquireMachinesLock for "kubernetes-upgrade-717156"
	I0617 11:50:24.956939  159952 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-717156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-717156 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 11:50:24.957088  159952 start.go:125] createHost starting for "" (driver="kvm2")
	I0617 11:50:24.959439  159952 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0617 11:50:24.959709  159952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:50:24.959812  159952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:50:24.980868  159952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42831
	I0617 11:50:24.981352  159952 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:50:24.982012  159952 main.go:141] libmachine: Using API Version  1
	I0617 11:50:24.982037  159952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:50:24.982459  159952 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:50:24.982701  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetMachineName
	I0617 11:50:24.982871  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .DriverName
	I0617 11:50:24.983026  159952 start.go:159] libmachine.API.Create for "kubernetes-upgrade-717156" (driver="kvm2")
	I0617 11:50:24.983058  159952 client.go:168] LocalClient.Create starting
	I0617 11:50:24.983096  159952 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem
	I0617 11:50:24.983136  159952 main.go:141] libmachine: Decoding PEM data...
	I0617 11:50:24.983157  159952 main.go:141] libmachine: Parsing certificate...
	I0617 11:50:24.983246  159952 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem
	I0617 11:50:24.983274  159952 main.go:141] libmachine: Decoding PEM data...
	I0617 11:50:24.983288  159952 main.go:141] libmachine: Parsing certificate...
	I0617 11:50:24.983322  159952 main.go:141] libmachine: Running pre-create checks...
	I0617 11:50:24.983335  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .PreCreateCheck
	I0617 11:50:24.983733  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetConfigRaw
	I0617 11:50:24.984227  159952 main.go:141] libmachine: Creating machine...
	I0617 11:50:24.984246  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .Create
	I0617 11:50:24.984381  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Creating KVM machine...
	I0617 11:50:24.985620  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | found existing default KVM network
	I0617 11:50:24.988352  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | I0617 11:50:24.988175  160386 network.go:209] skipping subnet 192.168.39.0/24 that is reserved: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0617 11:50:24.989575  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | I0617 11:50:24.989485  160386 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000112ba0}
	I0617 11:50:24.989636  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | created network xml: 
	I0617 11:50:24.989655  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | <network>
	I0617 11:50:24.989666  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG |   <name>mk-kubernetes-upgrade-717156</name>
	I0617 11:50:24.989694  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG |   <dns enable='no'/>
	I0617 11:50:24.989704  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG |   
	I0617 11:50:24.989714  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0617 11:50:24.989721  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG |     <dhcp>
	I0617 11:50:24.989729  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0617 11:50:24.989758  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG |     </dhcp>
	I0617 11:50:24.989785  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG |   </ip>
	I0617 11:50:24.989798  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG |   
	I0617 11:50:24.989806  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | </network>
	I0617 11:50:24.989820  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | 
	I0617 11:50:24.995383  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | trying to create private KVM network mk-kubernetes-upgrade-717156 192.168.50.0/24...
	I0617 11:50:25.069095  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | private KVM network mk-kubernetes-upgrade-717156 192.168.50.0/24 created
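(The private network just created can be inspected on the Jenkins host with stock libvirt tooling — a sketch, using the connection URI and network name from this log:

    virsh -c qemu:///system net-list --all
    virsh -c qemu:///system net-dumpxml mk-kubernetes-upgrade-717156   # should match the XML printed above
)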
	I0617 11:50:25.069140  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Setting up store path in /home/jenkins/minikube-integration/19084-112967/.minikube/machines/kubernetes-upgrade-717156 ...
	I0617 11:50:25.069157  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | I0617 11:50:25.069037  160386 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 11:50:25.069171  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Building disk image from file:///home/jenkins/minikube-integration/19084-112967/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso
	I0617 11:50:25.069198  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Downloading /home/jenkins/minikube-integration/19084-112967/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19084-112967/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso...
	I0617 11:50:25.320500  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | I0617 11:50:25.320291  160386 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/kubernetes-upgrade-717156/id_rsa...
	I0617 11:50:25.415519  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | I0617 11:50:25.415367  160386 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/kubernetes-upgrade-717156/kubernetes-upgrade-717156.rawdisk...
	I0617 11:50:25.415560  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | Writing magic tar header
	I0617 11:50:25.415576  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | Writing SSH key tar header
	I0617 11:50:25.415584  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | I0617 11:50:25.415518  160386 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19084-112967/.minikube/machines/kubernetes-upgrade-717156 ...
	I0617 11:50:25.415609  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/kubernetes-upgrade-717156
	I0617 11:50:25.415647  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube/machines/kubernetes-upgrade-717156 (perms=drwx------)
	I0617 11:50:25.415694  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube/machines
	I0617 11:50:25.415741  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube/machines (perms=drwxr-xr-x)
	I0617 11:50:25.415758  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 11:50:25.415797  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967
	I0617 11:50:25.415825  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube (perms=drwxr-xr-x)
	I0617 11:50:25.415838  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0617 11:50:25.415853  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | Checking permissions on dir: /home/jenkins
	I0617 11:50:25.415861  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | Checking permissions on dir: /home
	I0617 11:50:25.415875  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | Skipping /home - not owner
	I0617 11:50:25.415894  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967 (perms=drwxrwxr-x)
	I0617 11:50:25.415921  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0617 11:50:25.415940  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0617 11:50:25.415947  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Creating domain...
	I0617 11:50:25.417003  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) define libvirt domain using xml: 
	I0617 11:50:25.417026  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) <domain type='kvm'>
	I0617 11:50:25.417034  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)   <name>kubernetes-upgrade-717156</name>
	I0617 11:50:25.417039  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)   <memory unit='MiB'>2200</memory>
	I0617 11:50:25.417045  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)   <vcpu>2</vcpu>
	I0617 11:50:25.417049  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)   <features>
	I0617 11:50:25.417054  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)     <acpi/>
	I0617 11:50:25.417059  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)     <apic/>
	I0617 11:50:25.417072  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)     <pae/>
	I0617 11:50:25.417080  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)     
	I0617 11:50:25.417085  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)   </features>
	I0617 11:50:25.417090  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)   <cpu mode='host-passthrough'>
	I0617 11:50:25.417098  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)   
	I0617 11:50:25.417102  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)   </cpu>
	I0617 11:50:25.417110  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)   <os>
	I0617 11:50:25.417118  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)     <type>hvm</type>
	I0617 11:50:25.417147  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)     <boot dev='cdrom'/>
	I0617 11:50:25.417172  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)     <boot dev='hd'/>
	I0617 11:50:25.417183  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)     <bootmenu enable='no'/>
	I0617 11:50:25.417190  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)   </os>
	I0617 11:50:25.417206  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)   <devices>
	I0617 11:50:25.417215  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)     <disk type='file' device='cdrom'>
	I0617 11:50:25.417237  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)       <source file='/home/jenkins/minikube-integration/19084-112967/.minikube/machines/kubernetes-upgrade-717156/boot2docker.iso'/>
	I0617 11:50:25.417249  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)       <target dev='hdc' bus='scsi'/>
	I0617 11:50:25.417287  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)       <readonly/>
	I0617 11:50:25.417318  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)     </disk>
	I0617 11:50:25.417341  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)     <disk type='file' device='disk'>
	I0617 11:50:25.417357  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0617 11:50:25.417372  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)       <source file='/home/jenkins/minikube-integration/19084-112967/.minikube/machines/kubernetes-upgrade-717156/kubernetes-upgrade-717156.rawdisk'/>
	I0617 11:50:25.417392  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)       <target dev='hda' bus='virtio'/>
	I0617 11:50:25.417405  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)     </disk>
	I0617 11:50:25.417420  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)     <interface type='network'>
	I0617 11:50:25.417435  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)       <source network='mk-kubernetes-upgrade-717156'/>
	I0617 11:50:25.417447  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)       <model type='virtio'/>
	I0617 11:50:25.417459  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)     </interface>
	I0617 11:50:25.417470  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)     <interface type='network'>
	I0617 11:50:25.417482  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)       <source network='default'/>
	I0617 11:50:25.417494  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)       <model type='virtio'/>
	I0617 11:50:25.417505  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)     </interface>
	I0617 11:50:25.417514  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)     <serial type='pty'>
	I0617 11:50:25.417527  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)       <target port='0'/>
	I0617 11:50:25.417537  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)     </serial>
	I0617 11:50:25.417550  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)     <console type='pty'>
	I0617 11:50:25.417561  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)       <target type='serial' port='0'/>
	I0617 11:50:25.417583  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)     </console>
	I0617 11:50:25.417603  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)     <rng model='virtio'>
	I0617 11:50:25.417617  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)       <backend model='random'>/dev/random</backend>
	I0617 11:50:25.417628  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)     </rng>
	I0617 11:50:25.417638  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)     
	I0617 11:50:25.417647  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)     
	I0617 11:50:25.417656  159952 main.go:141] libmachine: (kubernetes-upgrade-717156)   </devices>
	I0617 11:50:25.417671  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) </domain>
	I0617 11:50:25.417703  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) 
	I0617 11:50:25.421911  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:49:6f:da in network default
	I0617 11:50:25.422625  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Ensuring networks are active...
	I0617 11:50:25.422643  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:25.423383  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Ensuring network default is active
	I0617 11:50:25.423698  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Ensuring network mk-kubernetes-upgrade-717156 is active
	I0617 11:50:25.424223  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Getting domain xml...
	I0617 11:50:25.424945  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Creating domain...
	I0617 11:50:26.690125  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Waiting to get IP...
	I0617 11:50:26.691055  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:26.691450  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | unable to find current IP address of domain kubernetes-upgrade-717156 in network mk-kubernetes-upgrade-717156
	I0617 11:50:26.691494  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | I0617 11:50:26.691432  160386 retry.go:31] will retry after 294.768688ms: waiting for machine to come up
	I0617 11:50:26.990060  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:26.990837  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | unable to find current IP address of domain kubernetes-upgrade-717156 in network mk-kubernetes-upgrade-717156
	I0617 11:50:26.990870  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | I0617 11:50:26.990801  160386 retry.go:31] will retry after 360.254017ms: waiting for machine to come up
	I0617 11:50:27.671548  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:27.672041  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | unable to find current IP address of domain kubernetes-upgrade-717156 in network mk-kubernetes-upgrade-717156
	I0617 11:50:27.672069  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | I0617 11:50:27.671983  160386 retry.go:31] will retry after 445.706503ms: waiting for machine to come up
	I0617 11:50:28.119845  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:28.120345  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | unable to find current IP address of domain kubernetes-upgrade-717156 in network mk-kubernetes-upgrade-717156
	I0617 11:50:28.120368  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | I0617 11:50:28.120301  160386 retry.go:31] will retry after 505.421203ms: waiting for machine to come up
	I0617 11:50:28.627809  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:28.628618  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | unable to find current IP address of domain kubernetes-upgrade-717156 in network mk-kubernetes-upgrade-717156
	I0617 11:50:28.628648  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | I0617 11:50:28.628565  160386 retry.go:31] will retry after 536.418306ms: waiting for machine to come up
	I0617 11:50:29.166363  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:29.167114  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | unable to find current IP address of domain kubernetes-upgrade-717156 in network mk-kubernetes-upgrade-717156
	I0617 11:50:29.167137  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | I0617 11:50:29.167055  160386 retry.go:31] will retry after 854.628632ms: waiting for machine to come up
	I0617 11:50:30.023810  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:30.024348  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | unable to find current IP address of domain kubernetes-upgrade-717156 in network mk-kubernetes-upgrade-717156
	I0617 11:50:30.024368  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | I0617 11:50:30.024310  160386 retry.go:31] will retry after 1.067704528s: waiting for machine to come up
	I0617 11:50:31.093555  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:31.093951  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | unable to find current IP address of domain kubernetes-upgrade-717156 in network mk-kubernetes-upgrade-717156
	I0617 11:50:31.093981  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | I0617 11:50:31.093904  160386 retry.go:31] will retry after 1.177707012s: waiting for machine to come up
	I0617 11:50:32.273593  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:32.274028  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | unable to find current IP address of domain kubernetes-upgrade-717156 in network mk-kubernetes-upgrade-717156
	I0617 11:50:32.274056  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | I0617 11:50:32.273985  160386 retry.go:31] will retry after 1.589403427s: waiting for machine to come up
	I0617 11:50:33.864781  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:33.865310  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | unable to find current IP address of domain kubernetes-upgrade-717156 in network mk-kubernetes-upgrade-717156
	I0617 11:50:33.865348  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | I0617 11:50:33.865237  160386 retry.go:31] will retry after 1.478677631s: waiting for machine to come up
	I0617 11:50:35.345137  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:35.345647  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | unable to find current IP address of domain kubernetes-upgrade-717156 in network mk-kubernetes-upgrade-717156
	I0617 11:50:35.345693  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | I0617 11:50:35.345592  160386 retry.go:31] will retry after 1.976839093s: waiting for machine to come up
	I0617 11:50:37.324683  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:37.325166  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | unable to find current IP address of domain kubernetes-upgrade-717156 in network mk-kubernetes-upgrade-717156
	I0617 11:50:37.325196  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | I0617 11:50:37.325121  160386 retry.go:31] will retry after 2.873408875s: waiting for machine to come up
	I0617 11:50:40.200491  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:40.200943  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | unable to find current IP address of domain kubernetes-upgrade-717156 in network mk-kubernetes-upgrade-717156
	I0617 11:50:40.200968  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | I0617 11:50:40.200877  160386 retry.go:31] will retry after 3.349753856s: waiting for machine to come up
	I0617 11:50:43.552577  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:43.552989  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | unable to find current IP address of domain kubernetes-upgrade-717156 in network mk-kubernetes-upgrade-717156
	I0617 11:50:43.553019  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | I0617 11:50:43.552944  160386 retry.go:31] will retry after 5.415817553s: waiting for machine to come up
	I0617 11:50:48.973821  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:48.974331  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has current primary IP address 192.168.50.236 and MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:48.974369  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Found IP for machine: 192.168.50.236
	I0617 11:50:48.974383  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Reserving static IP address...
	I0617 11:50:48.974769  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-717156", mac: "52:54:00:0c:c6:52", ip: "192.168.50.236"} in network mk-kubernetes-upgrade-717156
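(When the driver cannot find a matching host DHCP lease, as above, the lease table can be checked by hand on the host — a sketch, using the domain and network names from this log:

    virsh -c qemu:///system net-dhcp-leases mk-kubernetes-upgrade-717156
    virsh -c qemu:///system domifaddr kubernetes-upgrade-717156 --source lease
)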
	I0617 11:50:49.048549  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | Getting to WaitForSSH function...
	I0617 11:50:49.048604  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Reserved static IP address: 192.168.50.236
	I0617 11:50:49.048621  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Waiting for SSH to be available...
	I0617 11:50:49.051151  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:49.051552  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:c6:52", ip: ""} in network mk-kubernetes-upgrade-717156: {Iface:virbr1 ExpiryTime:2024-06-17 12:50:39 +0000 UTC Type:0 Mac:52:54:00:0c:c6:52 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0c:c6:52}
	I0617 11:50:49.051589  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined IP address 192.168.50.236 and MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:49.051754  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | Using SSH client type: external
	I0617 11:50:49.051776  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/kubernetes-upgrade-717156/id_rsa (-rw-------)
	I0617 11:50:49.051816  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.236 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/kubernetes-upgrade-717156/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 11:50:49.051847  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | About to run SSH command:
	I0617 11:50:49.051889  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | exit 0
	I0617 11:50:49.175420  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | SSH cmd err, output: <nil>: 
	I0617 11:50:49.175709  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) KVM machine creation complete!
	I0617 11:50:49.176070  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetConfigRaw
	I0617 11:50:49.176607  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .DriverName
	I0617 11:50:49.176794  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .DriverName
	I0617 11:50:49.176951  159952 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0617 11:50:49.176970  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetState
	I0617 11:50:49.178358  159952 main.go:141] libmachine: Detecting operating system of created instance...
	I0617 11:50:49.178372  159952 main.go:141] libmachine: Waiting for SSH to be available...
	I0617 11:50:49.178378  159952 main.go:141] libmachine: Getting to WaitForSSH function...
	I0617 11:50:49.178385  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHHostname
	I0617 11:50:49.180663  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:49.181106  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:c6:52", ip: ""} in network mk-kubernetes-upgrade-717156: {Iface:virbr1 ExpiryTime:2024-06-17 12:50:39 +0000 UTC Type:0 Mac:52:54:00:0c:c6:52 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:kubernetes-upgrade-717156 Clientid:01:52:54:00:0c:c6:52}
	I0617 11:50:49.181159  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined IP address 192.168.50.236 and MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:49.181325  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHPort
	I0617 11:50:49.181513  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHKeyPath
	I0617 11:50:49.181675  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHKeyPath
	I0617 11:50:49.181772  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHUsername
	I0617 11:50:49.181893  159952 main.go:141] libmachine: Using SSH client type: native
	I0617 11:50:49.182186  159952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I0617 11:50:49.182198  159952 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0617 11:50:49.283029  159952 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 11:50:49.283055  159952 main.go:141] libmachine: Detecting the provisioner...
	I0617 11:50:49.283063  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHHostname
	I0617 11:50:49.285939  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:49.286296  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:c6:52", ip: ""} in network mk-kubernetes-upgrade-717156: {Iface:virbr1 ExpiryTime:2024-06-17 12:50:39 +0000 UTC Type:0 Mac:52:54:00:0c:c6:52 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:kubernetes-upgrade-717156 Clientid:01:52:54:00:0c:c6:52}
	I0617 11:50:49.286329  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined IP address 192.168.50.236 and MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:49.286561  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHPort
	I0617 11:50:49.286766  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHKeyPath
	I0617 11:50:49.286939  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHKeyPath
	I0617 11:50:49.287120  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHUsername
	I0617 11:50:49.287281  159952 main.go:141] libmachine: Using SSH client type: native
	I0617 11:50:49.287534  159952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I0617 11:50:49.287558  159952 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0617 11:50:49.388516  159952 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0617 11:50:49.388631  159952 main.go:141] libmachine: found compatible host: buildroot
	I0617 11:50:49.388645  159952 main.go:141] libmachine: Provisioning with buildroot...
	I0617 11:50:49.388653  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetMachineName
	I0617 11:50:49.388887  159952 buildroot.go:166] provisioning hostname "kubernetes-upgrade-717156"
	I0617 11:50:49.388913  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetMachineName
	I0617 11:50:49.389129  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHHostname
	I0617 11:50:49.391879  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:49.392252  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:c6:52", ip: ""} in network mk-kubernetes-upgrade-717156: {Iface:virbr1 ExpiryTime:2024-06-17 12:50:39 +0000 UTC Type:0 Mac:52:54:00:0c:c6:52 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:kubernetes-upgrade-717156 Clientid:01:52:54:00:0c:c6:52}
	I0617 11:50:49.392284  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined IP address 192.168.50.236 and MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:49.392363  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHPort
	I0617 11:50:49.392550  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHKeyPath
	I0617 11:50:49.392698  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHKeyPath
	I0617 11:50:49.392850  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHUsername
	I0617 11:50:49.393033  159952 main.go:141] libmachine: Using SSH client type: native
	I0617 11:50:49.393200  159952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I0617 11:50:49.393214  159952 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-717156 && echo "kubernetes-upgrade-717156" | sudo tee /etc/hostname
	I0617 11:50:49.519873  159952 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-717156
	
	I0617 11:50:49.519910  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHHostname
	I0617 11:50:49.522576  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:49.522921  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:c6:52", ip: ""} in network mk-kubernetes-upgrade-717156: {Iface:virbr1 ExpiryTime:2024-06-17 12:50:39 +0000 UTC Type:0 Mac:52:54:00:0c:c6:52 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:kubernetes-upgrade-717156 Clientid:01:52:54:00:0c:c6:52}
	I0617 11:50:49.522952  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined IP address 192.168.50.236 and MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:49.523120  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHPort
	I0617 11:50:49.523313  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHKeyPath
	I0617 11:50:49.523535  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHKeyPath
	I0617 11:50:49.523690  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHUsername
	I0617 11:50:49.523879  159952 main.go:141] libmachine: Using SSH client type: native
	I0617 11:50:49.524041  159952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I0617 11:50:49.524058  159952 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-717156' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-717156/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-717156' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 11:50:49.632346  159952 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 11:50:49.632376  159952 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 11:50:49.632413  159952 buildroot.go:174] setting up certificates
	I0617 11:50:49.632425  159952 provision.go:84] configureAuth start
	I0617 11:50:49.632438  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetMachineName
	I0617 11:50:49.632781  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetIP
	I0617 11:50:49.635396  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:49.635780  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:c6:52", ip: ""} in network mk-kubernetes-upgrade-717156: {Iface:virbr1 ExpiryTime:2024-06-17 12:50:39 +0000 UTC Type:0 Mac:52:54:00:0c:c6:52 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:kubernetes-upgrade-717156 Clientid:01:52:54:00:0c:c6:52}
	I0617 11:50:49.635805  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined IP address 192.168.50.236 and MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:49.635968  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHHostname
	I0617 11:50:49.638061  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:49.638350  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:c6:52", ip: ""} in network mk-kubernetes-upgrade-717156: {Iface:virbr1 ExpiryTime:2024-06-17 12:50:39 +0000 UTC Type:0 Mac:52:54:00:0c:c6:52 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:kubernetes-upgrade-717156 Clientid:01:52:54:00:0c:c6:52}
	I0617 11:50:49.638377  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined IP address 192.168.50.236 and MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:49.638546  159952 provision.go:143] copyHostCerts
	I0617 11:50:49.638621  159952 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 11:50:49.638634  159952 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 11:50:49.638702  159952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 11:50:49.638856  159952 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 11:50:49.638867  159952 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 11:50:49.638909  159952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 11:50:49.638990  159952 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 11:50:49.639000  159952 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 11:50:49.639027  159952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 11:50:49.639090  159952 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-717156 san=[127.0.0.1 192.168.50.236 kubernetes-upgrade-717156 localhost minikube]
	I0617 11:50:49.734185  159952 provision.go:177] copyRemoteCerts
	I0617 11:50:49.734248  159952 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 11:50:49.734269  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHHostname
	I0617 11:50:49.736682  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:49.737017  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:c6:52", ip: ""} in network mk-kubernetes-upgrade-717156: {Iface:virbr1 ExpiryTime:2024-06-17 12:50:39 +0000 UTC Type:0 Mac:52:54:00:0c:c6:52 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:kubernetes-upgrade-717156 Clientid:01:52:54:00:0c:c6:52}
	I0617 11:50:49.737078  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined IP address 192.168.50.236 and MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:49.737193  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHPort
	I0617 11:50:49.737411  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHKeyPath
	I0617 11:50:49.737592  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHUsername
	I0617 11:50:49.737764  159952 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/kubernetes-upgrade-717156/id_rsa Username:docker}
	I0617 11:50:49.818446  159952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 11:50:49.843236  159952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0617 11:50:49.867864  159952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0617 11:50:49.891589  159952 provision.go:87] duration metric: took 259.148809ms to configureAuth
	I0617 11:50:49.891619  159952 buildroot.go:189] setting minikube options for container-runtime
	I0617 11:50:49.891767  159952 config.go:182] Loaded profile config "kubernetes-upgrade-717156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0617 11:50:49.891855  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHHostname
	I0617 11:50:49.894272  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:49.894595  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:c6:52", ip: ""} in network mk-kubernetes-upgrade-717156: {Iface:virbr1 ExpiryTime:2024-06-17 12:50:39 +0000 UTC Type:0 Mac:52:54:00:0c:c6:52 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:kubernetes-upgrade-717156 Clientid:01:52:54:00:0c:c6:52}
	I0617 11:50:49.894625  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined IP address 192.168.50.236 and MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:49.894776  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHPort
	I0617 11:50:49.894957  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHKeyPath
	I0617 11:50:49.895142  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHKeyPath
	I0617 11:50:49.895323  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHUsername
	I0617 11:50:49.895579  159952 main.go:141] libmachine: Using SSH client type: native
	I0617 11:50:49.895800  159952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I0617 11:50:49.895815  159952 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 11:50:50.155007  159952 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
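	The SSH command above drops a sysconfig file for CRI-O and restarts the service. A standalone sketch of that provisioning step, with the path and CIDR taken from the logged command (illustrative only, not minikube's actual code path):

	    sudo mkdir -p /etc/sysconfig
	    printf "%s\n" "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
	    sudo systemctl restart crio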
	I0617 11:50:50.155039  159952 main.go:141] libmachine: Checking connection to Docker...
	I0617 11:50:50.155049  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetURL
	I0617 11:50:50.156390  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | Using libvirt version 6000000
	I0617 11:50:50.158590  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:50.158910  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:c6:52", ip: ""} in network mk-kubernetes-upgrade-717156: {Iface:virbr1 ExpiryTime:2024-06-17 12:50:39 +0000 UTC Type:0 Mac:52:54:00:0c:c6:52 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:kubernetes-upgrade-717156 Clientid:01:52:54:00:0c:c6:52}
	I0617 11:50:50.158936  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined IP address 192.168.50.236 and MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:50.159081  159952 main.go:141] libmachine: Docker is up and running!
	I0617 11:50:50.159094  159952 main.go:141] libmachine: Reticulating splines...
	I0617 11:50:50.159102  159952 client.go:171] duration metric: took 25.176033517s to LocalClient.Create
	I0617 11:50:50.159122  159952 start.go:167] duration metric: took 25.176099713s to libmachine.API.Create "kubernetes-upgrade-717156"
	I0617 11:50:50.159133  159952 start.go:293] postStartSetup for "kubernetes-upgrade-717156" (driver="kvm2")
	I0617 11:50:50.159143  159952 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 11:50:50.159164  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .DriverName
	I0617 11:50:50.159370  159952 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 11:50:50.159393  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHHostname
	I0617 11:50:50.161217  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:50.161526  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:c6:52", ip: ""} in network mk-kubernetes-upgrade-717156: {Iface:virbr1 ExpiryTime:2024-06-17 12:50:39 +0000 UTC Type:0 Mac:52:54:00:0c:c6:52 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:kubernetes-upgrade-717156 Clientid:01:52:54:00:0c:c6:52}
	I0617 11:50:50.161559  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined IP address 192.168.50.236 and MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:50.161620  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHPort
	I0617 11:50:50.161812  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHKeyPath
	I0617 11:50:50.161980  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHUsername
	I0617 11:50:50.162147  159952 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/kubernetes-upgrade-717156/id_rsa Username:docker}
	I0617 11:50:50.241665  159952 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 11:50:50.245915  159952 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 11:50:50.245951  159952 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 11:50:50.246022  159952 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 11:50:50.246096  159952 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 11:50:50.246179  159952 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 11:50:50.255539  159952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 11:50:50.278183  159952 start.go:296] duration metric: took 119.036012ms for postStartSetup
	I0617 11:50:50.278228  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetConfigRaw
	I0617 11:50:50.278847  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetIP
	I0617 11:50:50.281422  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:50.281728  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:c6:52", ip: ""} in network mk-kubernetes-upgrade-717156: {Iface:virbr1 ExpiryTime:2024-06-17 12:50:39 +0000 UTC Type:0 Mac:52:54:00:0c:c6:52 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:kubernetes-upgrade-717156 Clientid:01:52:54:00:0c:c6:52}
	I0617 11:50:50.281750  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined IP address 192.168.50.236 and MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:50.282125  159952 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/config.json ...
	I0617 11:50:50.282401  159952 start.go:128] duration metric: took 25.32529405s to createHost
	I0617 11:50:50.282429  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHHostname
	I0617 11:50:50.284640  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:50.284894  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:c6:52", ip: ""} in network mk-kubernetes-upgrade-717156: {Iface:virbr1 ExpiryTime:2024-06-17 12:50:39 +0000 UTC Type:0 Mac:52:54:00:0c:c6:52 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:kubernetes-upgrade-717156 Clientid:01:52:54:00:0c:c6:52}
	I0617 11:50:50.284922  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined IP address 192.168.50.236 and MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:50.285117  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHPort
	I0617 11:50:50.285318  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHKeyPath
	I0617 11:50:50.285495  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHKeyPath
	I0617 11:50:50.285666  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHUsername
	I0617 11:50:50.285841  159952 main.go:141] libmachine: Using SSH client type: native
	I0617 11:50:50.286044  159952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I0617 11:50:50.286061  159952 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0617 11:50:50.388173  159952 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718625050.365235014
	
	I0617 11:50:50.388196  159952 fix.go:216] guest clock: 1718625050.365235014
	I0617 11:50:50.388203  159952 fix.go:229] Guest: 2024-06-17 11:50:50.365235014 +0000 UTC Remote: 2024-06-17 11:50:50.282416874 +0000 UTC m=+61.775681570 (delta=82.81814ms)
	I0617 11:50:50.388221  159952 fix.go:200] guest clock delta is within tolerance: 82.81814ms
	I0617 11:50:50.388227  159952 start.go:83] releasing machines lock for "kubernetes-upgrade-717156", held for 25.431322898s
	I0617 11:50:50.388251  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .DriverName
	I0617 11:50:50.388562  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetIP
	I0617 11:50:50.391313  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:50.391708  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:c6:52", ip: ""} in network mk-kubernetes-upgrade-717156: {Iface:virbr1 ExpiryTime:2024-06-17 12:50:39 +0000 UTC Type:0 Mac:52:54:00:0c:c6:52 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:kubernetes-upgrade-717156 Clientid:01:52:54:00:0c:c6:52}
	I0617 11:50:50.391739  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined IP address 192.168.50.236 and MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:50.391890  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .DriverName
	I0617 11:50:50.392549  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .DriverName
	I0617 11:50:50.392754  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .DriverName
	I0617 11:50:50.392882  159952 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 11:50:50.392932  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHHostname
	I0617 11:50:50.393022  159952 ssh_runner.go:195] Run: cat /version.json
	I0617 11:50:50.393048  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHHostname
	I0617 11:50:50.395642  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:50.395937  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:50.396015  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:c6:52", ip: ""} in network mk-kubernetes-upgrade-717156: {Iface:virbr1 ExpiryTime:2024-06-17 12:50:39 +0000 UTC Type:0 Mac:52:54:00:0c:c6:52 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:kubernetes-upgrade-717156 Clientid:01:52:54:00:0c:c6:52}
	I0617 11:50:50.396035  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined IP address 192.168.50.236 and MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:50.396149  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHPort
	I0617 11:50:50.396328  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHKeyPath
	I0617 11:50:50.396398  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:c6:52", ip: ""} in network mk-kubernetes-upgrade-717156: {Iface:virbr1 ExpiryTime:2024-06-17 12:50:39 +0000 UTC Type:0 Mac:52:54:00:0c:c6:52 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:kubernetes-upgrade-717156 Clientid:01:52:54:00:0c:c6:52}
	I0617 11:50:50.396440  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined IP address 192.168.50.236 and MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:50.396594  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHPort
	I0617 11:50:50.396635  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHUsername
	I0617 11:50:50.396755  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHKeyPath
	I0617 11:50:50.396832  159952 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/kubernetes-upgrade-717156/id_rsa Username:docker}
	I0617 11:50:50.396892  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHUsername
	I0617 11:50:50.397059  159952 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/kubernetes-upgrade-717156/id_rsa Username:docker}
	I0617 11:50:50.501766  159952 ssh_runner.go:195] Run: systemctl --version
	I0617 11:50:50.510722  159952 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 11:50:50.687907  159952 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 11:50:50.694339  159952 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 11:50:50.694436  159952 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 11:50:50.711617  159952 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 11:50:50.711647  159952 start.go:494] detecting cgroup driver to use...
	I0617 11:50:50.711739  159952 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 11:50:50.728236  159952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 11:50:50.742971  159952 docker.go:217] disabling cri-docker service (if available) ...
	I0617 11:50:50.743041  159952 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 11:50:50.756919  159952 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 11:50:50.771749  159952 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 11:50:50.901442  159952 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 11:50:51.054169  159952 docker.go:233] disabling docker service ...
	I0617 11:50:51.054240  159952 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 11:50:51.069917  159952 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 11:50:51.082993  159952 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 11:50:51.208041  159952 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 11:50:51.319440  159952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
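	Before CRI-O is configured, the runner shuts off the Docker-based runtimes; reconstructed from the commands above (flags as logged), the sequence is roughly:

	    # Stop and mask cri-dockerd and dockerd so CRI-O is the only runtime left.
	    sudo systemctl stop -f cri-docker.socket
	    sudo systemctl stop -f cri-docker.service
	    sudo systemctl disable cri-docker.socket
	    sudo systemctl mask cri-docker.service
	    sudo systemctl stop -f docker.socket
	    sudo systemctl stop -f docker.service
	    sudo systemctl disable docker.socket
	    sudo systemctl mask docker.service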
	I0617 11:50:51.334348  159952 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 11:50:51.352691  159952 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0617 11:50:51.352741  159952 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:50:51.363229  159952 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 11:50:51.363289  159952 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:50:51.373620  159952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:50:51.383734  159952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:50:51.393846  159952 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 11:50:51.404216  159952 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 11:50:51.413130  159952 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 11:50:51.413180  159952 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 11:50:51.425330  159952 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 11:50:51.435035  159952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:50:51.545058  159952 ssh_runner.go:195] Run: sudo systemctl restart crio
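	The restart above completes the runtime configuration. Consolidating the scattered commands logged above into one sketch (same paths and sed expressions; not minikube's actual implementation):

	    # Point crictl at the CRI-O socket, pin the pause image, switch to cgroupfs,
	    # enable bridge netfilter and IP forwarding, then restart CRI-O.
	    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	    sudo modprobe br_netfilter
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	    sudo systemctl daemon-reload && sudo systemctl restart crio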
	I0617 11:50:51.702129  159952 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 11:50:51.702197  159952 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 11:50:51.707172  159952 start.go:562] Will wait 60s for crictl version
	I0617 11:50:51.707234  159952 ssh_runner.go:195] Run: which crictl
	I0617 11:50:51.711304  159952 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 11:50:51.753464  159952 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 11:50:51.753544  159952 ssh_runner.go:195] Run: crio --version
	I0617 11:50:51.787221  159952 ssh_runner.go:195] Run: crio --version
	I0617 11:50:51.817178  159952 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0617 11:50:51.819325  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetIP
	I0617 11:50:51.824320  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:51.824917  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:c6:52", ip: ""} in network mk-kubernetes-upgrade-717156: {Iface:virbr1 ExpiryTime:2024-06-17 12:50:39 +0000 UTC Type:0 Mac:52:54:00:0c:c6:52 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:kubernetes-upgrade-717156 Clientid:01:52:54:00:0c:c6:52}
	I0617 11:50:51.824947  159952 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined IP address 192.168.50.236 and MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:50:51.825273  159952 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0617 11:50:51.830973  159952 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 11:50:51.843993  159952 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-717156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.20.0 ClusterName:kubernetes-upgrade-717156 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.236 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 11:50:51.844121  159952 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0617 11:50:51.844176  159952 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 11:50:51.878251  159952 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0617 11:50:51.878321  159952 ssh_runner.go:195] Run: which lz4
	I0617 11:50:51.882721  159952 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0617 11:50:51.887568  159952 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0617 11:50:51.887610  159952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0617 11:50:53.725926  159952 crio.go:462] duration metric: took 1.843236824s to copy over tarball
	I0617 11:50:53.726024  159952 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0617 11:50:56.325082  159952 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.599010076s)
	I0617 11:50:56.325126  159952 crio.go:469] duration metric: took 2.599168798s to extract the tarball
	I0617 11:50:56.325137  159952 ssh_runner.go:146] rm: /preloaded.tar.lz4
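	The preload path above copies the ~473 MB tarball to the guest and unpacks it into /var. On the guest, the equivalent manual steps (taken from the tar invocation logged above) are:

	    # Unpack the preloaded image tarball into /var to seed CRI-O's image store,
	    # then remove it, as the runner does after extraction.
	    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	    sudo rm -f /preloaded.tar.lz4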
	I0617 11:50:56.369133  159952 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 11:50:56.414735  159952 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0617 11:50:56.414767  159952 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0617 11:50:56.414854  159952 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 11:50:56.414852  159952 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0617 11:50:56.414952  159952 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0617 11:50:56.414972  159952 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0617 11:50:56.414923  159952 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 11:50:56.415209  159952 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0617 11:50:56.414852  159952 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0617 11:50:56.414946  159952 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0617 11:50:56.417102  159952 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0617 11:50:56.417160  159952 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0617 11:50:56.417185  159952 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0617 11:50:56.417224  159952 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 11:50:56.417254  159952 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0617 11:50:56.417257  159952 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0617 11:50:56.417346  159952 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0617 11:50:56.417568  159952 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 11:50:56.575328  159952 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0617 11:50:56.575687  159952 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0617 11:50:56.585154  159952 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0617 11:50:56.586924  159952 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0617 11:50:56.591070  159952 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0617 11:50:56.596286  159952 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 11:50:56.663158  159952 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0617 11:50:56.663230  159952 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0617 11:50:56.663282  159952 ssh_runner.go:195] Run: which crictl
	I0617 11:50:56.696191  159952 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0617 11:50:56.696247  159952 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0617 11:50:56.696297  159952 ssh_runner.go:195] Run: which crictl
	I0617 11:50:56.729466  159952 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 11:50:56.739565  159952 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0617 11:50:56.739595  159952 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0617 11:50:56.739626  159952 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0617 11:50:56.739648  159952 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0617 11:50:56.739674  159952 ssh_runner.go:195] Run: which crictl
	I0617 11:50:56.739691  159952 ssh_runner.go:195] Run: which crictl
	I0617 11:50:56.739735  159952 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0617 11:50:56.739765  159952 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0617 11:50:56.739807  159952 ssh_runner.go:195] Run: which crictl
	I0617 11:50:56.739852  159952 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0617 11:50:56.739858  159952 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0617 11:50:56.739883  159952 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 11:50:56.739896  159952 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0617 11:50:56.739920  159952 ssh_runner.go:195] Run: which crictl
	I0617 11:50:56.750092  159952 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0617 11:50:56.948626  159952 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0617 11:50:56.948680  159952 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0617 11:50:56.948625  159952 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 11:50:56.948776  159952 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0617 11:50:56.948809  159952 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0617 11:50:56.948867  159952 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0617 11:50:56.948894  159952 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0617 11:50:56.948931  159952 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0617 11:50:56.948968  159952 ssh_runner.go:195] Run: which crictl
	I0617 11:50:57.043338  159952 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0617 11:50:57.043721  159952 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0617 11:50:57.047023  159952 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0617 11:50:57.047026  159952 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0617 11:50:57.047085  159952 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0617 11:50:57.084849  159952 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0617 11:50:57.084927  159952 cache_images.go:92] duration metric: took 670.145915ms to LoadCachedImages
	W0617 11:50:57.084992  159952 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0617 11:50:57.085011  159952 kubeadm.go:928] updating node { 192.168.50.236 8443 v1.20.0 crio true true} ...
	I0617 11:50:57.085122  159952 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-717156 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.236
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-717156 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
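	The unit text above is what gets written a few lines below as the 433-byte kubelet drop-in. A sketch of the equivalent manual step (drop-in path taken from the scp line that follows; illustrative only):

	    sudo mkdir -p /etc/systemd/system/kubelet.service.d
	    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
	    [Unit]
	    Wants=crio.service

	    [Service]
	    ExecStart=
	    ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-717156 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.236

	    [Install]
	    EOF
	    sudo systemctl daemon-reload && sudo systemctl start kubelet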
	I0617 11:50:57.085206  159952 ssh_runner.go:195] Run: crio config
	I0617 11:50:57.135044  159952 cni.go:84] Creating CNI manager for ""
	I0617 11:50:57.135079  159952 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 11:50:57.135097  159952 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 11:50:57.135127  159952 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.236 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-717156 NodeName:kubernetes-upgrade-717156 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.236"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.236 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0617 11:50:57.135332  159952 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.236
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-717156"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.236
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.236"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
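	The generated config above is staged as kubeadm.yaml.new, promoted, and then fed to kubeadm init (see the Start line further below). In shell terms, roughly:

	    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml \
	      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem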
	I0617 11:50:57.135404  159952 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0617 11:50:57.146615  159952 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 11:50:57.146688  159952 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 11:50:57.156919  159952 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0617 11:50:57.174250  159952 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 11:50:57.191690  159952 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0617 11:50:57.209740  159952 ssh_runner.go:195] Run: grep 192.168.50.236	control-plane.minikube.internal$ /etc/hosts
	I0617 11:50:57.213683  159952 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.236	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 11:50:57.226475  159952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:50:57.372464  159952 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 11:50:57.395169  159952 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156 for IP: 192.168.50.236
	I0617 11:50:57.395196  159952 certs.go:194] generating shared ca certs ...
	I0617 11:50:57.395217  159952 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:50:57.395400  159952 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 11:50:57.395453  159952 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 11:50:57.395497  159952 certs.go:256] generating profile certs ...
	I0617 11:50:57.395573  159952 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/client.key
	I0617 11:50:57.395593  159952 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/client.crt with IP's: []
	I0617 11:50:57.655125  159952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/client.crt ...
	I0617 11:50:57.655156  159952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/client.crt: {Name:mkb32f09084600d31336f682dfce4c251a3b192b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:50:57.655324  159952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/client.key ...
	I0617 11:50:57.655337  159952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/client.key: {Name:mk0f37bffc164571c123f1d982258bd18e66e6d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:50:57.655411  159952 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/apiserver.key.f24592da
	I0617 11:50:57.655433  159952 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/apiserver.crt.f24592da with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.236]
	I0617 11:50:57.775401  159952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/apiserver.crt.f24592da ...
	I0617 11:50:57.775427  159952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/apiserver.crt.f24592da: {Name:mk7de8196a6ed6a16bc7bce32c320fb1fcb0c30c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:50:57.775629  159952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/apiserver.key.f24592da ...
	I0617 11:50:57.775648  159952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/apiserver.key.f24592da: {Name:mk1c9ecb5f5b5dc66659d02820ec7d1d37e352b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:50:57.775766  159952 certs.go:381] copying /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/apiserver.crt.f24592da -> /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/apiserver.crt
	I0617 11:50:57.775888  159952 certs.go:385] copying /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/apiserver.key.f24592da -> /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/apiserver.key
	I0617 11:50:57.775954  159952 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/proxy-client.key
	I0617 11:50:57.775978  159952 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/proxy-client.crt with IP's: []
	I0617 11:50:58.023056  159952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/proxy-client.crt ...
	I0617 11:50:58.023094  159952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/proxy-client.crt: {Name:mk609161ba460e88aa0da8df985f42d8ad4a4d41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:50:58.023285  159952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/proxy-client.key ...
	I0617 11:50:58.023304  159952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/proxy-client.key: {Name:mka71a724aa96b38fb4b11ab39affa9d060147f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:50:58.023519  159952 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 11:50:58.023568  159952 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 11:50:58.023590  159952 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 11:50:58.023635  159952 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 11:50:58.023673  159952 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 11:50:58.023713  159952 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 11:50:58.023775  159952 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 11:50:58.024438  159952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 11:50:58.055415  159952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 11:50:58.080131  159952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 11:50:58.113962  159952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 11:50:58.139623  159952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0617 11:50:58.165104  159952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0617 11:50:58.192618  159952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 11:50:58.231082  159952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0617 11:50:58.262540  159952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 11:50:58.305849  159952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 11:50:58.343659  159952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 11:50:58.370555  159952 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 11:50:58.393001  159952 ssh_runner.go:195] Run: openssl version
	I0617 11:50:58.401402  159952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 11:50:58.417932  159952 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:50:58.422865  159952 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:50:58.422955  159952 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:50:58.429272  159952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 11:50:58.441286  159952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 11:50:58.454965  159952 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 11:50:58.461143  159952 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 11:50:58.461218  159952 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 11:50:58.467943  159952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
	I0617 11:50:58.480581  159952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 11:50:58.492481  159952 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 11:50:58.497229  159952 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 11:50:58.497300  159952 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 11:50:58.503350  159952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
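	The openssl/ln pairs above implement OpenSSL's hashed-directory lookup: a CA in /etc/ssl/certs is found via a <subject-hash>.0 symlink. A minimal sketch using the minikube CA as the example (hash b5213941 matches the logged symlink name):

	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"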
	I0617 11:50:58.515057  159952 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 11:50:58.519398  159952 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0617 11:50:58.519478  159952 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-717156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.20.0 ClusterName:kubernetes-upgrade-717156 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.236 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:50:58.519582  159952 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 11:50:58.519638  159952 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 11:50:58.560745  159952 cri.go:89] found id: ""
	I0617 11:50:58.721341  159952 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0617 11:50:58.735173  159952 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 11:50:58.747482  159952 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 11:50:58.758282  159952 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 11:50:58.758312  159952 kubeadm.go:156] found existing configuration files:
	
	I0617 11:50:58.758372  159952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 11:50:58.769816  159952 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 11:50:58.769891  159952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 11:50:58.780139  159952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 11:50:58.789736  159952 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 11:50:58.789799  159952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 11:50:58.800039  159952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 11:50:58.810488  159952 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 11:50:58.810552  159952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 11:50:58.823209  159952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 11:50:58.833416  159952 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 11:50:58.833496  159952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
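The stale-config cleanup above greps each kubeconfig for https://control-plane.minikube.internal:8443 and, when the check exits non-zero (here status 2, because the files do not exist yet), removes the file before running `kubeadm init`. A minimal Go sketch of that pattern, purely illustrative and not minikube's actual implementation:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // cleanupStaleKubeconfigs mirrors the loop above: grep each kubeconfig for
    // the expected control-plane endpoint and remove the file when the check
    // fails (grep exits 1 when the pattern is absent, 2 when the file is missing).
    func cleanupStaleKubeconfigs(endpoint string, files []string) {
    	for _, f := range files {
    		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
    			fmt.Printf("%q not found in %s, removing\n", endpoint, f)
    			_ = exec.Command("sudo", "rm", "-f", f).Run()
    		}
    	}
    }

    func main() {
    	cleanupStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }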
	I0617 11:50:58.844457  159952 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0617 11:50:58.991491  159952 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0617 11:50:58.991584  159952 kubeadm.go:309] [preflight] Running pre-flight checks
	I0617 11:50:59.201432  159952 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0617 11:50:59.201602  159952 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0617 11:50:59.201753  159952 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0617 11:50:59.422590  159952 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0617 11:50:59.424596  159952 out.go:204]   - Generating certificates and keys ...
	I0617 11:50:59.428852  159952 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0617 11:50:59.428941  159952 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0617 11:50:59.745995  159952 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0617 11:50:59.894041  159952 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0617 11:50:59.966153  159952 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0617 11:51:00.541214  159952 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0617 11:51:00.899518  159952 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0617 11:51:00.899937  159952 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-717156 localhost] and IPs [192.168.50.236 127.0.0.1 ::1]
	I0617 11:51:01.249553  159952 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0617 11:51:01.249764  159952 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-717156 localhost] and IPs [192.168.50.236 127.0.0.1 ::1]
	I0617 11:51:01.373681  159952 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0617 11:51:01.711906  159952 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0617 11:51:01.861448  159952 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0617 11:51:01.862443  159952 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0617 11:51:02.025985  159952 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0617 11:51:02.173949  159952 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0617 11:51:02.378371  159952 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0617 11:51:02.490834  159952 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0617 11:51:02.511331  159952 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0617 11:51:02.512640  159952 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0617 11:51:02.512780  159952 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0617 11:51:02.665486  159952 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0617 11:51:02.667406  159952 out.go:204]   - Booting up control plane ...
	I0617 11:51:02.667552  159952 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0617 11:51:02.680492  159952 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0617 11:51:02.682573  159952 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0617 11:51:02.684120  159952 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0617 11:51:02.688896  159952 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0617 11:51:42.684038  159952 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0617 11:51:42.684751  159952 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 11:51:42.684999  159952 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 11:51:47.685199  159952 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 11:51:47.685503  159952 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 11:51:57.684573  159952 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 11:51:57.684873  159952 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 11:52:17.684503  159952 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 11:52:17.684756  159952 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 11:52:57.685946  159952 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 11:52:57.686682  159952 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 11:52:57.686719  159952 kubeadm.go:309] 
	I0617 11:52:57.686799  159952 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0617 11:52:57.686859  159952 kubeadm.go:309] 		timed out waiting for the condition
	I0617 11:52:57.686887  159952 kubeadm.go:309] 
	I0617 11:52:57.686942  159952 kubeadm.go:309] 	This error is likely caused by:
	I0617 11:52:57.686997  159952 kubeadm.go:309] 		- The kubelet is not running
	I0617 11:52:57.687140  159952 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0617 11:52:57.687150  159952 kubeadm.go:309] 
	I0617 11:52:57.687279  159952 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0617 11:52:57.687322  159952 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0617 11:52:57.687362  159952 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0617 11:52:57.687372  159952 kubeadm.go:309] 
	I0617 11:52:57.687514  159952 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0617 11:52:57.687623  159952 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0617 11:52:57.687638  159952 kubeadm.go:309] 
	I0617 11:52:57.687768  159952 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0617 11:52:57.687894  159952 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0617 11:52:57.687991  159952 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0617 11:52:57.688091  159952 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0617 11:52:57.688101  159952 kubeadm.go:309] 
	I0617 11:52:57.688406  159952 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0617 11:52:57.688524  159952 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0617 11:52:57.688616  159952 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
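The repeated [kubelet-check] failures above come from kubeadm polling the kubelet's health endpoint on 127.0.0.1:10248 and getting connection refused because the kubelet never came up with this configuration. An equivalent probe, written as a small illustrative Go program rather than kubeadm's own code:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: 5 * time.Second}
    	// Same endpoint the [kubelet-check] lines poll; with the kubelet down
    	// this fails with "connect: connection refused", exactly as in the log.
    	resp, err := client.Get("http://localhost:10248/healthz")
    	if err != nil {
    		fmt.Println("kubelet healthz check failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("kubelet healthz status:", resp.Status)
    }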
	W0617 11:52:57.688777  159952 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-717156 localhost] and IPs [192.168.50.236 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-717156 localhost] and IPs [192.168.50.236 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0617 11:52:57.688850  159952 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0617 11:52:58.229757  159952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:52:58.244173  159952 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 11:52:58.253985  159952 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 11:52:58.254002  159952 kubeadm.go:156] found existing configuration files:
	
	I0617 11:52:58.254053  159952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 11:52:58.263432  159952 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 11:52:58.263503  159952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 11:52:58.272888  159952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 11:52:58.281615  159952 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 11:52:58.281657  159952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 11:52:58.291023  159952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 11:52:58.300142  159952 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 11:52:58.300188  159952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 11:52:58.309675  159952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 11:52:58.319370  159952 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 11:52:58.319422  159952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 11:52:58.329462  159952 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0617 11:52:58.394406  159952 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0617 11:52:58.394467  159952 kubeadm.go:309] [preflight] Running pre-flight checks
	I0617 11:52:58.533329  159952 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0617 11:52:58.533497  159952 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0617 11:52:58.533637  159952 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0617 11:52:58.714544  159952 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0617 11:52:58.716857  159952 out.go:204]   - Generating certificates and keys ...
	I0617 11:52:58.716959  159952 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0617 11:52:58.717075  159952 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0617 11:52:58.717192  159952 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0617 11:52:58.717293  159952 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0617 11:52:58.717411  159952 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0617 11:52:58.717490  159952 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0617 11:52:58.717662  159952 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0617 11:52:58.718146  159952 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0617 11:52:58.718488  159952 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0617 11:52:58.718766  159952 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0617 11:52:58.718867  159952 kubeadm.go:309] [certs] Using the existing "sa" key
	I0617 11:52:58.718949  159952 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0617 11:52:58.992018  159952 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0617 11:52:59.249486  159952 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0617 11:52:59.315998  159952 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0617 11:52:59.403240  159952 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0617 11:52:59.420567  159952 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0617 11:52:59.420858  159952 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0617 11:52:59.420924  159952 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0617 11:52:59.561497  159952 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0617 11:52:59.563398  159952 out.go:204]   - Booting up control plane ...
	I0617 11:52:59.563533  159952 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0617 11:52:59.570042  159952 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0617 11:52:59.571898  159952 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0617 11:52:59.572834  159952 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0617 11:52:59.575070  159952 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0617 11:53:39.577313  159952 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0617 11:53:39.577836  159952 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 11:53:39.578057  159952 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 11:53:44.578806  159952 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 11:53:44.579136  159952 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 11:53:54.579867  159952 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 11:53:54.580150  159952 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 11:54:14.579328  159952 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 11:54:14.579578  159952 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 11:54:54.579420  159952 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 11:54:54.579717  159952 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 11:54:54.579747  159952 kubeadm.go:309] 
	I0617 11:54:54.579803  159952 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0617 11:54:54.579857  159952 kubeadm.go:309] 		timed out waiting for the condition
	I0617 11:54:54.579871  159952 kubeadm.go:309] 
	I0617 11:54:54.579915  159952 kubeadm.go:309] 	This error is likely caused by:
	I0617 11:54:54.579966  159952 kubeadm.go:309] 		- The kubelet is not running
	I0617 11:54:54.580098  159952 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0617 11:54:54.580106  159952 kubeadm.go:309] 
	I0617 11:54:54.580248  159952 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0617 11:54:54.580311  159952 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0617 11:54:54.580367  159952 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0617 11:54:54.580376  159952 kubeadm.go:309] 
	I0617 11:54:54.580483  159952 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0617 11:54:54.580581  159952 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0617 11:54:54.580590  159952 kubeadm.go:309] 
	I0617 11:54:54.580703  159952 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0617 11:54:54.580811  159952 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0617 11:54:54.580905  159952 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0617 11:54:54.580997  159952 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0617 11:54:54.581013  159952 kubeadm.go:309] 
	I0617 11:54:54.581781  159952 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0617 11:54:54.581896  159952 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0617 11:54:54.581999  159952 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0617 11:54:54.582100  159952 kubeadm.go:393] duration metric: took 3m56.062644014s to StartCluster
	I0617 11:54:54.582157  159952 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 11:54:54.582216  159952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 11:54:54.625492  159952 cri.go:89] found id: ""
	I0617 11:54:54.625522  159952 logs.go:276] 0 containers: []
	W0617 11:54:54.625531  159952 logs.go:278] No container was found matching "kube-apiserver"
	I0617 11:54:54.625537  159952 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 11:54:54.625602  159952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 11:54:54.659509  159952 cri.go:89] found id: ""
	I0617 11:54:54.659539  159952 logs.go:276] 0 containers: []
	W0617 11:54:54.659546  159952 logs.go:278] No container was found matching "etcd"
	I0617 11:54:54.659551  159952 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 11:54:54.659612  159952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 11:54:54.709707  159952 cri.go:89] found id: ""
	I0617 11:54:54.709741  159952 logs.go:276] 0 containers: []
	W0617 11:54:54.709758  159952 logs.go:278] No container was found matching "coredns"
	I0617 11:54:54.709767  159952 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 11:54:54.709823  159952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 11:54:54.745782  159952 cri.go:89] found id: ""
	I0617 11:54:54.745817  159952 logs.go:276] 0 containers: []
	W0617 11:54:54.745826  159952 logs.go:278] No container was found matching "kube-scheduler"
	I0617 11:54:54.745835  159952 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 11:54:54.745894  159952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 11:54:54.781025  159952 cri.go:89] found id: ""
	I0617 11:54:54.781055  159952 logs.go:276] 0 containers: []
	W0617 11:54:54.781069  159952 logs.go:278] No container was found matching "kube-proxy"
	I0617 11:54:54.781075  159952 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 11:54:54.781136  159952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 11:54:54.815363  159952 cri.go:89] found id: ""
	I0617 11:54:54.815394  159952 logs.go:276] 0 containers: []
	W0617 11:54:54.815404  159952 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 11:54:54.815410  159952 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 11:54:54.815478  159952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 11:54:54.848172  159952 cri.go:89] found id: ""
	I0617 11:54:54.848201  159952 logs.go:276] 0 containers: []
	W0617 11:54:54.848210  159952 logs.go:278] No container was found matching "kindnet"
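The post-mortem above queries crictl for every control-plane component and finds no containers at all, consistent with the kubelet never launching any static pods. A hypothetical Go helper that performs the same kind of lookup (names and structure are illustrative, not minikube's cri.go):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainerIDs shells out to crictl the way the log above does and
    // returns the container IDs (if any) for a given component name.
    func listContainerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
    		ids, err := listContainerIDs(c)
    		if err != nil {
    			fmt.Println(c, "lookup failed:", err)
    			continue
    		}
    		// An empty result matches the `found id: ""` / "0 containers" entries above.
    		fmt.Printf("%s: %d container(s)\n", c, len(ids))
    	}
    }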
	I0617 11:54:54.848220  159952 logs.go:123] Gathering logs for kubelet ...
	I0617 11:54:54.848235  159952 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 11:54:54.898833  159952 logs.go:123] Gathering logs for dmesg ...
	I0617 11:54:54.898871  159952 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 11:54:54.912548  159952 logs.go:123] Gathering logs for describe nodes ...
	I0617 11:54:54.912575  159952 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 11:54:55.019722  159952 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
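The failed `kubectl describe nodes` above is a secondary symptom: with no apiserver container running, nothing is listening on localhost:8443, so the connection is refused. A quick, illustrative reachability check in Go:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// The endpoint kubectl tried to reach; with the control plane down the
    	// dial fails, matching "connection to the server localhost:8443 was refused".
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 3*time.Second)
    	if err != nil {
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is open")
    }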
	I0617 11:54:55.019749  159952 logs.go:123] Gathering logs for CRI-O ...
	I0617 11:54:55.019788  159952 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 11:54:55.118678  159952 logs.go:123] Gathering logs for container status ...
	I0617 11:54:55.118722  159952 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0617 11:54:55.158793  159952 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0617 11:54:55.158836  159952 out.go:239] * 
	W0617 11:54:55.158900  159952 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0617 11:54:55.158923  159952 out.go:239] * 
	W0617 11:54:55.159848  159952 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 11:54:55.162997  159952 out.go:177] 
	W0617 11:54:55.164150  159952 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0617 11:54:55.164232  159952 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0617 11:54:55.164251  159952 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0617 11:54:55.165852  159952 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-717156 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
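The failure above boils down to the kubelet never becoming healthy under Kubernetes v1.20.0, so kubeadm times out waiting for the control plane. As a rough sketch (not part of the test run), the diagnostic steps the log itself recommends, followed by the retry that minikube suggests, would look like this; the profile name and flags are taken from this run:

	# on the minikube VM: check whether the kubelet is running and why it failed
	systemctl status kubelet
	journalctl -xeu kubelet
	# list control-plane containers through the CRI-O socket and inspect a failing one
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	# on the host: retry the start with the cgroup-driver hint printed above
	out/minikube-linux-amd64 start -p kubernetes-upgrade-717156 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd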
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-717156
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-717156: (6.302601651s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-717156 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-717156 status --format={{.Host}}: exit status 7 (62.843499ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
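For reference, the stop-and-verify sequence exercised here is reproducible by hand with the same binary and profile; a minimal sketch based on the commands above:

	# stop the profile, then confirm the host state before restarting with a newer Kubernetes version
	out/minikube-linux-amd64 stop -p kubernetes-upgrade-717156
	out/minikube-linux-amd64 -p kubernetes-upgrade-717156 status --format={{.Host}}
	# after a stop this prints "Stopped" and exits with status 7, which the test tolerates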
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-717156 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-717156 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (41.09947163s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-717156 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-717156 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-717156 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (89.793655ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-717156] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19084
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-717156
	    minikube start -p kubernetes-upgrade-717156 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7171562 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.1, by running:
	    
	    minikube start -p kubernetes-upgrade-717156 --kubernetes-version=v1.30.1
	    

                                                
                                                
** /stderr **
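The downgrade refusal is the expected outcome; minikube lists three ways forward in the output above. A sketch of the first option (recreate the cluster at the old version), using the suggested commands verbatim; the test instead takes the third option and restarts the existing cluster at v1.30.1 in the next step:

	# option 1 from the suggestion block: recreate the cluster at v1.20.0
	minikube delete -p kubernetes-upgrade-717156
	minikube start -p kubernetes-upgrade-717156 --kubernetes-version=v1.20.0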
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-717156 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-717156 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (13.871842166s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-06-17 11:55:56.715139689 +0000 UTC m=+4302.558697838
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-717156 -n kubernetes-upgrade-717156
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-717156 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-717156 logs -n 25: (1.179442457s)
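The post-mortem grabs the last 25 lines of the cluster log; the equivalent manual invocations (including the file-based form suggested in the warning box earlier) would be roughly:

	out/minikube-linux-amd64 -p kubernetes-upgrade-717156 logs -n 25
	# or capture the full log to a file for attaching to an issue
	out/minikube-linux-amd64 -p kubernetes-upgrade-717156 logs --file=logs.txt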
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | force-systemd-flag-855883 ssh cat                     | force-systemd-flag-855883 | jenkins | v1.33.1 | 17 Jun 24 11:50 UTC | 17 Jun 24 11:50 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf                    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-855883                          | force-systemd-flag-855883 | jenkins | v1.33.1 | 17 Jun 24 11:50 UTC | 17 Jun 24 11:50 UTC |
	| start   | -p stopped-upgrade-066761                             | minikube                  | jenkins | v1.26.0 | 17 Jun 24 11:50 UTC | 17 Jun 24 11:51 UTC |
	|         | --memory=2200 --vm-driver=kvm2                        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-846787 sudo                           | NoKubernetes-846787       | jenkins | v1.33.1 | 17 Jun 24 11:50 UTC |                     |
	|         | systemctl is-active --quiet                           |                           |         |         |                     |                     |
	|         | service kubelet                                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-846787                                | NoKubernetes-846787       | jenkins | v1.33.1 | 17 Jun 24 11:50 UTC | 17 Jun 24 11:50 UTC |
	| start   | -p cert-options-212761                                | cert-options-212761       | jenkins | v1.33.1 | 17 Jun 24 11:50 UTC | 17 Jun 24 11:51 UTC |
	|         | --memory=2048                                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-066761 stop                           | minikube                  | jenkins | v1.26.0 | 17 Jun 24 11:51 UTC | 17 Jun 24 11:51 UTC |
	| start   | -p stopped-upgrade-066761                             | stopped-upgrade-066761    | jenkins | v1.33.1 | 17 Jun 24 11:51 UTC | 17 Jun 24 11:52 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-514753                             | cert-expiration-514753    | jenkins | v1.33.1 | 17 Jun 24 11:51 UTC | 17 Jun 24 11:52 UTC |
	|         | --memory=2048                                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                               |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| ssh     | cert-options-212761 ssh                               | cert-options-212761       | jenkins | v1.33.1 | 17 Jun 24 11:51 UTC | 17 Jun 24 11:51 UTC |
	|         | openssl x509 -text -noout -in                         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                 |                           |         |         |                     |                     |
	| ssh     | -p cert-options-212761 -- sudo                        | cert-options-212761       | jenkins | v1.33.1 | 17 Jun 24 11:51 UTC | 17 Jun 24 11:51 UTC |
	|         | cat /etc/kubernetes/admin.conf                        |                           |         |         |                     |                     |
	| delete  | -p cert-options-212761                                | cert-options-212761       | jenkins | v1.33.1 | 17 Jun 24 11:51 UTC | 17 Jun 24 11:51 UTC |
	| start   | -p old-k8s-version-003661                             | old-k8s-version-003661    | jenkins | v1.33.1 | 17 Jun 24 11:51 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --kvm-network=default                                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                               |                           |         |         |                     |                     |
	|         | --keep-context=false                                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-066761                             | stopped-upgrade-066761    | jenkins | v1.33.1 | 17 Jun 24 11:52 UTC | 17 Jun 24 11:52 UTC |
	| start   | -p no-preload-152830                                  | no-preload-152830         | jenkins | v1.33.1 | 17 Jun 24 11:52 UTC | 17 Jun 24 11:53 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                          |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-514753                             | cert-expiration-514753    | jenkins | v1.33.1 | 17 Jun 24 11:52 UTC | 17 Jun 24 11:52 UTC |
	| start   | -p embed-certs-136195                                 | embed-certs-136195        | jenkins | v1.33.1 | 17 Jun 24 11:52 UTC | 17 Jun 24 11:54 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                           |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                          |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-152830            | no-preload-152830         | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC | 17 Jun 24 11:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                           |         |         |                     |                     |
	| stop    | -p no-preload-152830                                  | no-preload-152830         | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC |                     |
	|         | --alsologtostderr -v=3                                |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-136195           | embed-certs-136195        | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC | 17 Jun 24 11:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                           |         |         |                     |                     |
	| stop    | -p embed-certs-136195                                 | embed-certs-136195        | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC |                     |
	|         | --alsologtostderr -v=3                                |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-717156                          | kubernetes-upgrade-717156 | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC | 17 Jun 24 11:55 UTC |
	| start   | -p kubernetes-upgrade-717156                          | kubernetes-upgrade-717156 | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:55 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-717156                          | kubernetes-upgrade-717156 | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-717156                          | kubernetes-upgrade-717156 | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:55 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/17 11:55:42
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0617 11:55:42.896945  163954 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:55:42.897110  163954 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:55:42.897122  163954 out.go:304] Setting ErrFile to fd 2...
	I0617 11:55:42.897128  163954 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:55:42.897436  163954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:55:42.898198  163954 out.go:298] Setting JSON to false
	I0617 11:55:42.899499  163954 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":5890,"bootTime":1718619453,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0617 11:55:42.899582  163954 start.go:139] virtualization: kvm guest
	I0617 11:55:42.901842  163954 out.go:177] * [kubernetes-upgrade-717156] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0617 11:55:42.903301  163954 out.go:177]   - MINIKUBE_LOCATION=19084
	I0617 11:55:42.903373  163954 notify.go:220] Checking for updates...
	I0617 11:55:42.904574  163954 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 11:55:42.906022  163954 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 11:55:42.907346  163954 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 11:55:42.908571  163954 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0617 11:55:42.909783  163954 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 11:55:42.911702  163954 config.go:182] Loaded profile config "kubernetes-upgrade-717156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:55:42.912345  163954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:55:42.912433  163954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:55:42.927695  163954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46451
	I0617 11:55:42.928111  163954 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:55:42.928735  163954 main.go:141] libmachine: Using API Version  1
	I0617 11:55:42.928759  163954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:55:42.929155  163954 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:55:42.929396  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .DriverName
	I0617 11:55:42.929663  163954 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 11:55:42.930062  163954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:55:42.930107  163954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:55:42.946265  163954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33689
	I0617 11:55:42.946771  163954 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:55:42.947381  163954 main.go:141] libmachine: Using API Version  1
	I0617 11:55:42.947405  163954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:55:42.947762  163954 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:55:42.947969  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .DriverName
	I0617 11:55:42.986342  163954 out.go:177] * Using the kvm2 driver based on existing profile
	I0617 11:55:42.987751  163954 start.go:297] selected driver: kvm2
	I0617 11:55:42.987791  163954 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-717156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.1 ClusterName:kubernetes-upgrade-717156 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.236 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:55:42.987933  163954 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 11:55:42.988942  163954 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:55:42.989049  163954 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19084-112967/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0617 11:55:43.005331  163954 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0617 11:55:43.005722  163954 cni.go:84] Creating CNI manager for ""
	I0617 11:55:43.005738  163954 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 11:55:43.005774  163954 start.go:340] cluster config:
	{Name:kubernetes-upgrade-717156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kubernetes-upgrade-717156 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.236 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:55:43.005882  163954 iso.go:125] acquiring lock: {Name:mk4a199ad46ed9ee04de7b54caf7cc64218fe80c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:55:43.008267  163954 out.go:177] * Starting "kubernetes-upgrade-717156" primary control-plane node in "kubernetes-upgrade-717156" cluster
	I0617 11:55:43.009558  163954 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 11:55:43.009599  163954 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0617 11:55:43.009610  163954 cache.go:56] Caching tarball of preloaded images
	I0617 11:55:43.009709  163954 preload.go:173] Found /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0617 11:55:43.009722  163954 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0617 11:55:43.009821  163954 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/config.json ...
	I0617 11:55:43.010031  163954 start.go:360] acquireMachinesLock for kubernetes-upgrade-717156: {Name:mk519b8956d160a9d2b042f25b899a5ee0efa72e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 11:55:43.010084  163954 start.go:364] duration metric: took 29.072µs to acquireMachinesLock for "kubernetes-upgrade-717156"
	I0617 11:55:43.010103  163954 start.go:96] Skipping create...Using existing machine configuration
	I0617 11:55:43.010108  163954 fix.go:54] fixHost starting: 
	I0617 11:55:43.010466  163954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:55:43.010508  163954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:55:43.026128  163954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38825
	I0617 11:55:43.026629  163954 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:55:43.027258  163954 main.go:141] libmachine: Using API Version  1
	I0617 11:55:43.027282  163954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:55:43.027659  163954 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:55:43.027872  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .DriverName
	I0617 11:55:43.028057  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetState
	I0617 11:55:43.029678  163954 fix.go:112] recreateIfNeeded on kubernetes-upgrade-717156: state=Running err=<nil>
	W0617 11:55:43.029699  163954 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 11:55:43.031619  163954 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-717156" VM ...
	I0617 11:55:43.032888  163954 machine.go:94] provisionDockerMachine start ...
	I0617 11:55:43.032906  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .DriverName
	I0617 11:55:43.033104  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHHostname
	I0617 11:55:43.035500  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:55:43.035985  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:c6:52", ip: ""} in network mk-kubernetes-upgrade-717156: {Iface:virbr1 ExpiryTime:2024-06-17 12:50:39 +0000 UTC Type:0 Mac:52:54:00:0c:c6:52 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:kubernetes-upgrade-717156 Clientid:01:52:54:00:0c:c6:52}
	I0617 11:55:43.036019  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined IP address 192.168.50.236 and MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:55:43.036137  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHPort
	I0617 11:55:43.036327  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHKeyPath
	I0617 11:55:43.036518  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHKeyPath
	I0617 11:55:43.036658  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHUsername
	I0617 11:55:43.036829  163954 main.go:141] libmachine: Using SSH client type: native
	I0617 11:55:43.037063  163954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I0617 11:55:43.037075  163954 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 11:55:43.164243  163954 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-717156
	
	I0617 11:55:43.164275  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetMachineName
	I0617 11:55:43.164535  163954 buildroot.go:166] provisioning hostname "kubernetes-upgrade-717156"
	I0617 11:55:43.164561  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetMachineName
	I0617 11:55:43.164780  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHHostname
	I0617 11:55:43.167858  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:55:43.168221  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:c6:52", ip: ""} in network mk-kubernetes-upgrade-717156: {Iface:virbr1 ExpiryTime:2024-06-17 12:50:39 +0000 UTC Type:0 Mac:52:54:00:0c:c6:52 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:kubernetes-upgrade-717156 Clientid:01:52:54:00:0c:c6:52}
	I0617 11:55:43.168250  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined IP address 192.168.50.236 and MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:55:43.168399  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHPort
	I0617 11:55:43.168584  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHKeyPath
	I0617 11:55:43.168737  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHKeyPath
	I0617 11:55:43.168866  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHUsername
	I0617 11:55:43.169005  163954 main.go:141] libmachine: Using SSH client type: native
	I0617 11:55:43.169168  163954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I0617 11:55:43.169180  163954 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-717156 && echo "kubernetes-upgrade-717156" | sudo tee /etc/hostname
	I0617 11:55:43.312326  163954 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-717156
	
	I0617 11:55:43.312363  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHHostname
	I0617 11:55:43.315541  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:55:43.315976  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:c6:52", ip: ""} in network mk-kubernetes-upgrade-717156: {Iface:virbr1 ExpiryTime:2024-06-17 12:50:39 +0000 UTC Type:0 Mac:52:54:00:0c:c6:52 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:kubernetes-upgrade-717156 Clientid:01:52:54:00:0c:c6:52}
	I0617 11:55:43.316008  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined IP address 192.168.50.236 and MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:55:43.316221  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHPort
	I0617 11:55:43.316449  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHKeyPath
	I0617 11:55:43.316691  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHKeyPath
	I0617 11:55:43.316866  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHUsername
	I0617 11:55:43.317083  163954 main.go:141] libmachine: Using SSH client type: native
	I0617 11:55:43.317314  163954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I0617 11:55:43.317341  163954 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-717156' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-717156/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-717156' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 11:55:43.428366  163954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 11:55:43.428402  163954 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 11:55:43.428430  163954 buildroot.go:174] setting up certificates
	I0617 11:55:43.428441  163954 provision.go:84] configureAuth start
	I0617 11:55:43.428451  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetMachineName
	I0617 11:55:43.428742  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetIP
	I0617 11:55:43.431340  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:55:43.431693  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:c6:52", ip: ""} in network mk-kubernetes-upgrade-717156: {Iface:virbr1 ExpiryTime:2024-06-17 12:50:39 +0000 UTC Type:0 Mac:52:54:00:0c:c6:52 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:kubernetes-upgrade-717156 Clientid:01:52:54:00:0c:c6:52}
	I0617 11:55:43.431723  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined IP address 192.168.50.236 and MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:55:43.431872  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHHostname
	I0617 11:55:43.434085  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:55:43.434387  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:c6:52", ip: ""} in network mk-kubernetes-upgrade-717156: {Iface:virbr1 ExpiryTime:2024-06-17 12:50:39 +0000 UTC Type:0 Mac:52:54:00:0c:c6:52 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:kubernetes-upgrade-717156 Clientid:01:52:54:00:0c:c6:52}
	I0617 11:55:43.434421  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined IP address 192.168.50.236 and MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:55:43.434508  163954 provision.go:143] copyHostCerts
	I0617 11:55:43.434592  163954 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 11:55:43.434605  163954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 11:55:43.434661  163954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 11:55:43.434749  163954 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 11:55:43.434757  163954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 11:55:43.434782  163954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 11:55:43.434850  163954 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 11:55:43.434857  163954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 11:55:43.434878  163954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 11:55:43.434931  163954 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-717156 san=[127.0.0.1 192.168.50.236 kubernetes-upgrade-717156 localhost minikube]
	I0617 11:55:43.618607  163954 provision.go:177] copyRemoteCerts
	I0617 11:55:43.618668  163954 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 11:55:43.618692  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHHostname
	I0617 11:55:43.621143  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:55:43.621454  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:c6:52", ip: ""} in network mk-kubernetes-upgrade-717156: {Iface:virbr1 ExpiryTime:2024-06-17 12:50:39 +0000 UTC Type:0 Mac:52:54:00:0c:c6:52 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:kubernetes-upgrade-717156 Clientid:01:52:54:00:0c:c6:52}
	I0617 11:55:43.621483  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined IP address 192.168.50.236 and MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:55:43.621670  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHPort
	I0617 11:55:43.621885  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHKeyPath
	I0617 11:55:43.622058  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHUsername
	I0617 11:55:43.622246  163954 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/kubernetes-upgrade-717156/id_rsa Username:docker}
	I0617 11:55:43.709709  163954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 11:55:43.737160  163954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0617 11:55:43.763579  163954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0617 11:55:43.789879  163954 provision.go:87] duration metric: took 361.423872ms to configureAuth
	I0617 11:55:43.789912  163954 buildroot.go:189] setting minikube options for container-runtime
	I0617 11:55:43.790140  163954 config.go:182] Loaded profile config "kubernetes-upgrade-717156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:55:43.790251  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHHostname
	I0617 11:55:43.793009  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:55:43.793315  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:c6:52", ip: ""} in network mk-kubernetes-upgrade-717156: {Iface:virbr1 ExpiryTime:2024-06-17 12:50:39 +0000 UTC Type:0 Mac:52:54:00:0c:c6:52 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:kubernetes-upgrade-717156 Clientid:01:52:54:00:0c:c6:52}
	I0617 11:55:43.793339  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined IP address 192.168.50.236 and MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:55:43.793505  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHPort
	I0617 11:55:43.793714  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHKeyPath
	I0617 11:55:43.793880  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHKeyPath
	I0617 11:55:43.794019  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHUsername
	I0617 11:55:43.794195  163954 main.go:141] libmachine: Using SSH client type: native
	I0617 11:55:43.794414  163954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I0617 11:55:43.794434  163954 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 11:55:44.678571  163954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 11:55:44.678600  163954 machine.go:97] duration metric: took 1.645700152s to provisionDockerMachine
	I0617 11:55:44.678610  163954 start.go:293] postStartSetup for "kubernetes-upgrade-717156" (driver="kvm2")
	I0617 11:55:44.678621  163954 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 11:55:44.678641  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .DriverName
	I0617 11:55:44.678970  163954 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 11:55:44.679005  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHHostname
	I0617 11:55:44.681312  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:55:44.681558  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:c6:52", ip: ""} in network mk-kubernetes-upgrade-717156: {Iface:virbr1 ExpiryTime:2024-06-17 12:50:39 +0000 UTC Type:0 Mac:52:54:00:0c:c6:52 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:kubernetes-upgrade-717156 Clientid:01:52:54:00:0c:c6:52}
	I0617 11:55:44.681593  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined IP address 192.168.50.236 and MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:55:44.681735  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHPort
	I0617 11:55:44.681939  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHKeyPath
	I0617 11:55:44.682152  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHUsername
	I0617 11:55:44.682296  163954 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/kubernetes-upgrade-717156/id_rsa Username:docker}
	I0617 11:55:44.765576  163954 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 11:55:44.770111  163954 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 11:55:44.770136  163954 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 11:55:44.770191  163954 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 11:55:44.770269  163954 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 11:55:44.770358  163954 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 11:55:44.779679  163954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 11:55:44.803961  163954 start.go:296] duration metric: took 125.338449ms for postStartSetup
	I0617 11:55:44.803997  163954 fix.go:56] duration metric: took 1.793887304s for fixHost
	I0617 11:55:44.804022  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHHostname
	I0617 11:55:44.806347  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:55:44.806691  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:c6:52", ip: ""} in network mk-kubernetes-upgrade-717156: {Iface:virbr1 ExpiryTime:2024-06-17 12:50:39 +0000 UTC Type:0 Mac:52:54:00:0c:c6:52 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:kubernetes-upgrade-717156 Clientid:01:52:54:00:0c:c6:52}
	I0617 11:55:44.806718  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined IP address 192.168.50.236 and MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:55:44.806866  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHPort
	I0617 11:55:44.807093  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHKeyPath
	I0617 11:55:44.807235  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHKeyPath
	I0617 11:55:44.807380  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHUsername
	I0617 11:55:44.807509  163954 main.go:141] libmachine: Using SSH client type: native
	I0617 11:55:44.807699  163954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I0617 11:55:44.807714  163954 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0617 11:55:44.930461  163954 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718625344.912395779
	
	I0617 11:55:44.930490  163954 fix.go:216] guest clock: 1718625344.912395779
	I0617 11:55:44.930500  163954 fix.go:229] Guest: 2024-06-17 11:55:44.912395779 +0000 UTC Remote: 2024-06-17 11:55:44.804001613 +0000 UTC m=+1.956754167 (delta=108.394166ms)
	I0617 11:55:44.930556  163954 fix.go:200] guest clock delta is within tolerance: 108.394166ms
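The guest-clock check above compares the VM's `date +%s.%N` output against the host-side timestamp and accepts the drift if it stays under a threshold. A minimal sketch using the two timestamps from this log; the 2-second tolerance is an assumption, not minikube's actual constant.

package main

import (
	"fmt"
	"time"
)

// clockDelta returns the absolute drift between guest and host clocks
// and whether it falls within the given tolerance.
func clockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d, d <= tolerance
}

func main() {
	host := time.Date(2024, 6, 17, 11, 55, 44, 804001613, time.UTC)  // Remote timestamp from the log
	guest := time.Date(2024, 6, 17, 11, 55, 44, 912395779, time.UTC) // Guest timestamp from the log
	d, ok := clockDelta(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", d, ok) // delta=108.394166ms within tolerance=true
}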
	I0617 11:55:44.930566  163954 start.go:83] releasing machines lock for "kubernetes-upgrade-717156", held for 1.920469778s
	I0617 11:55:44.930597  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .DriverName
	I0617 11:55:44.930881  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetIP
	I0617 11:55:44.933849  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:55:44.934332  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:c6:52", ip: ""} in network mk-kubernetes-upgrade-717156: {Iface:virbr1 ExpiryTime:2024-06-17 12:50:39 +0000 UTC Type:0 Mac:52:54:00:0c:c6:52 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:kubernetes-upgrade-717156 Clientid:01:52:54:00:0c:c6:52}
	I0617 11:55:44.934364  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined IP address 192.168.50.236 and MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:55:44.934512  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .DriverName
	I0617 11:55:44.934969  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .DriverName
	I0617 11:55:44.935119  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .DriverName
	I0617 11:55:44.935218  163954 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 11:55:44.935268  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHHostname
	I0617 11:55:44.935332  163954 ssh_runner.go:195] Run: cat /version.json
	I0617 11:55:44.935355  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHHostname
	I0617 11:55:44.937782  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:55:44.938033  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:55:44.938172  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:c6:52", ip: ""} in network mk-kubernetes-upgrade-717156: {Iface:virbr1 ExpiryTime:2024-06-17 12:50:39 +0000 UTC Type:0 Mac:52:54:00:0c:c6:52 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:kubernetes-upgrade-717156 Clientid:01:52:54:00:0c:c6:52}
	I0617 11:55:44.938198  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined IP address 192.168.50.236 and MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:55:44.938350  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:c6:52", ip: ""} in network mk-kubernetes-upgrade-717156: {Iface:virbr1 ExpiryTime:2024-06-17 12:50:39 +0000 UTC Type:0 Mac:52:54:00:0c:c6:52 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:kubernetes-upgrade-717156 Clientid:01:52:54:00:0c:c6:52}
	I0617 11:55:44.938353  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHPort
	I0617 11:55:44.938373  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined IP address 192.168.50.236 and MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:55:44.938540  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHKeyPath
	I0617 11:55:44.938547  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHPort
	I0617 11:55:44.938700  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHKeyPath
	I0617 11:55:44.938717  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHUsername
	I0617 11:55:44.938836  163954 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/kubernetes-upgrade-717156/id_rsa Username:docker}
	I0617 11:55:44.938861  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetSSHUsername
	I0617 11:55:44.939022  163954 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/kubernetes-upgrade-717156/id_rsa Username:docker}
	I0617 11:55:45.133763  163954 ssh_runner.go:195] Run: systemctl --version
	I0617 11:55:45.140575  163954 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 11:55:45.490166  163954 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 11:55:45.503578  163954 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 11:55:45.503668  163954 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 11:55:45.544829  163954 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0617 11:55:45.544859  163954 start.go:494] detecting cgroup driver to use...
	I0617 11:55:45.544933  163954 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 11:55:45.593492  163954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 11:55:45.623720  163954 docker.go:217] disabling cri-docker service (if available) ...
	I0617 11:55:45.623812  163954 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 11:55:45.642037  163954 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 11:55:45.659711  163954 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 11:55:45.863125  163954 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 11:55:46.025081  163954 docker.go:233] disabling docker service ...
	I0617 11:55:46.025179  163954 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 11:55:46.045227  163954 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 11:55:46.060301  163954 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 11:55:46.246421  163954 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 11:55:46.426278  163954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 11:55:46.443171  163954 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 11:55:46.470174  163954 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0617 11:55:46.470256  163954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:55:46.483592  163954 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 11:55:46.483659  163954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:55:46.495844  163954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:55:46.507318  163954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:55:46.519489  163954 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 11:55:46.536316  163954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:55:46.552219  163954 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:55:46.569726  163954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
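The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place. A minimal sketch (an illustration, not minikube's code) of the pause-image and cgroup-manager substitutions applied to the same kind of file text:

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the two substitutions from the log:
// pin the pause image and switch the cgroup manager.
func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return conf
}

func main() {
	in := "pause_image = \"registry.k8s.io/pause:3.8\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.9", "cgroupfs"))
}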
	I0617 11:55:46.588063  163954 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 11:55:46.600774  163954 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 11:55:46.613021  163954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:55:46.800118  163954 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0617 11:55:47.158860  163954 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 11:55:47.158936  163954 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 11:55:47.174368  163954 start.go:562] Will wait 60s for crictl version
	I0617 11:55:47.174427  163954 ssh_runner.go:195] Run: which crictl
	I0617 11:55:47.182156  163954 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 11:55:47.299523  163954 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 11:55:47.299774  163954 ssh_runner.go:195] Run: crio --version
	I0617 11:55:47.432995  163954 ssh_runner.go:195] Run: crio --version
	I0617 11:55:47.497376  163954 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0617 11:55:47.498691  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) Calling .GetIP
	I0617 11:55:47.501677  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:55:47.502046  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:c6:52", ip: ""} in network mk-kubernetes-upgrade-717156: {Iface:virbr1 ExpiryTime:2024-06-17 12:50:39 +0000 UTC Type:0 Mac:52:54:00:0c:c6:52 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:kubernetes-upgrade-717156 Clientid:01:52:54:00:0c:c6:52}
	I0617 11:55:47.502079  163954 main.go:141] libmachine: (kubernetes-upgrade-717156) DBG | domain kubernetes-upgrade-717156 has defined IP address 192.168.50.236 and MAC address 52:54:00:0c:c6:52 in network mk-kubernetes-upgrade-717156
	I0617 11:55:47.502274  163954 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0617 11:55:47.516858  163954 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-717156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.1 ClusterName:kubernetes-upgrade-717156 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.236 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 11:55:47.516999  163954 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 11:55:47.517057  163954 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 11:55:47.558453  163954 crio.go:514] all images are preloaded for cri-o runtime.
	I0617 11:55:47.558473  163954 crio.go:433] Images already preloaded, skipping extraction
	I0617 11:55:47.558520  163954 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 11:55:47.590674  163954 crio.go:514] all images are preloaded for cri-o runtime.
	I0617 11:55:47.590697  163954 cache_images.go:84] Images are preloaded, skipping loading
	I0617 11:55:47.590704  163954 kubeadm.go:928] updating node { 192.168.50.236 8443 v1.30.1 crio true true} ...
	I0617 11:55:47.590814  163954 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-717156 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.236
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:kubernetes-upgrade-717156 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
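The kubelet drop-in above is rendered from the node's values; a small sketch (helper name hypothetical) that reproduces the ExecStart line from the version, hostname, and node IP appearing in this log:

package main

import "fmt"

// kubeletExecStart rebuilds the ExecStart flags shown in the unit above.
func kubeletExecStart(version, nodeName, nodeIP string) string {
	return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet"+
		" --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf"+
		" --config=/var/lib/kubelet/config.yaml"+
		" --hostname-override=%s"+
		" --kubeconfig=/etc/kubernetes/kubelet.conf"+
		" --node-ip=%s", version, nodeName, nodeIP)
}

func main() {
	fmt.Println(kubeletExecStart("v1.30.1", "kubernetes-upgrade-717156", "192.168.50.236"))
}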
	I0617 11:55:47.590884  163954 ssh_runner.go:195] Run: crio config
	I0617 11:55:47.632842  163954 cni.go:84] Creating CNI manager for ""
	I0617 11:55:47.632864  163954 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 11:55:47.632876  163954 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 11:55:47.632898  163954 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.236 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-717156 NodeName:kubernetes-upgrade-717156 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.236"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.236 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0617 11:55:47.633016  163954 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.236
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-717156"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.236
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.236"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
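As a quick sanity check on the kubeadm config above, this standard-library sketch (not part of the test itself) confirms the pod subnet, service subnet, and node IP do not overlap:

package main

import (
	"fmt"
	"net"
)

func main() {
	_, podNet, _ := net.ParseCIDR("10.244.0.0/16") // podSubnet from the config
	_, svcNet, _ := net.ParseCIDR("10.96.0.0/12")  // serviceSubnet from the config
	nodeIP := net.ParseIP("192.168.50.236")        // advertiseAddress / node-ip

	fmt.Println("pod subnet contains node IP:    ", podNet.Contains(nodeIP)) // false
	fmt.Println("service subnet contains node IP:", svcNet.Contains(nodeIP)) // false
	fmt.Println("subnets overlap:                ", svcNet.Contains(podNet.IP) || podNet.Contains(svcNet.IP)) // false
}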
	
	I0617 11:55:47.633072  163954 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0617 11:55:47.643007  163954 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 11:55:47.643086  163954 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 11:55:47.652406  163954 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0617 11:55:47.668000  163954 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 11:55:47.684340  163954 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0617 11:55:47.700449  163954 ssh_runner.go:195] Run: grep 192.168.50.236	control-plane.minikube.internal$ /etc/hosts
	I0617 11:55:47.705138  163954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:55:47.853217  163954 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 11:55:47.868377  163954 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156 for IP: 192.168.50.236
	I0617 11:55:47.868403  163954 certs.go:194] generating shared ca certs ...
	I0617 11:55:47.868422  163954 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:55:47.868584  163954 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 11:55:47.868622  163954 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 11:55:47.868631  163954 certs.go:256] generating profile certs ...
	I0617 11:55:47.868704  163954 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/client.key
	I0617 11:55:47.868757  163954 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/apiserver.key.f24592da
	I0617 11:55:47.868798  163954 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/proxy-client.key
	I0617 11:55:47.868898  163954 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 11:55:47.868925  163954 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 11:55:47.868935  163954 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 11:55:47.868955  163954 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 11:55:47.868981  163954 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 11:55:47.869003  163954 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 11:55:47.869040  163954 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 11:55:47.869673  163954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 11:55:47.896104  163954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 11:55:47.921596  163954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 11:55:47.947477  163954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 11:55:47.972643  163954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0617 11:55:47.998493  163954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0617 11:55:48.029457  163954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 11:55:48.054208  163954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/kubernetes-upgrade-717156/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0617 11:55:48.078756  163954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 11:55:48.102879  163954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 11:55:48.127627  163954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 11:55:48.153615  163954 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 11:55:48.173531  163954 ssh_runner.go:195] Run: openssl version
	I0617 11:55:48.180065  163954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 11:55:48.191950  163954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 11:55:48.196350  163954 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 11:55:48.196409  163954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 11:55:48.202395  163954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 11:55:48.212140  163954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 11:55:48.237618  163954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:55:48.249779  163954 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:55:48.249847  163954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:55:48.255954  163954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 11:55:48.266154  163954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 11:55:48.278550  163954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 11:55:48.282952  163954 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 11:55:48.282997  163954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 11:55:48.288459  163954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
	I0617 11:55:48.298636  163954 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 11:55:48.303267  163954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 11:55:48.308983  163954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 11:55:48.314500  163954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 11:55:48.319964  163954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 11:55:48.325349  163954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 11:55:48.330740  163954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
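The `-checkend 86400` calls above exit non-zero when a certificate expires within 24 hours. A hedged sketch of the same check driven from Go (requires openssl on PATH; the cert path is taken from the log):

package main

import (
	"fmt"
	"os/exec"
)

// validFor24h reports whether the certificate at path is still valid
// 86400 seconds (24h) from now, mirroring the openssl checks above.
func validFor24h(path string) bool {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400")
	return cmd.Run() == nil
}

func main() {
	fmt.Println(validFor24h("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
}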
	I0617 11:55:48.336104  163954 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-717156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.30.1 ClusterName:kubernetes-upgrade-717156 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.236 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:55:48.336188  163954 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 11:55:48.336221  163954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 11:55:48.376232  163954 cri.go:89] found id: "2193ff27c0c6a22099fd14a498d3b31d580ad03227a07f5597e6afce87eec5f0"
	I0617 11:55:48.376258  163954 cri.go:89] found id: "bcd777136f2b5925d7e9e84d688664a89943e8acd7e95e7644411fc7d2fcd989"
	I0617 11:55:48.376262  163954 cri.go:89] found id: "4cbd627cfdf11348a1b66dc236a7f39d02c5c5d9cfb4e763e87c7e037f5f19ea"
	I0617 11:55:48.376265  163954 cri.go:89] found id: "f4a9a4a425aec782fb37181c48421a914f923c64f241ff9c7445b88ae92d1480"
	I0617 11:55:48.376268  163954 cri.go:89] found id: ""
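The `crictl ps -a --quiet` output consumed above is just newline-separated container IDs; a minimal parsing sketch (function name hypothetical) using two of the IDs from this log:

package main

import (
	"fmt"
	"strings"
)

// parseContainerIDs splits `crictl ps -a --quiet` output into IDs,
// dropping blank lines.
func parseContainerIDs(out string) []string {
	var ids []string
	for _, line := range strings.Split(out, "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids
}

func main() {
	out := "2193ff27c0c6a22099fd14a498d3b31d580ad03227a07f5597e6afce87eec5f0\n" +
		"bcd777136f2b5925d7e9e84d688664a89943e8acd7e95e7644411fc7d2fcd989\n"
	fmt.Println(parseContainerIDs(out)) // prints the two IDs
}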
	I0617 11:55:48.376308  163954 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jun 17 11:55:57 kubernetes-upgrade-717156 crio[1877]: time="2024-06-17 11:55:57.363973554Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718625357363953523,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe9a56bf-0561-4979-a78d-063481ecb7ad name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:55:57 kubernetes-upgrade-717156 crio[1877]: time="2024-06-17 11:55:57.364524659Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e8a07c23-57f6-4045-910d-c89665f0bfc3 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:55:57 kubernetes-upgrade-717156 crio[1877]: time="2024-06-17 11:55:57.364591730Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e8a07c23-57f6-4045-910d-c89665f0bfc3 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:55:57 kubernetes-upgrade-717156 crio[1877]: time="2024-06-17 11:55:57.364859580Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f13357e9e493d4370016796c9d66d03a9aca2c504afef26deb11dd320cd7a8c9,PodSandboxId:4d497b44158c99768605e7274163f89725953ce6290052534bf61f3f9f709a1b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718625350226301441,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-717156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7d5cd9a3e3bb686bd84449c77d45d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3ba545e7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45755fb6b32ce2681722e0ed29c087b4eec6063f2497a6b281501935f331b12f,PodSandboxId:7e1757637578f133b8ee4097445026f0c8996b1ddad3aa57d9f0df578362d0fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718625350249677082,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-717156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 043b7d34a45c3a903d70e24da5de6728,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84b02fe00239df5ec532f570d1a53de4c6d1916d19708f1d1fad5a39b9e95486,PodSandboxId:db4b98065c06cdec2da6adbcf3bb3644f29b9c9945b4ae11682087d76d366ef6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718625350244088026,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-717156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72448844f3e0ecd480e1f8bb5cd890,},Annotations:map[string]string{io.kubernetes.container.hash: 288a69c,io.kubernetes.container.restartCount: 2,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4c725bd62ff4532a663b95aebf83a21b0da0aec439883cd876af09a195b4f07,PodSandboxId:2356fcecb14384bd9251d3cc031f7b8d74ed308b93192ec1c5a0ca0f9b836af2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718625350239892995,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-717156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 209bc8d070ff0daa38bf352094c261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd777136f2b5925d7e9e84d688664a89943e8acd7e95e7644411fc7d2fcd989,PodSandboxId:973306ce2f7214f15c61f465dd35287027e8da7d03ac4b6be3f33e4432c0753f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718625345274104506,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-717156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 209bc8d070ff0daa38bf352094c261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2193ff27c0c6a22099fd14a498d3b31d580ad03227a07f5597e6afce87eec5f0,PodSandboxId:dfec1b32129205da0718dd92832e467cf4a84193ab1f00ac94112e40fcc353d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718625345317497729,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-717156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7d5cd9a3e3bb686bd84449c77d45d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3ba545e7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4a9a4a425aec782fb37181c48421a914f923c64f241ff9c7445b88ae92d1480,PodSandboxId:7cb0926a100cc0655b97f4c5d05676d7556a1d364ac4c7b6a9ac7cf14dc65bc4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718625345193758552,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-717156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72448844f3e0ecd480e1f8bb5cd890,},Annotations:map[string]string{io.kubernetes.container.hash: 288a69c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cbd627cfdf11348a1b66dc236a7f39d02c5c5d9cfb4e763e87c7e037f5f19ea,PodSandboxId:1ce571bfdae9a545305462dcf8800f07816683b2a58cf3784c8a057bae317187,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718625345239923848,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-717156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 043b7d34a45c3a903d70e24da5de6728,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e8a07c23-57f6-4045-910d-c89665f0bfc3 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:55:57 kubernetes-upgrade-717156 crio[1877]: time="2024-06-17 11:55:57.400790287Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4f830564-7ed1-4bf7-bfc7-1c294ad965f0 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:55:57 kubernetes-upgrade-717156 crio[1877]: time="2024-06-17 11:55:57.400868735Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4f830564-7ed1-4bf7-bfc7-1c294ad965f0 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:55:57 kubernetes-upgrade-717156 crio[1877]: time="2024-06-17 11:55:57.402191201Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f8ab4dc1-6867-4414-80cc-b0af261afa17 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:55:57 kubernetes-upgrade-717156 crio[1877]: time="2024-06-17 11:55:57.402605018Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718625357402582548,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f8ab4dc1-6867-4414-80cc-b0af261afa17 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:55:57 kubernetes-upgrade-717156 crio[1877]: time="2024-06-17 11:55:57.403172273Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=97722697-ba4a-4172-8856-4c8168025221 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:55:57 kubernetes-upgrade-717156 crio[1877]: time="2024-06-17 11:55:57.403237586Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=97722697-ba4a-4172-8856-4c8168025221 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:55:57 kubernetes-upgrade-717156 crio[1877]: time="2024-06-17 11:55:57.403445041Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f13357e9e493d4370016796c9d66d03a9aca2c504afef26deb11dd320cd7a8c9,PodSandboxId:4d497b44158c99768605e7274163f89725953ce6290052534bf61f3f9f709a1b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718625350226301441,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-717156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7d5cd9a3e3bb686bd84449c77d45d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3ba545e7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45755fb6b32ce2681722e0ed29c087b4eec6063f2497a6b281501935f331b12f,PodSandboxId:7e1757637578f133b8ee4097445026f0c8996b1ddad3aa57d9f0df578362d0fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718625350249677082,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-717156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 043b7d34a45c3a903d70e24da5de6728,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84b02fe00239df5ec532f570d1a53de4c6d1916d19708f1d1fad5a39b9e95486,PodSandboxId:db4b98065c06cdec2da6adbcf3bb3644f29b9c9945b4ae11682087d76d366ef6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718625350244088026,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-717156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72448844f3e0ecd480e1f8bb5cd890,},Annotations:map[string]string{io.kubernetes.container.hash: 288a69c,io.kubernetes.container.restartCount: 2,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4c725bd62ff4532a663b95aebf83a21b0da0aec439883cd876af09a195b4f07,PodSandboxId:2356fcecb14384bd9251d3cc031f7b8d74ed308b93192ec1c5a0ca0f9b836af2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718625350239892995,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-717156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 209bc8d070ff0daa38bf352094c261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd777136f2b5925d7e9e84d688664a89943e8acd7e95e7644411fc7d2fcd989,PodSandboxId:973306ce2f7214f15c61f465dd35287027e8da7d03ac4b6be3f33e4432c0753f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718625345274104506,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-717156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 209bc8d070ff0daa38bf352094c261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2193ff27c0c6a22099fd14a498d3b31d580ad03227a07f5597e6afce87eec5f0,PodSandboxId:dfec1b32129205da0718dd92832e467cf4a84193ab1f00ac94112e40fcc353d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718625345317497729,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-717156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7d5cd9a3e3bb686bd84449c77d45d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3ba545e7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4a9a4a425aec782fb37181c48421a914f923c64f241ff9c7445b88ae92d1480,PodSandboxId:7cb0926a100cc0655b97f4c5d05676d7556a1d364ac4c7b6a9ac7cf14dc65bc4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718625345193758552,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-717156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72448844f3e0ecd480e1f8bb5cd890,},Annotations:map[string]string{io.kubernetes.container.hash: 288a69c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cbd627cfdf11348a1b66dc236a7f39d02c5c5d9cfb4e763e87c7e037f5f19ea,PodSandboxId:1ce571bfdae9a545305462dcf8800f07816683b2a58cf3784c8a057bae317187,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718625345239923848,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-717156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 043b7d34a45c3a903d70e24da5de6728,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=97722697-ba4a-4172-8856-4c8168025221 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:55:57 kubernetes-upgrade-717156 crio[1877]: time="2024-06-17 11:55:57.448076976Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=706814c0-7f58-4d68-b333-34c6ae3a4c6a name=/runtime.v1.RuntimeService/Version
	Jun 17 11:55:57 kubernetes-upgrade-717156 crio[1877]: time="2024-06-17 11:55:57.448165945Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=706814c0-7f58-4d68-b333-34c6ae3a4c6a name=/runtime.v1.RuntimeService/Version
	Jun 17 11:55:57 kubernetes-upgrade-717156 crio[1877]: time="2024-06-17 11:55:57.450363262Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ea1ffdca-05d7-45c3-84c2-46fb12c5bb32 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:55:57 kubernetes-upgrade-717156 crio[1877]: time="2024-06-17 11:55:57.450820029Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718625357450797122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ea1ffdca-05d7-45c3-84c2-46fb12c5bb32 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:55:57 kubernetes-upgrade-717156 crio[1877]: time="2024-06-17 11:55:57.451402599Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ec531e1f-e1d8-4b82-b57e-c64387bcae83 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:55:57 kubernetes-upgrade-717156 crio[1877]: time="2024-06-17 11:55:57.451450309Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ec531e1f-e1d8-4b82-b57e-c64387bcae83 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:55:57 kubernetes-upgrade-717156 crio[1877]: time="2024-06-17 11:55:57.452268979Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f13357e9e493d4370016796c9d66d03a9aca2c504afef26deb11dd320cd7a8c9,PodSandboxId:4d497b44158c99768605e7274163f89725953ce6290052534bf61f3f9f709a1b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718625350226301441,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-717156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7d5cd9a3e3bb686bd84449c77d45d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3ba545e7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45755fb6b32ce2681722e0ed29c087b4eec6063f2497a6b281501935f331b12f,PodSandboxId:7e1757637578f133b8ee4097445026f0c8996b1ddad3aa57d9f0df578362d0fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718625350249677082,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-717156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 043b7d34a45c3a903d70e24da5de6728,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84b02fe00239df5ec532f570d1a53de4c6d1916d19708f1d1fad5a39b9e95486,PodSandboxId:db4b98065c06cdec2da6adbcf3bb3644f29b9c9945b4ae11682087d76d366ef6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718625350244088026,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-717156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72448844f3e0ecd480e1f8bb5cd890,},Annotations:map[string]string{io.kubernetes.container.hash: 288a69c,io.kubernetes.container.restartCount: 2,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4c725bd62ff4532a663b95aebf83a21b0da0aec439883cd876af09a195b4f07,PodSandboxId:2356fcecb14384bd9251d3cc031f7b8d74ed308b93192ec1c5a0ca0f9b836af2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718625350239892995,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-717156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 209bc8d070ff0daa38bf352094c261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd777136f2b5925d7e9e84d688664a89943e8acd7e95e7644411fc7d2fcd989,PodSandboxId:973306ce2f7214f15c61f465dd35287027e8da7d03ac4b6be3f33e4432c0753f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718625345274104506,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-717156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 209bc8d070ff0daa38bf352094c261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2193ff27c0c6a22099fd14a498d3b31d580ad03227a07f5597e6afce87eec5f0,PodSandboxId:dfec1b32129205da0718dd92832e467cf4a84193ab1f00ac94112e40fcc353d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718625345317497729,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-717156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7d5cd9a3e3bb686bd84449c77d45d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3ba545e7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4a9a4a425aec782fb37181c48421a914f923c64f241ff9c7445b88ae92d1480,PodSandboxId:7cb0926a100cc0655b97f4c5d05676d7556a1d364ac4c7b6a9ac7cf14dc65bc4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718625345193758552,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-717156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72448844f3e0ecd480e1f8bb5cd890,},Annotations:map[string]string{io.kubernetes.container.hash: 288a69c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cbd627cfdf11348a1b66dc236a7f39d02c5c5d9cfb4e763e87c7e037f5f19ea,PodSandboxId:1ce571bfdae9a545305462dcf8800f07816683b2a58cf3784c8a057bae317187,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718625345239923848,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-717156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 043b7d34a45c3a903d70e24da5de6728,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ec531e1f-e1d8-4b82-b57e-c64387bcae83 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:55:57 kubernetes-upgrade-717156 crio[1877]: time="2024-06-17 11:55:57.490205205Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a575ba9f-733b-4333-8aec-589c736266bc name=/runtime.v1.RuntimeService/Version
	Jun 17 11:55:57 kubernetes-upgrade-717156 crio[1877]: time="2024-06-17 11:55:57.490301453Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a575ba9f-733b-4333-8aec-589c736266bc name=/runtime.v1.RuntimeService/Version
	Jun 17 11:55:57 kubernetes-upgrade-717156 crio[1877]: time="2024-06-17 11:55:57.492057201Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c970bba1-528c-4cc9-ae7a-db97a5b877da name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:55:57 kubernetes-upgrade-717156 crio[1877]: time="2024-06-17 11:55:57.492412834Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718625357492392145,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c970bba1-528c-4cc9-ae7a-db97a5b877da name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:55:57 kubernetes-upgrade-717156 crio[1877]: time="2024-06-17 11:55:57.493477053Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=484bf7dd-3c88-48da-9d6d-ac11da1f0309 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:55:57 kubernetes-upgrade-717156 crio[1877]: time="2024-06-17 11:55:57.493528139Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=484bf7dd-3c88-48da-9d6d-ac11da1f0309 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:55:57 kubernetes-upgrade-717156 crio[1877]: time="2024-06-17 11:55:57.493785926Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f13357e9e493d4370016796c9d66d03a9aca2c504afef26deb11dd320cd7a8c9,PodSandboxId:4d497b44158c99768605e7274163f89725953ce6290052534bf61f3f9f709a1b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718625350226301441,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-717156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7d5cd9a3e3bb686bd84449c77d45d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3ba545e7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45755fb6b32ce2681722e0ed29c087b4eec6063f2497a6b281501935f331b12f,PodSandboxId:7e1757637578f133b8ee4097445026f0c8996b1ddad3aa57d9f0df578362d0fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718625350249677082,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-717156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 043b7d34a45c3a903d70e24da5de6728,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84b02fe00239df5ec532f570d1a53de4c6d1916d19708f1d1fad5a39b9e95486,PodSandboxId:db4b98065c06cdec2da6adbcf3bb3644f29b9c9945b4ae11682087d76d366ef6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718625350244088026,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-717156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72448844f3e0ecd480e1f8bb5cd890,},Annotations:map[string]string{io.kubernetes.container.hash: 288a69c,io.kubernetes.container.restartCount: 2,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4c725bd62ff4532a663b95aebf83a21b0da0aec439883cd876af09a195b4f07,PodSandboxId:2356fcecb14384bd9251d3cc031f7b8d74ed308b93192ec1c5a0ca0f9b836af2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718625350239892995,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-717156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 209bc8d070ff0daa38bf352094c261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd777136f2b5925d7e9e84d688664a89943e8acd7e95e7644411fc7d2fcd989,PodSandboxId:973306ce2f7214f15c61f465dd35287027e8da7d03ac4b6be3f33e4432c0753f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718625345274104506,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-717156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 209bc8d070ff0daa38bf352094c261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2193ff27c0c6a22099fd14a498d3b31d580ad03227a07f5597e6afce87eec5f0,PodSandboxId:dfec1b32129205da0718dd92832e467cf4a84193ab1f00ac94112e40fcc353d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718625345317497729,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-717156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7d5cd9a3e3bb686bd84449c77d45d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3ba545e7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4a9a4a425aec782fb37181c48421a914f923c64f241ff9c7445b88ae92d1480,PodSandboxId:7cb0926a100cc0655b97f4c5d05676d7556a1d364ac4c7b6a9ac7cf14dc65bc4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718625345193758552,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-717156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72448844f3e0ecd480e1f8bb5cd890,},Annotations:map[string]string{io.kubernetes.container.hash: 288a69c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cbd627cfdf11348a1b66dc236a7f39d02c5c5d9cfb4e763e87c7e037f5f19ea,PodSandboxId:1ce571bfdae9a545305462dcf8800f07816683b2a58cf3784c8a057bae317187,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718625345239923848,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-717156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 043b7d34a45c3a903d70e24da5de6728,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=484bf7dd-3c88-48da-9d6d-ac11da1f0309 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	45755fb6b32ce       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   7 seconds ago       Running             kube-controller-manager   2                   7e1757637578f       kube-controller-manager-kubernetes-upgrade-717156
	84b02fe00239d       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   7 seconds ago       Running             kube-apiserver            2                   db4b98065c06c       kube-apiserver-kubernetes-upgrade-717156
	e4c725bd62ff4       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   7 seconds ago       Running             kube-scheduler            2                   2356fcecb1438       kube-scheduler-kubernetes-upgrade-717156
	f13357e9e493d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   7 seconds ago       Running             etcd                      2                   4d497b44158c9       etcd-kubernetes-upgrade-717156
	2193ff27c0c6a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   12 seconds ago      Exited              etcd                      1                   dfec1b3212920       etcd-kubernetes-upgrade-717156
	bcd777136f2b5       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   12 seconds ago      Exited              kube-scheduler            1                   973306ce2f721       kube-scheduler-kubernetes-upgrade-717156
	4cbd627cfdf11       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   12 seconds ago      Exited              kube-controller-manager   1                   1ce571bfdae9a       kube-controller-manager-kubernetes-upgrade-717156
	f4a9a4a425aec       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   12 seconds ago      Exited              kube-apiserver            1                   7cb0926a100cc       kube-apiserver-kubernetes-upgrade-717156
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-717156
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-717156
	                    kubernetes.io/os=linux
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jun 2024 11:55:34 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-717156
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jun 2024 11:55:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jun 2024 11:55:54 +0000   Mon, 17 Jun 2024 11:55:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jun 2024 11:55:54 +0000   Mon, 17 Jun 2024 11:55:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jun 2024 11:55:54 +0000   Mon, 17 Jun 2024 11:55:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jun 2024 11:55:54 +0000   Mon, 17 Jun 2024 11:55:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.236
	  Hostname:    kubernetes-upgrade-717156
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4c31fb09fdff4b019e16d0a4efd4d5d7
	  System UUID:                4c31fb09-fdff-4b01-9e16-d0a4efd4d5d7
	  Boot ID:                    1ac17ef0-a3e9-4b22-abf4-f1fc86a574d8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 kube-apiserver-kubernetes-upgrade-717156             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-717156    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18s
	  kube-system                 kube-scheduler-kubernetes-upgrade-717156             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                550m (27%)  0 (0%)
	  memory             0 (0%)      0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 28s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s (x8 over 28s)  kubelet  Node kubernetes-upgrade-717156 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 28s)  kubelet  Node kubernetes-upgrade-717156 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x7 over 28s)  kubelet  Node kubernetes-upgrade-717156 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26s                kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 8s                 kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)    kubelet  Node kubernetes-upgrade-717156 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)    kubelet  Node kubernetes-upgrade-717156 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)    kubelet  Node kubernetes-upgrade-717156 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet  Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +2.459028] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.590782] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.593553] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.061782] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066265] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.170486] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.143111] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.270154] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +4.167929] systemd-fstab-generator[738]: Ignoring "noauto" option for root device
	[  +2.373778] systemd-fstab-generator[860]: Ignoring "noauto" option for root device
	[  +0.059692] kauditd_printk_skb: 158 callbacks suppressed
	[ +12.200077] systemd-fstab-generator[1253]: Ignoring "noauto" option for root device
	[  +0.071455] kauditd_printk_skb: 69 callbacks suppressed
	[  +3.858089] systemd-fstab-generator[1790]: Ignoring "noauto" option for root device
	[  +0.182507] systemd-fstab-generator[1806]: Ignoring "noauto" option for root device
	[  +0.206162] systemd-fstab-generator[1823]: Ignoring "noauto" option for root device
	[  +0.177428] systemd-fstab-generator[1835]: Ignoring "noauto" option for root device
	[  +0.386156] systemd-fstab-generator[1863]: Ignoring "noauto" option for root device
	[  +0.514704] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.548780] systemd-fstab-generator[2197]: Ignoring "noauto" option for root device
	[  +1.787102] systemd-fstab-generator[2324]: Ignoring "noauto" option for root device
	[  +6.202737] systemd-fstab-generator[2584]: Ignoring "noauto" option for root device
	[  +0.073252] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [2193ff27c0c6a22099fd14a498d3b31d580ad03227a07f5597e6afce87eec5f0] <==
	{"level":"info","ts":"2024-06-17T11:55:45.672908Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"10.364417ms"}
	{"level":"info","ts":"2024-06-17T11:55:45.678789Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-06-17T11:55:45.690237Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"2f771fd227fc0b9","local-member-id":"710818061415868e","commit-index":293}
	{"level":"info","ts":"2024-06-17T11:55:45.690361Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"710818061415868e switched to configuration voters=()"}
	{"level":"info","ts":"2024-06-17T11:55:45.690417Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"710818061415868e became follower at term 2"}
	{"level":"info","ts":"2024-06-17T11:55:45.690445Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 710818061415868e [peers: [], term: 2, commit: 293, applied: 0, lastindex: 293, lastterm: 2]"}
	{"level":"warn","ts":"2024-06-17T11:55:45.699831Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-06-17T11:55:45.729065Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":286}
	{"level":"info","ts":"2024-06-17T11:55:45.7408Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-06-17T11:55:45.749353Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"710818061415868e","timeout":"7s"}
	{"level":"info","ts":"2024-06-17T11:55:45.752838Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"710818061415868e"}
	{"level":"info","ts":"2024-06-17T11:55:45.753021Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"710818061415868e","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-06-17T11:55:45.756996Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-06-17T11:55:45.757203Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-17T11:55:45.757246Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-17T11:55:45.75727Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-17T11:55:45.757518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"710818061415868e switched to configuration voters=(8144786340485367438)"}
	{"level":"info","ts":"2024-06-17T11:55:45.757588Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2f771fd227fc0b9","local-member-id":"710818061415868e","added-peer-id":"710818061415868e","added-peer-peer-urls":["https://192.168.50.236:2380"]}
	{"level":"info","ts":"2024-06-17T11:55:45.757765Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2f771fd227fc0b9","local-member-id":"710818061415868e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-17T11:55:45.757807Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-17T11:55:45.794749Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.236:2380"}
	{"level":"info","ts":"2024-06-17T11:55:45.794817Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.236:2380"}
	{"level":"info","ts":"2024-06-17T11:55:45.79266Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-17T11:55:45.79629Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"710818061415868e","initial-advertise-peer-urls":["https://192.168.50.236:2380"],"listen-peer-urls":["https://192.168.50.236:2380"],"advertise-client-urls":["https://192.168.50.236:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.236:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-17T11:55:45.796387Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> etcd [f13357e9e493d4370016796c9d66d03a9aca2c504afef26deb11dd320cd7a8c9] <==
	{"level":"info","ts":"2024-06-17T11:55:50.670966Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"710818061415868e","initial-advertise-peer-urls":["https://192.168.50.236:2380"],"listen-peer-urls":["https://192.168.50.236:2380"],"advertise-client-urls":["https://192.168.50.236:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.236:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-17T11:55:50.672767Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-17T11:55:50.670247Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.236:2380"}
	{"level":"info","ts":"2024-06-17T11:55:50.672879Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.236:2380"}
	{"level":"info","ts":"2024-06-17T11:55:50.670382Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-17T11:55:50.672929Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-17T11:55:50.672956Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-17T11:55:50.670633Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"710818061415868e switched to configuration voters=(8144786340485367438)"}
	{"level":"info","ts":"2024-06-17T11:55:50.675828Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2f771fd227fc0b9","local-member-id":"710818061415868e","added-peer-id":"710818061415868e","added-peer-peer-urls":["https://192.168.50.236:2380"]}
	{"level":"info","ts":"2024-06-17T11:55:50.675991Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2f771fd227fc0b9","local-member-id":"710818061415868e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-17T11:55:50.676039Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-17T11:55:52.514571Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"710818061415868e is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-17T11:55:52.514692Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"710818061415868e became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-17T11:55:52.514821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"710818061415868e received MsgPreVoteResp from 710818061415868e at term 2"}
	{"level":"info","ts":"2024-06-17T11:55:52.514859Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"710818061415868e became candidate at term 3"}
	{"level":"info","ts":"2024-06-17T11:55:52.514883Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"710818061415868e received MsgVoteResp from 710818061415868e at term 3"}
	{"level":"info","ts":"2024-06-17T11:55:52.51491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"710818061415868e became leader at term 3"}
	{"level":"info","ts":"2024-06-17T11:55:52.514934Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 710818061415868e elected leader 710818061415868e at term 3"}
	{"level":"info","ts":"2024-06-17T11:55:52.520033Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"710818061415868e","local-member-attributes":"{Name:kubernetes-upgrade-717156 ClientURLs:[https://192.168.50.236:2379]}","request-path":"/0/members/710818061415868e/attributes","cluster-id":"2f771fd227fc0b9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-17T11:55:52.520049Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-17T11:55:52.520268Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-17T11:55:52.520301Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-17T11:55:52.520068Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-17T11:55:52.522164Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.236:2379"}
	{"level":"info","ts":"2024-06-17T11:55:52.522282Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 11:55:57 up 0 min,  0 users,  load average: 1.37, 0.33, 0.11
	Linux kubernetes-upgrade-717156 5.10.207 #1 SMP Tue Jun 11 00:16:05 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [84b02fe00239df5ec532f570d1a53de4c6d1916d19708f1d1fad5a39b9e95486] <==
	I0617 11:55:53.806154       1 controller.go:78] Starting OpenAPI AggregationController
	I0617 11:55:53.850627       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0617 11:55:53.850777       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0617 11:55:53.906199       1 shared_informer.go:320] Caches are synced for configmaps
	I0617 11:55:53.951694       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0617 11:55:53.951867       1 aggregator.go:165] initial CRD sync complete...
	I0617 11:55:53.951895       1 autoregister_controller.go:141] Starting autoregister controller
	I0617 11:55:53.951917       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0617 11:55:53.951940       1 cache.go:39] Caches are synced for autoregister controller
	I0617 11:55:53.990410       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0617 11:55:53.995833       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0617 11:55:53.995881       1 policy_source.go:224] refreshing policies
	I0617 11:55:53.997078       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0617 11:55:53.997258       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0617 11:55:53.997292       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0617 11:55:53.997381       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0617 11:55:53.998110       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0617 11:55:53.998487       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0617 11:55:54.003435       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0617 11:55:54.809595       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0617 11:55:55.441078       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0617 11:55:55.450608       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0617 11:55:55.482815       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0617 11:55:55.623509       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0617 11:55:55.630184       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [f4a9a4a425aec782fb37181c48421a914f923c64f241ff9c7445b88ae92d1480] <==
	I0617 11:55:45.713991       1 options.go:221] external host was not specified, using 192.168.50.236
	I0617 11:55:45.733695       1 server.go:148] Version: v1.30.1
	I0617 11:55:45.733890       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [45755fb6b32ce2681722e0ed29c087b4eec6063f2497a6b281501935f331b12f] <==
	I0617 11:55:57.333308       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0617 11:55:57.333393       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0617 11:55:57.333401       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0617 11:55:57.383026       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0617 11:55:57.383104       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0617 11:55:57.534231       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0617 11:55:57.534248       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0617 11:55:57.534264       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0617 11:55:57.534306       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0617 11:55:57.534315       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0617 11:55:57.687105       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0617 11:55:57.687294       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0617 11:55:57.687385       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0617 11:55:57.733556       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0617 11:55:57.733581       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0617 11:55:57.733592       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0617 11:55:57.733601       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0617 11:55:57.733799       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0617 11:55:57.733813       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0617 11:55:57.883822       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0617 11:55:57.883952       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0617 11:55:57.883962       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0617 11:55:57.932402       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0617 11:55:57.932924       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0617 11:55:57.932957       1 shared_informer.go:313] Waiting for caches to sync for GC
	
	
	==> kube-controller-manager [4cbd627cfdf11348a1b66dc236a7f39d02c5c5d9cfb4e763e87c7e037f5f19ea] <==
	I0617 11:55:46.149818       1 serving.go:380] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [bcd777136f2b5925d7e9e84d688664a89943e8acd7e95e7644411fc7d2fcd989] <==
	
	
	==> kube-scheduler [e4c725bd62ff4532a663b95aebf83a21b0da0aec439883cd876af09a195b4f07] <==
	I0617 11:55:51.726582       1 serving.go:380] Generated self-signed cert in-memory
	W0617 11:55:53.877037       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0617 11:55:53.877133       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0617 11:55:53.877195       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0617 11:55:53.877220       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0617 11:55:53.909883       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0617 11:55:53.909917       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0617 11:55:53.914872       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0617 11:55:53.914998       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0617 11:55:53.915031       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0617 11:55:53.915402       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0617 11:55:54.016055       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 17 11:55:50 kubernetes-upgrade-717156 kubelet[2331]: I0617 11:55:50.033551    2331 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/209bc8d070ff0daa38bf352094c261d4-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-717156\" (UID: \"209bc8d070ff0daa38bf352094c261d4\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-717156"
	Jun 17 11:55:50 kubernetes-upgrade-717156 kubelet[2331]: I0617 11:55:50.033772    2331 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/3b7d5cd9a3e3bb686bd84449c77d45d4-etcd-data\") pod \"etcd-kubernetes-upgrade-717156\" (UID: \"3b7d5cd9a3e3bb686bd84449c77d45d4\") " pod="kube-system/etcd-kubernetes-upgrade-717156"
	Jun 17 11:55:50 kubernetes-upgrade-717156 kubelet[2331]: I0617 11:55:50.033847    2331 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa72448844f3e0ecd480e1f8bb5cd890-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-717156\" (UID: \"fa72448844f3e0ecd480e1f8bb5cd890\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-717156"
	Jun 17 11:55:50 kubernetes-upgrade-717156 kubelet[2331]: I0617 11:55:50.033905    2331 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/043b7d34a45c3a903d70e24da5de6728-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-717156\" (UID: \"043b7d34a45c3a903d70e24da5de6728\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-717156"
	Jun 17 11:55:50 kubernetes-upgrade-717156 kubelet[2331]: I0617 11:55:50.033974    2331 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/043b7d34a45c3a903d70e24da5de6728-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-717156\" (UID: \"043b7d34a45c3a903d70e24da5de6728\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-717156"
	Jun 17 11:55:50 kubernetes-upgrade-717156 kubelet[2331]: I0617 11:55:50.034103    2331 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/043b7d34a45c3a903d70e24da5de6728-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-717156\" (UID: \"043b7d34a45c3a903d70e24da5de6728\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-717156"
	Jun 17 11:55:50 kubernetes-upgrade-717156 kubelet[2331]: I0617 11:55:50.034176    2331 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/3b7d5cd9a3e3bb686bd84449c77d45d4-etcd-certs\") pod \"etcd-kubernetes-upgrade-717156\" (UID: \"3b7d5cd9a3e3bb686bd84449c77d45d4\") " pod="kube-system/etcd-kubernetes-upgrade-717156"
	Jun 17 11:55:50 kubernetes-upgrade-717156 kubelet[2331]: I0617 11:55:50.034225    2331 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa72448844f3e0ecd480e1f8bb5cd890-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-717156\" (UID: \"fa72448844f3e0ecd480e1f8bb5cd890\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-717156"
	Jun 17 11:55:50 kubernetes-upgrade-717156 kubelet[2331]: I0617 11:55:50.034269    2331 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa72448844f3e0ecd480e1f8bb5cd890-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-717156\" (UID: \"fa72448844f3e0ecd480e1f8bb5cd890\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-717156"
	Jun 17 11:55:50 kubernetes-upgrade-717156 kubelet[2331]: I0617 11:55:50.034318    2331 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/043b7d34a45c3a903d70e24da5de6728-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-717156\" (UID: \"043b7d34a45c3a903d70e24da5de6728\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-717156"
	Jun 17 11:55:50 kubernetes-upgrade-717156 kubelet[2331]: I0617 11:55:50.034373    2331 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/043b7d34a45c3a903d70e24da5de6728-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-717156\" (UID: \"043b7d34a45c3a903d70e24da5de6728\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-717156"
	Jun 17 11:55:50 kubernetes-upgrade-717156 kubelet[2331]: I0617 11:55:50.035499    2331 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-717156"
	Jun 17 11:55:50 kubernetes-upgrade-717156 kubelet[2331]: E0617 11:55:50.036354    2331 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.236:8443: connect: connection refused" node="kubernetes-upgrade-717156"
	Jun 17 11:55:50 kubernetes-upgrade-717156 kubelet[2331]: I0617 11:55:50.207958    2331 scope.go:117] "RemoveContainer" containerID="bcd777136f2b5925d7e9e84d688664a89943e8acd7e95e7644411fc7d2fcd989"
	Jun 17 11:55:50 kubernetes-upgrade-717156 kubelet[2331]: I0617 11:55:50.209117    2331 scope.go:117] "RemoveContainer" containerID="2193ff27c0c6a22099fd14a498d3b31d580ad03227a07f5597e6afce87eec5f0"
	Jun 17 11:55:50 kubernetes-upgrade-717156 kubelet[2331]: I0617 11:55:50.210305    2331 scope.go:117] "RemoveContainer" containerID="f4a9a4a425aec782fb37181c48421a914f923c64f241ff9c7445b88ae92d1480"
	Jun 17 11:55:50 kubernetes-upgrade-717156 kubelet[2331]: I0617 11:55:50.210425    2331 scope.go:117] "RemoveContainer" containerID="4cbd627cfdf11348a1b66dc236a7f39d02c5c5d9cfb4e763e87c7e037f5f19ea"
	Jun 17 11:55:50 kubernetes-upgrade-717156 kubelet[2331]: E0617 11:55:50.338510    2331 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-717156?timeout=10s\": dial tcp 192.168.50.236:8443: connect: connection refused" interval="800ms"
	Jun 17 11:55:50 kubernetes-upgrade-717156 kubelet[2331]: I0617 11:55:50.438019    2331 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-717156"
	Jun 17 11:55:50 kubernetes-upgrade-717156 kubelet[2331]: E0617 11:55:50.438809    2331 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.236:8443: connect: connection refused" node="kubernetes-upgrade-717156"
	Jun 17 11:55:51 kubernetes-upgrade-717156 kubelet[2331]: I0617 11:55:51.240112    2331 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-717156"
	Jun 17 11:55:54 kubernetes-upgrade-717156 kubelet[2331]: I0617 11:55:54.067981    2331 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-717156"
	Jun 17 11:55:54 kubernetes-upgrade-717156 kubelet[2331]: I0617 11:55:54.068422    2331 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-717156"
	Jun 17 11:55:54 kubernetes-upgrade-717156 kubelet[2331]: I0617 11:55:54.719831    2331 apiserver.go:52] "Watching apiserver"
	Jun 17 11:55:54 kubernetes-upgrade-717156 kubelet[2331]: I0617 11:55:54.732470    2331 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	

-- /stdout --
** stderr ** 
	E0617 11:55:57.030923  164114 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19084-112967/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
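The "bufio.Scanner: token too long" failure above is the standard Go error raised when a single line exceeds the scanner's maximum token size (bufio.MaxScanTokenSize, 64 KiB by default). A minimal sketch of reading such a file with an enlarged scanner buffer, purely for illustration and not minikube's actual logs.go code:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Illustrative path; any file containing a very long line will do.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default limit is bufio.MaxScanTokenSize (64 KiB); a longer line
		// makes Scan fail with "bufio.Scanner: token too long". Supplying a
		// larger buffer raises the limit (1 MiB here).
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)

		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}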
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-717156 -n kubernetes-upgrade-717156
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-717156 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-717156 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-717156 describe pod storage-provisioner: exit status 1 (61.438676ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-717156 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-717156" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-717156
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-717156: (1.098508531s)
--- FAIL: TestKubernetesUpgrade (371.15s)
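Several of the kubelet messages in the log above fail with "dial tcp 192.168.50.236:8443: connect: connection refused" while the API server is restarting during the upgrade. A tiny connectivity probe for that endpoint, shown only as an illustrative diagnostic sketch (the address is the one reported by this particular run, not a fixed value):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Endpoint taken from the kubelet errors above; adjust per cluster.
		addr := "192.168.50.236:8443"
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			// While the control plane is coming back up this prints
			// "connect: connection refused", matching the kubelet logs.
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}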

x
+
TestPause/serial/SecondStartNoReconfiguration (64.26s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-475894 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-475894 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m0.165010534s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-475894] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19084
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-475894" primary control-plane node in "pause-475894" cluster
	* Updating the running kvm2 "pause-475894" VM ...
	* Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-475894" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I0617 11:48:43.299163  158739 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:48:43.299427  158739 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:48:43.299436  158739 out.go:304] Setting ErrFile to fd 2...
	I0617 11:48:43.299441  158739 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:48:43.299672  158739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:48:43.300232  158739 out.go:298] Setting JSON to false
	I0617 11:48:43.301188  158739 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":5470,"bootTime":1718619453,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0617 11:48:43.301265  158739 start.go:139] virtualization: kvm guest
	I0617 11:48:43.303349  158739 out.go:177] * [pause-475894] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0617 11:48:43.305908  158739 notify.go:220] Checking for updates...
	I0617 11:48:43.307977  158739 out.go:177]   - MINIKUBE_LOCATION=19084
	I0617 11:48:43.309301  158739 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 11:48:43.310572  158739 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 11:48:43.311819  158739 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 11:48:43.312983  158739 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0617 11:48:43.317717  158739 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 11:48:43.319816  158739 config.go:182] Loaded profile config "pause-475894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:48:43.320412  158739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:48:43.320475  158739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:48:43.338423  158739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38005
	I0617 11:48:43.339033  158739 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:48:43.339724  158739 main.go:141] libmachine: Using API Version  1
	I0617 11:48:43.339752  158739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:48:43.340188  158739 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:48:43.340399  158739 main.go:141] libmachine: (pause-475894) Calling .DriverName
	I0617 11:48:43.340677  158739 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 11:48:43.341053  158739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:48:43.341101  158739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:48:43.364026  158739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43029
	I0617 11:48:43.365192  158739 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:48:43.365813  158739 main.go:141] libmachine: Using API Version  1
	I0617 11:48:43.365836  158739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:48:43.366495  158739 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:48:43.366742  158739 main.go:141] libmachine: (pause-475894) Calling .DriverName
	I0617 11:48:43.410603  158739 out.go:177] * Using the kvm2 driver based on existing profile
	I0617 11:48:43.412106  158739 start.go:297] selected driver: kvm2
	I0617 11:48:43.412133  158739 start.go:901] validating driver "kvm2" against &{Name:pause-475894 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.30.1 ClusterName:pause-475894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.122 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-dev
ice-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:48:43.412338  158739 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 11:48:43.412818  158739 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:48:43.412910  158739 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19084-112967/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0617 11:48:43.430429  158739 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0617 11:48:43.431478  158739 cni.go:84] Creating CNI manager for ""
	I0617 11:48:43.431502  158739 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 11:48:43.431584  158739 start.go:340] cluster config:
	{Name:pause-475894 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:pause-475894 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.122 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:
false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:48:43.431769  158739 iso.go:125] acquiring lock: {Name:mk4a199ad46ed9ee04de7b54caf7cc64218fe80c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:48:43.433857  158739 out.go:177] * Starting "pause-475894" primary control-plane node in "pause-475894" cluster
	I0617 11:48:43.435370  158739 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 11:48:43.435427  158739 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0617 11:48:43.435442  158739 cache.go:56] Caching tarball of preloaded images
	I0617 11:48:43.435559  158739 preload.go:173] Found /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0617 11:48:43.435574  158739 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0617 11:48:43.435761  158739 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/pause-475894/config.json ...
	I0617 11:48:43.436026  158739 start.go:360] acquireMachinesLock for pause-475894: {Name:mk519b8956d160a9d2b042f25b899a5ee0efa72e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 11:49:09.084523  158739 start.go:364] duration metric: took 25.648464369s to acquireMachinesLock for "pause-475894"
	I0617 11:49:09.084577  158739 start.go:96] Skipping create...Using existing machine configuration
	I0617 11:49:09.084583  158739 fix.go:54] fixHost starting: 
	I0617 11:49:09.085036  158739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:49:09.085092  158739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:49:09.101986  158739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35355
	I0617 11:49:09.102392  158739 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:49:09.102893  158739 main.go:141] libmachine: Using API Version  1
	I0617 11:49:09.102919  158739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:49:09.103256  158739 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:49:09.103471  158739 main.go:141] libmachine: (pause-475894) Calling .DriverName
	I0617 11:49:09.103672  158739 main.go:141] libmachine: (pause-475894) Calling .GetState
	I0617 11:49:09.105117  158739 fix.go:112] recreateIfNeeded on pause-475894: state=Running err=<nil>
	W0617 11:49:09.105137  158739 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 11:49:09.107327  158739 out.go:177] * Updating the running kvm2 "pause-475894" VM ...
	I0617 11:49:09.108705  158739 machine.go:94] provisionDockerMachine start ...
	I0617 11:49:09.108727  158739 main.go:141] libmachine: (pause-475894) Calling .DriverName
	I0617 11:49:09.108960  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHHostname
	I0617 11:49:09.111743  158739 main.go:141] libmachine: (pause-475894) DBG | domain pause-475894 has defined MAC address 52:54:00:0a:cd:2d in network mk-pause-475894
	I0617 11:49:09.112254  158739 main.go:141] libmachine: (pause-475894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:cd:2d", ip: ""} in network mk-pause-475894: {Iface:virbr2 ExpiryTime:2024-06-17 12:47:56 +0000 UTC Type:0 Mac:52:54:00:0a:cd:2d Iaid: IPaddr:192.168.50.122 Prefix:24 Hostname:pause-475894 Clientid:01:52:54:00:0a:cd:2d}
	I0617 11:49:09.112286  158739 main.go:141] libmachine: (pause-475894) DBG | domain pause-475894 has defined IP address 192.168.50.122 and MAC address 52:54:00:0a:cd:2d in network mk-pause-475894
	I0617 11:49:09.112481  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHPort
	I0617 11:49:09.112657  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHKeyPath
	I0617 11:49:09.112840  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHKeyPath
	I0617 11:49:09.112995  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHUsername
	I0617 11:49:09.113196  158739 main.go:141] libmachine: Using SSH client type: native
	I0617 11:49:09.113412  158739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.122 22 <nil> <nil>}
	I0617 11:49:09.113424  158739 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 11:49:09.220585  158739 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-475894
	
	I0617 11:49:09.220613  158739 main.go:141] libmachine: (pause-475894) Calling .GetMachineName
	I0617 11:49:09.220866  158739 buildroot.go:166] provisioning hostname "pause-475894"
	I0617 11:49:09.220898  158739 main.go:141] libmachine: (pause-475894) Calling .GetMachineName
	I0617 11:49:09.221184  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHHostname
	I0617 11:49:09.223958  158739 main.go:141] libmachine: (pause-475894) DBG | domain pause-475894 has defined MAC address 52:54:00:0a:cd:2d in network mk-pause-475894
	I0617 11:49:09.224363  158739 main.go:141] libmachine: (pause-475894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:cd:2d", ip: ""} in network mk-pause-475894: {Iface:virbr2 ExpiryTime:2024-06-17 12:47:56 +0000 UTC Type:0 Mac:52:54:00:0a:cd:2d Iaid: IPaddr:192.168.50.122 Prefix:24 Hostname:pause-475894 Clientid:01:52:54:00:0a:cd:2d}
	I0617 11:49:09.224402  158739 main.go:141] libmachine: (pause-475894) DBG | domain pause-475894 has defined IP address 192.168.50.122 and MAC address 52:54:00:0a:cd:2d in network mk-pause-475894
	I0617 11:49:09.224599  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHPort
	I0617 11:49:09.224848  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHKeyPath
	I0617 11:49:09.225052  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHKeyPath
	I0617 11:49:09.225223  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHUsername
	I0617 11:49:09.225394  158739 main.go:141] libmachine: Using SSH client type: native
	I0617 11:49:09.225572  158739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.122 22 <nil> <nil>}
	I0617 11:49:09.225587  158739 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-475894 && echo "pause-475894" | sudo tee /etc/hostname
	I0617 11:49:09.352760  158739 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-475894
	
	I0617 11:49:09.352855  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHHostname
	I0617 11:49:09.356497  158739 main.go:141] libmachine: (pause-475894) DBG | domain pause-475894 has defined MAC address 52:54:00:0a:cd:2d in network mk-pause-475894
	I0617 11:49:09.356900  158739 main.go:141] libmachine: (pause-475894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:cd:2d", ip: ""} in network mk-pause-475894: {Iface:virbr2 ExpiryTime:2024-06-17 12:47:56 +0000 UTC Type:0 Mac:52:54:00:0a:cd:2d Iaid: IPaddr:192.168.50.122 Prefix:24 Hostname:pause-475894 Clientid:01:52:54:00:0a:cd:2d}
	I0617 11:49:09.356938  158739 main.go:141] libmachine: (pause-475894) DBG | domain pause-475894 has defined IP address 192.168.50.122 and MAC address 52:54:00:0a:cd:2d in network mk-pause-475894
	I0617 11:49:09.357386  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHPort
	I0617 11:49:09.357619  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHKeyPath
	I0617 11:49:09.357832  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHKeyPath
	I0617 11:49:09.357993  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHUsername
	I0617 11:49:09.358184  158739 main.go:141] libmachine: Using SSH client type: native
	I0617 11:49:09.358417  158739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.122 22 <nil> <nil>}
	I0617 11:49:09.358443  158739 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-475894' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-475894/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-475894' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 11:49:09.469188  158739 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 11:49:09.469223  158739 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 11:49:09.469248  158739 buildroot.go:174] setting up certificates
	I0617 11:49:09.469261  158739 provision.go:84] configureAuth start
	I0617 11:49:09.469278  158739 main.go:141] libmachine: (pause-475894) Calling .GetMachineName
	I0617 11:49:09.469619  158739 main.go:141] libmachine: (pause-475894) Calling .GetIP
	I0617 11:49:09.472686  158739 main.go:141] libmachine: (pause-475894) DBG | domain pause-475894 has defined MAC address 52:54:00:0a:cd:2d in network mk-pause-475894
	I0617 11:49:09.473085  158739 main.go:141] libmachine: (pause-475894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:cd:2d", ip: ""} in network mk-pause-475894: {Iface:virbr2 ExpiryTime:2024-06-17 12:47:56 +0000 UTC Type:0 Mac:52:54:00:0a:cd:2d Iaid: IPaddr:192.168.50.122 Prefix:24 Hostname:pause-475894 Clientid:01:52:54:00:0a:cd:2d}
	I0617 11:49:09.473116  158739 main.go:141] libmachine: (pause-475894) DBG | domain pause-475894 has defined IP address 192.168.50.122 and MAC address 52:54:00:0a:cd:2d in network mk-pause-475894
	I0617 11:49:09.473261  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHHostname
	I0617 11:49:09.475779  158739 main.go:141] libmachine: (pause-475894) DBG | domain pause-475894 has defined MAC address 52:54:00:0a:cd:2d in network mk-pause-475894
	I0617 11:49:09.476215  158739 main.go:141] libmachine: (pause-475894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:cd:2d", ip: ""} in network mk-pause-475894: {Iface:virbr2 ExpiryTime:2024-06-17 12:47:56 +0000 UTC Type:0 Mac:52:54:00:0a:cd:2d Iaid: IPaddr:192.168.50.122 Prefix:24 Hostname:pause-475894 Clientid:01:52:54:00:0a:cd:2d}
	I0617 11:49:09.476242  158739 main.go:141] libmachine: (pause-475894) DBG | domain pause-475894 has defined IP address 192.168.50.122 and MAC address 52:54:00:0a:cd:2d in network mk-pause-475894
	I0617 11:49:09.476387  158739 provision.go:143] copyHostCerts
	I0617 11:49:09.476467  158739 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 11:49:09.476482  158739 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 11:49:09.476550  158739 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 11:49:09.476670  158739 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 11:49:09.476686  158739 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 11:49:09.476720  158739 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 11:49:09.476825  158739 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 11:49:09.476839  158739 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 11:49:09.476869  158739 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 11:49:09.476937  158739 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.pause-475894 san=[127.0.0.1 192.168.50.122 localhost minikube pause-475894]
	I0617 11:49:09.626185  158739 provision.go:177] copyRemoteCerts
	I0617 11:49:09.626269  158739 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 11:49:09.626317  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHHostname
	I0617 11:49:09.629433  158739 main.go:141] libmachine: (pause-475894) DBG | domain pause-475894 has defined MAC address 52:54:00:0a:cd:2d in network mk-pause-475894
	I0617 11:49:09.629848  158739 main.go:141] libmachine: (pause-475894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:cd:2d", ip: ""} in network mk-pause-475894: {Iface:virbr2 ExpiryTime:2024-06-17 12:47:56 +0000 UTC Type:0 Mac:52:54:00:0a:cd:2d Iaid: IPaddr:192.168.50.122 Prefix:24 Hostname:pause-475894 Clientid:01:52:54:00:0a:cd:2d}
	I0617 11:49:09.629876  158739 main.go:141] libmachine: (pause-475894) DBG | domain pause-475894 has defined IP address 192.168.50.122 and MAC address 52:54:00:0a:cd:2d in network mk-pause-475894
	I0617 11:49:09.630102  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHPort
	I0617 11:49:09.630324  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHKeyPath
	I0617 11:49:09.630511  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHUsername
	I0617 11:49:09.630689  158739 sshutil.go:53] new ssh client: &{IP:192.168.50.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/pause-475894/id_rsa Username:docker}
	I0617 11:49:09.719314  158739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0617 11:49:09.754047  158739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0617 11:49:09.783110  158739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 11:49:09.811644  158739 provision.go:87] duration metric: took 342.363328ms to configureAuth
	I0617 11:49:09.811677  158739 buildroot.go:189] setting minikube options for container-runtime
	I0617 11:49:09.811950  158739 config.go:182] Loaded profile config "pause-475894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:49:09.812025  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHHostname
	I0617 11:49:09.815165  158739 main.go:141] libmachine: (pause-475894) DBG | domain pause-475894 has defined MAC address 52:54:00:0a:cd:2d in network mk-pause-475894
	I0617 11:49:09.815688  158739 main.go:141] libmachine: (pause-475894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:cd:2d", ip: ""} in network mk-pause-475894: {Iface:virbr2 ExpiryTime:2024-06-17 12:47:56 +0000 UTC Type:0 Mac:52:54:00:0a:cd:2d Iaid: IPaddr:192.168.50.122 Prefix:24 Hostname:pause-475894 Clientid:01:52:54:00:0a:cd:2d}
	I0617 11:49:09.815717  158739 main.go:141] libmachine: (pause-475894) DBG | domain pause-475894 has defined IP address 192.168.50.122 and MAC address 52:54:00:0a:cd:2d in network mk-pause-475894
	I0617 11:49:09.816050  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHPort
	I0617 11:49:09.816320  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHKeyPath
	I0617 11:49:09.816518  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHKeyPath
	I0617 11:49:09.816691  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHUsername
	I0617 11:49:09.816893  158739 main.go:141] libmachine: Using SSH client type: native
	I0617 11:49:09.817103  158739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.122 22 <nil> <nil>}
	I0617 11:49:09.817122  158739 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 11:49:15.348415  158739 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 11:49:15.348445  158739 machine.go:97] duration metric: took 6.239721676s to provisionDockerMachine
	I0617 11:49:15.348458  158739 start.go:293] postStartSetup for "pause-475894" (driver="kvm2")
	I0617 11:49:15.348468  158739 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 11:49:15.348490  158739 main.go:141] libmachine: (pause-475894) Calling .DriverName
	I0617 11:49:15.348888  158739 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 11:49:15.348922  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHHostname
	I0617 11:49:15.351423  158739 main.go:141] libmachine: (pause-475894) DBG | domain pause-475894 has defined MAC address 52:54:00:0a:cd:2d in network mk-pause-475894
	I0617 11:49:15.351813  158739 main.go:141] libmachine: (pause-475894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:cd:2d", ip: ""} in network mk-pause-475894: {Iface:virbr2 ExpiryTime:2024-06-17 12:47:56 +0000 UTC Type:0 Mac:52:54:00:0a:cd:2d Iaid: IPaddr:192.168.50.122 Prefix:24 Hostname:pause-475894 Clientid:01:52:54:00:0a:cd:2d}
	I0617 11:49:15.351846  158739 main.go:141] libmachine: (pause-475894) DBG | domain pause-475894 has defined IP address 192.168.50.122 and MAC address 52:54:00:0a:cd:2d in network mk-pause-475894
	I0617 11:49:15.351953  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHPort
	I0617 11:49:15.352142  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHKeyPath
	I0617 11:49:15.352316  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHUsername
	I0617 11:49:15.352470  158739 sshutil.go:53] new ssh client: &{IP:192.168.50.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/pause-475894/id_rsa Username:docker}
	I0617 11:49:15.434102  158739 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 11:49:15.438275  158739 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 11:49:15.438298  158739 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 11:49:15.438367  158739 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 11:49:15.438457  158739 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 11:49:15.438562  158739 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 11:49:15.448011  158739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 11:49:15.472927  158739 start.go:296] duration metric: took 124.457278ms for postStartSetup
	I0617 11:49:15.472966  158739 fix.go:56] duration metric: took 6.388381336s for fixHost
	I0617 11:49:15.472993  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHHostname
	I0617 11:49:15.475529  158739 main.go:141] libmachine: (pause-475894) DBG | domain pause-475894 has defined MAC address 52:54:00:0a:cd:2d in network mk-pause-475894
	I0617 11:49:15.475984  158739 main.go:141] libmachine: (pause-475894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:cd:2d", ip: ""} in network mk-pause-475894: {Iface:virbr2 ExpiryTime:2024-06-17 12:47:56 +0000 UTC Type:0 Mac:52:54:00:0a:cd:2d Iaid: IPaddr:192.168.50.122 Prefix:24 Hostname:pause-475894 Clientid:01:52:54:00:0a:cd:2d}
	I0617 11:49:15.476018  158739 main.go:141] libmachine: (pause-475894) DBG | domain pause-475894 has defined IP address 192.168.50.122 and MAC address 52:54:00:0a:cd:2d in network mk-pause-475894
	I0617 11:49:15.476133  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHPort
	I0617 11:49:15.476293  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHKeyPath
	I0617 11:49:15.476437  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHKeyPath
	I0617 11:49:15.476628  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHUsername
	I0617 11:49:15.476842  158739 main.go:141] libmachine: Using SSH client type: native
	I0617 11:49:15.476996  158739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.122 22 <nil> <nil>}
	I0617 11:49:15.477012  158739 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0617 11:49:15.580090  158739 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718624955.563570012
	
	I0617 11:49:15.580116  158739 fix.go:216] guest clock: 1718624955.563570012
	I0617 11:49:15.580125  158739 fix.go:229] Guest: 2024-06-17 11:49:15.563570012 +0000 UTC Remote: 2024-06-17 11:49:15.472971181 +0000 UTC m=+32.225982182 (delta=90.598831ms)
	I0617 11:49:15.580169  158739 fix.go:200] guest clock delta is within tolerance: 90.598831ms
	I0617 11:49:15.580174  158739 start.go:83] releasing machines lock for "pause-475894", held for 6.495628915s
	I0617 11:49:15.580193  158739 main.go:141] libmachine: (pause-475894) Calling .DriverName
	I0617 11:49:15.580476  158739 main.go:141] libmachine: (pause-475894) Calling .GetIP
	I0617 11:49:15.583038  158739 main.go:141] libmachine: (pause-475894) DBG | domain pause-475894 has defined MAC address 52:54:00:0a:cd:2d in network mk-pause-475894
	I0617 11:49:15.583390  158739 main.go:141] libmachine: (pause-475894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:cd:2d", ip: ""} in network mk-pause-475894: {Iface:virbr2 ExpiryTime:2024-06-17 12:47:56 +0000 UTC Type:0 Mac:52:54:00:0a:cd:2d Iaid: IPaddr:192.168.50.122 Prefix:24 Hostname:pause-475894 Clientid:01:52:54:00:0a:cd:2d}
	I0617 11:49:15.583429  158739 main.go:141] libmachine: (pause-475894) DBG | domain pause-475894 has defined IP address 192.168.50.122 and MAC address 52:54:00:0a:cd:2d in network mk-pause-475894
	I0617 11:49:15.583546  158739 main.go:141] libmachine: (pause-475894) Calling .DriverName
	I0617 11:49:15.584071  158739 main.go:141] libmachine: (pause-475894) Calling .DriverName
	I0617 11:49:15.584284  158739 main.go:141] libmachine: (pause-475894) Calling .DriverName
	I0617 11:49:15.584388  158739 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 11:49:15.584446  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHHostname
	I0617 11:49:15.584546  158739 ssh_runner.go:195] Run: cat /version.json
	I0617 11:49:15.584575  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHHostname
	I0617 11:49:15.587008  158739 main.go:141] libmachine: (pause-475894) DBG | domain pause-475894 has defined MAC address 52:54:00:0a:cd:2d in network mk-pause-475894
	I0617 11:49:15.587303  158739 main.go:141] libmachine: (pause-475894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:cd:2d", ip: ""} in network mk-pause-475894: {Iface:virbr2 ExpiryTime:2024-06-17 12:47:56 +0000 UTC Type:0 Mac:52:54:00:0a:cd:2d Iaid: IPaddr:192.168.50.122 Prefix:24 Hostname:pause-475894 Clientid:01:52:54:00:0a:cd:2d}
	I0617 11:49:15.587327  158739 main.go:141] libmachine: (pause-475894) DBG | domain pause-475894 has defined IP address 192.168.50.122 and MAC address 52:54:00:0a:cd:2d in network mk-pause-475894
	I0617 11:49:15.587346  158739 main.go:141] libmachine: (pause-475894) DBG | domain pause-475894 has defined MAC address 52:54:00:0a:cd:2d in network mk-pause-475894
	I0617 11:49:15.587551  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHPort
	I0617 11:49:15.587759  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHKeyPath
	I0617 11:49:15.587831  158739 main.go:141] libmachine: (pause-475894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:cd:2d", ip: ""} in network mk-pause-475894: {Iface:virbr2 ExpiryTime:2024-06-17 12:47:56 +0000 UTC Type:0 Mac:52:54:00:0a:cd:2d Iaid: IPaddr:192.168.50.122 Prefix:24 Hostname:pause-475894 Clientid:01:52:54:00:0a:cd:2d}
	I0617 11:49:15.587858  158739 main.go:141] libmachine: (pause-475894) DBG | domain pause-475894 has defined IP address 192.168.50.122 and MAC address 52:54:00:0a:cd:2d in network mk-pause-475894
	I0617 11:49:15.587933  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHUsername
	I0617 11:49:15.588038  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHPort
	I0617 11:49:15.588106  158739 sshutil.go:53] new ssh client: &{IP:192.168.50.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/pause-475894/id_rsa Username:docker}
	I0617 11:49:15.588184  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHKeyPath
	I0617 11:49:15.588318  158739 main.go:141] libmachine: (pause-475894) Calling .GetSSHUsername
	I0617 11:49:15.588483  158739 sshutil.go:53] new ssh client: &{IP:192.168.50.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/pause-475894/id_rsa Username:docker}
	I0617 11:49:15.664858  158739 ssh_runner.go:195] Run: systemctl --version
	I0617 11:49:15.687375  158739 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 11:49:15.850212  158739 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 11:49:15.867407  158739 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 11:49:15.867514  158739 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 11:49:15.879478  158739 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0617 11:49:15.879507  158739 start.go:494] detecting cgroup driver to use...
	I0617 11:49:15.879569  158739 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 11:49:15.901202  158739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 11:49:15.917746  158739 docker.go:217] disabling cri-docker service (if available) ...
	I0617 11:49:15.917803  158739 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 11:49:15.933930  158739 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 11:49:15.949195  158739 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 11:49:16.124373  158739 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 11:49:16.321939  158739 docker.go:233] disabling docker service ...
	I0617 11:49:16.322012  158739 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 11:49:16.400722  158739 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 11:49:16.496098  158739 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 11:49:16.763382  158739 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 11:49:17.043665  158739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 11:49:17.081704  158739 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 11:49:17.128429  158739 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0617 11:49:17.128525  158739 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:49:17.142654  158739 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 11:49:17.142778  158739 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:49:17.156374  158739 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:49:17.174375  158739 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:49:17.186421  158739 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 11:49:17.200575  158739 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:49:17.212954  158739 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:49:17.224371  158739 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:49:17.241263  158739 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 11:49:17.262048  158739 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 11:49:17.315447  158739 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:49:17.563402  158739 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0617 11:49:18.073728  158739 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 11:49:18.073813  158739 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 11:49:18.079238  158739 start.go:562] Will wait 60s for crictl version
	I0617 11:49:18.079298  158739 ssh_runner.go:195] Run: which crictl
	I0617 11:49:18.084120  158739 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 11:49:18.245069  158739 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 11:49:18.245155  158739 ssh_runner.go:195] Run: crio --version
	I0617 11:49:18.497735  158739 ssh_runner.go:195] Run: crio --version
	I0617 11:49:18.583533  158739 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0617 11:49:18.584898  158739 main.go:141] libmachine: (pause-475894) Calling .GetIP
	I0617 11:49:18.587867  158739 main.go:141] libmachine: (pause-475894) DBG | domain pause-475894 has defined MAC address 52:54:00:0a:cd:2d in network mk-pause-475894
	I0617 11:49:18.588290  158739 main.go:141] libmachine: (pause-475894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:cd:2d", ip: ""} in network mk-pause-475894: {Iface:virbr2 ExpiryTime:2024-06-17 12:47:56 +0000 UTC Type:0 Mac:52:54:00:0a:cd:2d Iaid: IPaddr:192.168.50.122 Prefix:24 Hostname:pause-475894 Clientid:01:52:54:00:0a:cd:2d}
	I0617 11:49:18.588320  158739 main.go:141] libmachine: (pause-475894) DBG | domain pause-475894 has defined IP address 192.168.50.122 and MAC address 52:54:00:0a:cd:2d in network mk-pause-475894
	I0617 11:49:18.588574  158739 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0617 11:49:18.593106  158739 kubeadm.go:877] updating cluster {Name:pause-475894 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1
ClusterName:pause-475894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.122 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:fals
e olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 11:49:18.593437  158739 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 11:49:18.593549  158739 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 11:49:18.648253  158739 crio.go:514] all images are preloaded for cri-o runtime.
	I0617 11:49:18.648276  158739 crio.go:433] Images already preloaded, skipping extraction
	I0617 11:49:18.648322  158739 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 11:49:18.710949  158739 crio.go:514] all images are preloaded for cri-o runtime.
	I0617 11:49:18.710976  158739 cache_images.go:84] Images are preloaded, skipping loading
	I0617 11:49:18.710984  158739 kubeadm.go:928] updating node { 192.168.50.122 8443 v1.30.1 crio true true} ...
	I0617 11:49:18.711123  158739 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-475894 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.122
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:pause-475894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 11:49:18.711216  158739 ssh_runner.go:195] Run: crio config
	I0617 11:49:18.776859  158739 cni.go:84] Creating CNI manager for ""
	I0617 11:49:18.776887  158739 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 11:49:18.776903  158739 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 11:49:18.776936  158739 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.122 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-475894 NodeName:pause-475894 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.122"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.122 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0617 11:49:18.777112  158739 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.122
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-475894"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.122
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.122"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
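The kubeadm config written above is a single multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration). A small sketch, assuming gopkg.in/yaml.v3 is available, that splits such a file and prints each document's kind; this is only an inspection aid, not part of minikube's own code path:

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		if len(os.Args) < 2 {
			fmt.Fprintln(os.Stderr, "usage: kinds <kubeadm-config.yaml>")
			os.Exit(1)
		}
		f, err := os.Open(os.Args[1])
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			// Only the fields we care about; unknown keys are ignored.
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				fmt.Fprintln(os.Stderr, "parse error:", err)
				os.Exit(1)
			}
			fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
		}
	}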
	I0617 11:49:18.777190  158739 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0617 11:49:18.809479  158739 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 11:49:18.809559  158739 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 11:49:18.819624  158739 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0617 11:49:18.836343  158739 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 11:49:18.853490  158739 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0617 11:49:18.872148  158739 ssh_runner.go:195] Run: grep 192.168.50.122	control-plane.minikube.internal$ /etc/hosts
	I0617 11:49:18.876230  158739 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:49:19.015264  158739 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 11:49:19.030332  158739 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/pause-475894 for IP: 192.168.50.122
	I0617 11:49:19.030361  158739 certs.go:194] generating shared ca certs ...
	I0617 11:49:19.030382  158739 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:49:19.030538  158739 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 11:49:19.030589  158739 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 11:49:19.030600  158739 certs.go:256] generating profile certs ...
	I0617 11:49:19.030689  158739 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/pause-475894/client.key
	I0617 11:49:19.030780  158739 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/pause-475894/apiserver.key.9a124b7b
	I0617 11:49:19.030845  158739 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/pause-475894/proxy-client.key
	I0617 11:49:19.030961  158739 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 11:49:19.030990  158739 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 11:49:19.030997  158739 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 11:49:19.031018  158739 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 11:49:19.031043  158739 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 11:49:19.031069  158739 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 11:49:19.031107  158739 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 11:49:19.031727  158739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 11:49:19.057163  158739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 11:49:19.081076  158739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 11:49:19.105567  158739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 11:49:19.129910  158739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/pause-475894/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0617 11:49:19.153895  158739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/pause-475894/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0617 11:49:19.177522  158739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/pause-475894/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 11:49:19.202385  158739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/pause-475894/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0617 11:49:19.260234  158739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 11:49:19.284047  158739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 11:49:19.308294  158739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 11:49:19.332294  158739 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 11:49:19.348609  158739 ssh_runner.go:195] Run: openssl version
	I0617 11:49:19.354288  158739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 11:49:19.365622  158739 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 11:49:19.370152  158739 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 11:49:19.370219  158739 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 11:49:19.375787  158739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 11:49:19.385642  158739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 11:49:19.398047  158739 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:49:19.402909  158739 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:49:19.402961  158739 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:49:19.408469  158739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 11:49:19.418583  158739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 11:49:19.430183  158739 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 11:49:19.435266  158739 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 11:49:19.435322  158739 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 11:49:19.441303  158739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
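	The link targets created above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash names, which is how consumers of /etc/ssl/certs look a CA up. A hedged way to reproduce the hash for a given certificate on the node, assuming the pause-475894 profile and the paths shown in the log:

	    # Prints b5213941 for minikubeCA.pem in this run, matching the
	    # /etc/ssl/certs/b5213941.0 symlink created above.
	    out/minikube-linux-amd64 -p pause-475894 ssh "openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem"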
	I0617 11:49:19.451422  158739 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 11:49:19.456350  158739 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 11:49:19.462285  158739 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 11:49:19.468268  158739 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 11:49:19.474057  158739 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 11:49:19.480097  158739 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 11:49:19.486099  158739 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
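	The six openssl invocations above are expiry probes: "-checkend 86400" makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours), which is presumably how minikube decides whether control-plane certificates need regenerating before reuse. A hedged reproduction against one of the checked certificates, assuming the pause-475894 profile:

	    # -enddate prints the expiry date for context; -checkend 86400 succeeds
	    # only if the certificate is still valid 24 hours from now.
	    out/minikube-linux-amd64 -p pause-475894 ssh "sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt"
	    out/minikube-linux-amd64 -p pause-475894 ssh "sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo still-valid-for-24h"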
	I0617 11:49:19.492199  158739 kubeadm.go:391] StartCluster: {Name:pause-475894 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:pause-475894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.122 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false o
lm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:49:19.492308  158739 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 11:49:19.492347  158739 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 11:49:19.532874  158739 cri.go:89] found id: "b47ea14b20b2545c7063c0efde0b1693c228c5b8a0c5468109e340db79cbd18b"
	I0617 11:49:19.532898  158739 cri.go:89] found id: "50797d5733f6a349cbb691d979717866beb3e73a560d51d4df857c969b0db3a1"
	I0617 11:49:19.532903  158739 cri.go:89] found id: "e7d21c1e0a0850daab8094237a2bed1e59ce3f7d540d76a69c7405f5d603ef40"
	I0617 11:49:19.532907  158739 cri.go:89] found id: "35d99078f029ec2aa0d136fb2fd2494ac10bebfa02fe3f231f166c05a1ec6665"
	I0617 11:49:19.532910  158739 cri.go:89] found id: "cbca6f4efef12044d1eda46a9d9a73e2a2f2ce5910f123ee53ca237230019480"
	I0617 11:49:19.532915  158739 cri.go:89] found id: "4fe5cbc7009bd88a153d92ce34fc2267ee234d73b03bf648be4f82592c7a4277"
	I0617 11:49:19.532919  158739 cri.go:89] found id: ""
	I0617 11:49:19.532974  158739 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
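The container IDs found just before the output is truncated come from CRI-O, filtered by the kube-system namespace label. A hedged sketch of how to rerun the same queries by hand, assuming the pause-475894 profile is still up:

    # Mirrors the cri.go listing above; drop --quiet to also see names, images
    # and states. The runc call repeats the last command visible in the log.
    out/minikube-linux-amd64 -p pause-475894 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
    out/minikube-linux-amd64 -p pause-475894 ssh "sudo runc list -f json"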
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-475894 -n pause-475894
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-475894 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-475894 logs -n 25: (1.460362182s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-253383 sudo docker                         | cilium-253383             | jenkins | v1.33.1 | 17 Jun 24 11:47 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-253383 sudo                                | cilium-253383             | jenkins | v1.33.1 | 17 Jun 24 11:47 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-253383 sudo                                | cilium-253383             | jenkins | v1.33.1 | 17 Jun 24 11:47 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-253383 sudo cat                            | cilium-253383             | jenkins | v1.33.1 | 17 Jun 24 11:47 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-253383 sudo cat                            | cilium-253383             | jenkins | v1.33.1 | 17 Jun 24 11:47 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-253383 sudo                                | cilium-253383             | jenkins | v1.33.1 | 17 Jun 24 11:47 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-253383 sudo                                | cilium-253383             | jenkins | v1.33.1 | 17 Jun 24 11:47 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-253383 sudo                                | cilium-253383             | jenkins | v1.33.1 | 17 Jun 24 11:47 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-253383 sudo cat                            | cilium-253383             | jenkins | v1.33.1 | 17 Jun 24 11:47 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-253383 sudo cat                            | cilium-253383             | jenkins | v1.33.1 | 17 Jun 24 11:47 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-253383 sudo                                | cilium-253383             | jenkins | v1.33.1 | 17 Jun 24 11:47 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-253383 sudo                                | cilium-253383             | jenkins | v1.33.1 | 17 Jun 24 11:47 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-253383 sudo                                | cilium-253383             | jenkins | v1.33.1 | 17 Jun 24 11:47 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-253383 sudo find                           | cilium-253383             | jenkins | v1.33.1 | 17 Jun 24 11:47 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-253383 sudo crio                           | cilium-253383             | jenkins | v1.33.1 | 17 Jun 24 11:47 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-253383                                     | cilium-253383             | jenkins | v1.33.1 | 17 Jun 24 11:47 UTC | 17 Jun 24 11:47 UTC |
	| start   | -p cert-expiration-514753                            | cert-expiration-514753    | jenkins | v1.33.1 | 17 Jun 24 11:47 UTC | 17 Jun 24 11:48 UTC |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-846787                               | NoKubernetes-846787       | jenkins | v1.33.1 | 17 Jun 24 11:47 UTC | 17 Jun 24 11:48 UTC |
	|         | --no-kubernetes --driver=kvm2                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p running-upgrade-869541                            | running-upgrade-869541    | jenkins | v1.33.1 | 17 Jun 24 11:48 UTC | 17 Jun 24 11:49 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-846787                               | NoKubernetes-846787       | jenkins | v1.33.1 | 17 Jun 24 11:48 UTC | 17 Jun 24 11:48 UTC |
	| start   | -p NoKubernetes-846787                               | NoKubernetes-846787       | jenkins | v1.33.1 | 17 Jun 24 11:48 UTC | 17 Jun 24 11:49 UTC |
	|         | --no-kubernetes --driver=kvm2                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p pause-475894                                      | pause-475894              | jenkins | v1.33.1 | 17 Jun 24 11:48 UTC | 17 Jun 24 11:49 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-846787 sudo                          | NoKubernetes-846787       | jenkins | v1.33.1 | 17 Jun 24 11:49 UTC |                     |
	|         | systemctl is-active --quiet                          |                           |         |         |                     |                     |
	|         | service kubelet                                      |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-869541                            | running-upgrade-869541    | jenkins | v1.33.1 | 17 Jun 24 11:49 UTC | 17 Jun 24 11:49 UTC |
	| start   | -p force-systemd-flag-855883                         | force-systemd-flag-855883 | jenkins | v1.33.1 | 17 Jun 24 11:49 UTC |                     |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
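	The pause-475894 row above is the start command recorded for the profile under test; reconstructed from its Command and Args columns (so treat the exact flag order as an approximation), it amounts to:

	    out/minikube-linux-amd64 start -p pause-475894 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio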
	
	
	==> Last Start <==
	Log file created at: 2024/06/17 11:49:39
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0617 11:49:39.741194  159466 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:49:39.741304  159466 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:49:39.741315  159466 out.go:304] Setting ErrFile to fd 2...
	I0617 11:49:39.741322  159466 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:49:39.741501  159466 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:49:39.742067  159466 out.go:298] Setting JSON to false
	I0617 11:49:39.742968  159466 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":5527,"bootTime":1718619453,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0617 11:49:39.743071  159466 start.go:139] virtualization: kvm guest
	I0617 11:49:39.745148  159466 out.go:177] * [force-systemd-flag-855883] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0617 11:49:39.746516  159466 notify.go:220] Checking for updates...
	I0617 11:49:39.746522  159466 out.go:177]   - MINIKUBE_LOCATION=19084
	I0617 11:49:39.747957  159466 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 11:49:39.749258  159466 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 11:49:39.750422  159466 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 11:49:39.751634  159466 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0617 11:49:39.752896  159466 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 11:49:39.754518  159466 config.go:182] Loaded profile config "NoKubernetes-846787": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0617 11:49:39.754649  159466 config.go:182] Loaded profile config "cert-expiration-514753": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:49:39.754828  159466 config.go:182] Loaded profile config "pause-475894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:49:39.754931  159466 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 11:49:39.790559  159466 out.go:177] * Using the kvm2 driver based on user configuration
	I0617 11:49:39.791946  159466 start.go:297] selected driver: kvm2
	I0617 11:49:39.791966  159466 start.go:901] validating driver "kvm2" against <nil>
	I0617 11:49:39.791979  159466 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 11:49:39.792825  159466 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:49:39.792909  159466 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19084-112967/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0617 11:49:39.809220  159466 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0617 11:49:39.809283  159466 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 11:49:39.809512  159466 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0617 11:49:39.809579  159466 cni.go:84] Creating CNI manager for ""
	I0617 11:49:39.809606  159466 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 11:49:39.809614  159466 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0617 11:49:39.809703  159466 start.go:340] cluster config:
	{Name:force-systemd-flag-855883 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-flag-855883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:49:39.809879  159466 iso.go:125] acquiring lock: {Name:mk4a199ad46ed9ee04de7b54caf7cc64218fe80c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:49:39.812600  159466 out.go:177] * Starting "force-systemd-flag-855883" primary control-plane node in "force-systemd-flag-855883" cluster
	I0617 11:49:39.813890  159466 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 11:49:39.813938  159466 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0617 11:49:39.813952  159466 cache.go:56] Caching tarball of preloaded images
	I0617 11:49:39.814075  159466 preload.go:173] Found /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0617 11:49:39.814091  159466 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0617 11:49:39.814213  159466 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/force-systemd-flag-855883/config.json ...
	I0617 11:49:39.814237  159466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/force-systemd-flag-855883/config.json: {Name:mk36ec0b8edfeadf1d0e471ded5cf9f61f0ba805 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:49:39.814371  159466 start.go:360] acquireMachinesLock for force-systemd-flag-855883: {Name:mk519b8956d160a9d2b042f25b899a5ee0efa72e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 11:49:39.814407  159466 start.go:364] duration metric: took 17.701µs to acquireMachinesLock for "force-systemd-flag-855883"
	I0617 11:49:39.814424  159466 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-855883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-flag-855883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 11:49:39.814475  159466 start.go:125] createHost starting for "" (driver="kvm2")
	I0617 11:49:39.753135  158739 pod_ready.go:102] pod "kube-controller-manager-pause-475894" in "kube-system" namespace has status "Ready":"False"
	I0617 11:49:40.251640  158739 pod_ready.go:92] pod "kube-controller-manager-pause-475894" in "kube-system" namespace has status "Ready":"True"
	I0617 11:49:40.251666  158739 pod_ready.go:81] duration metric: took 2.506857229s for pod "kube-controller-manager-pause-475894" in "kube-system" namespace to be "Ready" ...
	I0617 11:49:40.251679  158739 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-shbhn" in "kube-system" namespace to be "Ready" ...
	I0617 11:49:40.259897  158739 pod_ready.go:92] pod "kube-proxy-shbhn" in "kube-system" namespace has status "Ready":"True"
	I0617 11:49:40.259920  158739 pod_ready.go:81] duration metric: took 8.233219ms for pod "kube-proxy-shbhn" in "kube-system" namespace to be "Ready" ...
	I0617 11:49:40.259932  158739 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-475894" in "kube-system" namespace to be "Ready" ...
	I0617 11:49:40.267452  158739 pod_ready.go:92] pod "kube-scheduler-pause-475894" in "kube-system" namespace has status "Ready":"True"
	I0617 11:49:40.267498  158739 pod_ready.go:81] duration metric: took 7.555176ms for pod "kube-scheduler-pause-475894" in "kube-system" namespace to be "Ready" ...
	I0617 11:49:40.267508  158739 pod_ready.go:38] duration metric: took 13.05444673s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 11:49:40.267543  158739 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0617 11:49:40.284180  158739 ops.go:34] apiserver oom_adj: -16
	I0617 11:49:40.284200  158739 kubeadm.go:591] duration metric: took 20.693730761s to restartPrimaryControlPlane
	I0617 11:49:40.284211  158739 kubeadm.go:393] duration metric: took 20.792021568s to StartCluster
	I0617 11:49:40.284251  158739 settings.go:142] acquiring lock: {Name:mkf6da6d5dcdf32cef469c2b75da17d11fa1e39e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:49:40.284328  158739 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 11:49:40.285703  158739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/kubeconfig: {Name:mkf81bd1831c0194f784e5c176b265c5061bea5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:49:40.285983  158739 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.122 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 11:49:40.288656  158739 out.go:177] * Verifying Kubernetes components...
	I0617 11:49:40.286085  158739 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0617 11:49:40.286292  158739 config.go:182] Loaded profile config "pause-475894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:49:40.290071  158739 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:49:40.291532  158739 out.go:177] * Enabled addons: 
	I0617 11:49:40.292895  158739 addons.go:510] duration metric: took 6.811292ms for enable addons: enabled=[]
	I0617 11:49:40.456990  158739 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 11:49:40.473526  158739 node_ready.go:35] waiting up to 6m0s for node "pause-475894" to be "Ready" ...
	I0617 11:49:40.476965  158739 node_ready.go:49] node "pause-475894" has status "Ready":"True"
	I0617 11:49:40.476988  158739 node_ready.go:38] duration metric: took 3.429477ms for node "pause-475894" to be "Ready" ...
	I0617 11:49:40.476996  158739 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 11:49:40.481886  158739 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ng69p" in "kube-system" namespace to be "Ready" ...
	I0617 11:49:40.535345  158739 pod_ready.go:92] pod "coredns-7db6d8ff4d-ng69p" in "kube-system" namespace has status "Ready":"True"
	I0617 11:49:40.535370  158739 pod_ready.go:81] duration metric: took 53.458175ms for pod "coredns-7db6d8ff4d-ng69p" in "kube-system" namespace to be "Ready" ...
	I0617 11:49:40.535382  158739 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-475894" in "kube-system" namespace to be "Ready" ...
	I0617 11:49:40.935938  158739 pod_ready.go:92] pod "etcd-pause-475894" in "kube-system" namespace has status "Ready":"True"
	I0617 11:49:40.935967  158739 pod_ready.go:81] duration metric: took 400.576064ms for pod "etcd-pause-475894" in "kube-system" namespace to be "Ready" ...
	I0617 11:49:40.935981  158739 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-475894" in "kube-system" namespace to be "Ready" ...
	I0617 11:49:41.336347  158739 pod_ready.go:92] pod "kube-apiserver-pause-475894" in "kube-system" namespace has status "Ready":"True"
	I0617 11:49:41.336373  158739 pod_ready.go:81] duration metric: took 400.383806ms for pod "kube-apiserver-pause-475894" in "kube-system" namespace to be "Ready" ...
	I0617 11:49:41.336383  158739 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-475894" in "kube-system" namespace to be "Ready" ...
	I0617 11:49:41.735912  158739 pod_ready.go:92] pod "kube-controller-manager-pause-475894" in "kube-system" namespace has status "Ready":"True"
	I0617 11:49:41.735938  158739 pod_ready.go:81] duration metric: took 399.549002ms for pod "kube-controller-manager-pause-475894" in "kube-system" namespace to be "Ready" ...
	I0617 11:49:41.735949  158739 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-shbhn" in "kube-system" namespace to be "Ready" ...
	I0617 11:49:42.136779  158739 pod_ready.go:92] pod "kube-proxy-shbhn" in "kube-system" namespace has status "Ready":"True"
	I0617 11:49:42.136802  158739 pod_ready.go:81] duration metric: took 400.845593ms for pod "kube-proxy-shbhn" in "kube-system" namespace to be "Ready" ...
	I0617 11:49:42.136813  158739 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-475894" in "kube-system" namespace to be "Ready" ...
	I0617 11:49:42.537869  158739 pod_ready.go:92] pod "kube-scheduler-pause-475894" in "kube-system" namespace has status "Ready":"True"
	I0617 11:49:42.537895  158739 pod_ready.go:81] duration metric: took 401.073261ms for pod "kube-scheduler-pause-475894" in "kube-system" namespace to be "Ready" ...
	I0617 11:49:42.537905  158739 pod_ready.go:38] duration metric: took 2.060899782s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 11:49:42.537920  158739 api_server.go:52] waiting for apiserver process to appear ...
	I0617 11:49:42.537964  158739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 11:49:42.555344  158739 api_server.go:72] duration metric: took 2.269324698s to wait for apiserver process to appear ...
	I0617 11:49:42.555369  158739 api_server.go:88] waiting for apiserver healthz status ...
	I0617 11:49:42.555386  158739 api_server.go:253] Checking apiserver healthz at https://192.168.50.122:8443/healthz ...
	I0617 11:49:42.559610  158739 api_server.go:279] https://192.168.50.122:8443/healthz returned 200:
	ok
	I0617 11:49:42.560726  158739 api_server.go:141] control plane version: v1.30.1
	I0617 11:49:42.560748  158739 api_server.go:131] duration metric: took 5.373321ms to wait for apiserver health ...
	I0617 11:49:42.560759  158739 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 11:49:42.739041  158739 system_pods.go:59] 6 kube-system pods found
	I0617 11:49:42.739071  158739 system_pods.go:61] "coredns-7db6d8ff4d-ng69p" [1f7df81e-d372-415e-a2ff-b6d968634f17] Running
	I0617 11:49:42.739078  158739 system_pods.go:61] "etcd-pause-475894" [f2d81816-c372-4ea1-a700-aeb85d2d2ee8] Running
	I0617 11:49:42.739084  158739 system_pods.go:61] "kube-apiserver-pause-475894" [7e42f231-419b-4a7d-808e-ac41dfe26e82] Running
	I0617 11:49:42.739089  158739 system_pods.go:61] "kube-controller-manager-pause-475894" [29b7523a-7d5c-4e04-a1d7-794e88dbc27a] Running
	I0617 11:49:42.739094  158739 system_pods.go:61] "kube-proxy-shbhn" [a369273f-a5a9-4d41-bacf-f6ba17fecc7f] Running
	I0617 11:49:42.739098  158739 system_pods.go:61] "kube-scheduler-pause-475894" [ee5c359b-8742-4d79-a423-d69adc0ab835] Running
	I0617 11:49:42.739109  158739 system_pods.go:74] duration metric: took 178.340844ms to wait for pod list to return data ...
	I0617 11:49:42.739119  158739 default_sa.go:34] waiting for default service account to be created ...
	I0617 11:49:42.936041  158739 default_sa.go:45] found service account: "default"
	I0617 11:49:42.936076  158739 default_sa.go:55] duration metric: took 196.944793ms for default service account to be created ...
	I0617 11:49:42.936090  158739 system_pods.go:116] waiting for k8s-apps to be running ...
	I0617 11:49:43.138639  158739 system_pods.go:86] 6 kube-system pods found
	I0617 11:49:43.138666  158739 system_pods.go:89] "coredns-7db6d8ff4d-ng69p" [1f7df81e-d372-415e-a2ff-b6d968634f17] Running
	I0617 11:49:43.138671  158739 system_pods.go:89] "etcd-pause-475894" [f2d81816-c372-4ea1-a700-aeb85d2d2ee8] Running
	I0617 11:49:43.138675  158739 system_pods.go:89] "kube-apiserver-pause-475894" [7e42f231-419b-4a7d-808e-ac41dfe26e82] Running
	I0617 11:49:43.138679  158739 system_pods.go:89] "kube-controller-manager-pause-475894" [29b7523a-7d5c-4e04-a1d7-794e88dbc27a] Running
	I0617 11:49:43.138683  158739 system_pods.go:89] "kube-proxy-shbhn" [a369273f-a5a9-4d41-bacf-f6ba17fecc7f] Running
	I0617 11:49:43.138687  158739 system_pods.go:89] "kube-scheduler-pause-475894" [ee5c359b-8742-4d79-a423-d69adc0ab835] Running
	I0617 11:49:43.138693  158739 system_pods.go:126] duration metric: took 202.597804ms to wait for k8s-apps to be running ...
	I0617 11:49:43.138714  158739 system_svc.go:44] waiting for kubelet service to be running ....
	I0617 11:49:43.138776  158739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:49:43.154998  158739 system_svc.go:56] duration metric: took 16.273306ms WaitForService to wait for kubelet
	I0617 11:49:43.155031  158739 kubeadm.go:576] duration metric: took 2.869014489s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 11:49:43.155056  158739 node_conditions.go:102] verifying NodePressure condition ...
	I0617 11:49:43.335914  158739 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 11:49:43.335954  158739 node_conditions.go:123] node cpu capacity is 2
	I0617 11:49:43.335971  158739 node_conditions.go:105] duration metric: took 180.907499ms to run NodePressure ...
	I0617 11:49:43.335987  158739 start.go:240] waiting for startup goroutines ...
	I0617 11:49:43.335998  158739 start.go:245] waiting for cluster config update ...
	I0617 11:49:43.336009  158739 start.go:254] writing updated cluster config ...
	I0617 11:49:43.336394  158739 ssh_runner.go:195] Run: rm -f paused
	I0617 11:49:43.387687  158739 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0617 11:49:43.389455  158739 out.go:177] * Done! kubectl is now configured to use "pause-475894" cluster and "default" namespace by default
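	The readiness phase above waits for the node and each kube-system pod to report the Ready condition, then probes the API server's /healthz endpoint at https://192.168.50.122:8443. A hedged, roughly equivalent spot check from the host, assuming the "pause-475894" kubeconfig context that the last line says is now configured:

	    # Node and system-pod status for the profile, plus the same healthz
	    # probe the test performs, sent through the configured API server.
	    kubectl --context pause-475894 get nodes
	    kubectl --context pause-475894 -n kube-system get pods
	    kubectl --context pause-475894 get --raw /healthz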
	
	
	==> CRI-O <==
	Jun 17 11:49:44 pause-475894 crio[2763]: time="2024-06-17 11:49:44.088847475Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718624984088821306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=03f3134f-2164-4f79-92c4-de5a57c858da name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:49:44 pause-475894 crio[2763]: time="2024-06-17 11:49:44.089402087Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8c03e41f-8bf6-4dfd-8113-a36382bd4ab8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:49:44 pause-475894 crio[2763]: time="2024-06-17 11:49:44.089525251Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8c03e41f-8bf6-4dfd-8113-a36382bd4ab8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:49:44 pause-475894 crio[2763]: time="2024-06-17 11:49:44.089775529Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:031e0591e01f99311468a0c3e22881b3cf0edd08d8a55a91d19a4315e7ca8d9b,PodSandboxId:c5c94066e41bbea108463d59bf45e8889f1472954ca6307d48e2964049065d9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718624966073496172,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ng69p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f7df81e-d372-415e-a2ff-b6d968634f17,},Annotations:map[string]string{io.kubernetes.container.hash: 933086ab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00c968fd255dc050f201358575199ba46f641ea62c82eaba801bd619edd6735d,PodSandboxId:bbc8065b112213be87c70cb6a14d570495f9ae744d718bd2b506d91cf28962dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718624966081970627,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shbhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: a369273f-a5a9-4d41-bacf-f6ba17fecc7f,},Annotations:map[string]string{io.kubernetes.container.hash: 45074f35,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d3931fd77c7bb5d762c4fac82aea53727e94c648ddcde8f559db233fd38fc7f,PodSandboxId:edb730bc446d8faf2cbcb62c29a82dbeee124a3049598907bd22780292a4b12c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718624962282100900,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a82211bd8
705a84cc888ee5f3b987ec,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9977fa88b78c127b41c1daa401e19fe446949b30cb24260553cd3d06b4f24447,PodSandboxId:de516574b62038394a5d52a38e9209e6f871554b210654709b4eb00dc633c2e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718624962230969412,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61ff83ba0d593e9b63e151493dfd41f,},Annotations:map[string]
string{io.kubernetes.container.hash: 3188295c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96225c5dfdfc5e3bac34bad966144bacc8fad8dc72438c99648d25763153e382,PodSandboxId:42490894c6cb3c4beb1c5eba282aa8410127a1e6c6cd46853081b69b42237ae6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718624962268582718,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e84bd303c3421d855ba0c1538f215f1a,},Annotations:map[string]string{io.kubernet
es.container.hash: 9379a720,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b75825922cbbd04ba3592edbc69153ac135cd007fa1e43379e53c7c8e0d24394,PodSandboxId:06a4d3c8369dfe06743b6f9c475a75eb4919563575d526d9ac235dae70718329,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718624962254307900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9f9919d4b1badc17cda6d1885676de,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b47ea14b20b2545c7063c0efde0b1693c228c5b8a0c5468109e340db79cbd18b,PodSandboxId:023f6139f01f76d8e4d9cebe4583ff988bbeba58fed17e97f0079e8833995121,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718624957274482883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ng69p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f7df81e-d372-415e-a2ff-b6d968634f17,},Annotations:map[string]string{io.kubernetes.container.hash: 9330
86ab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbca6f4efef12044d1eda46a9d9a73e2a2f2ce5910f123ee53ca237230019480,PodSandboxId:6d4a98a98ade14890c53d334d31b7a4328e40f7f6bdbb0df593c73c433cc6eb6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718624956519118133,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-shbhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a369273f-a5a9-4d41-bacf-f6ba17fecc7f,},Annotations:map[string]string{io.kubernetes.container.hash: 45074f35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50797d5733f6a349cbb691d979717866beb3e73a560d51d4df857c969b0db3a1,PodSandboxId:528b279658ebc220b45804283b1f0c61642e9c6279106c79c8a900000eef42aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718624956704644233,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-475894,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: d61ff83ba0d593e9b63e151493dfd41f,},Annotations:map[string]string{io.kubernetes.container.hash: 3188295c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7d21c1e0a0850daab8094237a2bed1e59ce3f7d540d76a69c7405f5d603ef40,PodSandboxId:6e798aa6a6823a488eb09a4bbab9451b0c0ba383cb0f7138275cf2ad683f366b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718624956619024371,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-475894,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9f9919d4b1badc17cda6d1885676de,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d99078f029ec2aa0d136fb2fd2494ac10bebfa02fe3f231f166c05a1ec6665,PodSandboxId:f341ed94621e7a1aca316130fed7b36711293ea6c47ce98b3ec5f5a2efc4ba18,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718624956581320129,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-475894,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: e84bd303c3421d855ba0c1538f215f1a,},Annotations:map[string]string{io.kubernetes.container.hash: 9379a720,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fe5cbc7009bd88a153d92ce34fc2267ee234d73b03bf648be4f82592c7a4277,PodSandboxId:de2a5e08441241d2d430bdd4d3215bb340c0c816a54887f1e3636ed716d49b9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718624956350357609,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9a82211bd8705a84cc888ee5f3b987ec,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8c03e41f-8bf6-4dfd-8113-a36382bd4ab8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:49:44 pause-475894 crio[2763]: time="2024-06-17 11:49:44.139497793Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b451b84-302e-4d38-a351-2741a05b8df7 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:49:44 pause-475894 crio[2763]: time="2024-06-17 11:49:44.139616854Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b451b84-302e-4d38-a351-2741a05b8df7 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:49:44 pause-475894 crio[2763]: time="2024-06-17 11:49:44.140979347Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c32e60b7-9966-46a2-962b-7fe940f57826 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:49:44 pause-475894 crio[2763]: time="2024-06-17 11:49:44.141629230Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718624984141550580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c32e60b7-9966-46a2-962b-7fe940f57826 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:49:44 pause-475894 crio[2763]: time="2024-06-17 11:49:44.142219493Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5600c105-0cdf-40fb-8036-c1e2fa0c3867 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:49:44 pause-475894 crio[2763]: time="2024-06-17 11:49:44.142313293Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5600c105-0cdf-40fb-8036-c1e2fa0c3867 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:49:44 pause-475894 crio[2763]: time="2024-06-17 11:49:44.142724416Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:031e0591e01f99311468a0c3e22881b3cf0edd08d8a55a91d19a4315e7ca8d9b,PodSandboxId:c5c94066e41bbea108463d59bf45e8889f1472954ca6307d48e2964049065d9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718624966073496172,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ng69p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f7df81e-d372-415e-a2ff-b6d968634f17,},Annotations:map[string]string{io.kubernetes.container.hash: 933086ab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00c968fd255dc050f201358575199ba46f641ea62c82eaba801bd619edd6735d,PodSandboxId:bbc8065b112213be87c70cb6a14d570495f9ae744d718bd2b506d91cf28962dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718624966081970627,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shbhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: a369273f-a5a9-4d41-bacf-f6ba17fecc7f,},Annotations:map[string]string{io.kubernetes.container.hash: 45074f35,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d3931fd77c7bb5d762c4fac82aea53727e94c648ddcde8f559db233fd38fc7f,PodSandboxId:edb730bc446d8faf2cbcb62c29a82dbeee124a3049598907bd22780292a4b12c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718624962282100900,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a82211bd8
705a84cc888ee5f3b987ec,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9977fa88b78c127b41c1daa401e19fe446949b30cb24260553cd3d06b4f24447,PodSandboxId:de516574b62038394a5d52a38e9209e6f871554b210654709b4eb00dc633c2e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718624962230969412,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61ff83ba0d593e9b63e151493dfd41f,},Annotations:map[string]
string{io.kubernetes.container.hash: 3188295c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96225c5dfdfc5e3bac34bad966144bacc8fad8dc72438c99648d25763153e382,PodSandboxId:42490894c6cb3c4beb1c5eba282aa8410127a1e6c6cd46853081b69b42237ae6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718624962268582718,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e84bd303c3421d855ba0c1538f215f1a,},Annotations:map[string]string{io.kubernet
es.container.hash: 9379a720,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b75825922cbbd04ba3592edbc69153ac135cd007fa1e43379e53c7c8e0d24394,PodSandboxId:06a4d3c8369dfe06743b6f9c475a75eb4919563575d526d9ac235dae70718329,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718624962254307900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9f9919d4b1badc17cda6d1885676de,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b47ea14b20b2545c7063c0efde0b1693c228c5b8a0c5468109e340db79cbd18b,PodSandboxId:023f6139f01f76d8e4d9cebe4583ff988bbeba58fed17e97f0079e8833995121,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718624957274482883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ng69p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f7df81e-d372-415e-a2ff-b6d968634f17,},Annotations:map[string]string{io.kubernetes.container.hash: 9330
86ab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbca6f4efef12044d1eda46a9d9a73e2a2f2ce5910f123ee53ca237230019480,PodSandboxId:6d4a98a98ade14890c53d334d31b7a4328e40f7f6bdbb0df593c73c433cc6eb6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718624956519118133,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-shbhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a369273f-a5a9-4d41-bacf-f6ba17fecc7f,},Annotations:map[string]string{io.kubernetes.container.hash: 45074f35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50797d5733f6a349cbb691d979717866beb3e73a560d51d4df857c969b0db3a1,PodSandboxId:528b279658ebc220b45804283b1f0c61642e9c6279106c79c8a900000eef42aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718624956704644233,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-475894,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: d61ff83ba0d593e9b63e151493dfd41f,},Annotations:map[string]string{io.kubernetes.container.hash: 3188295c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7d21c1e0a0850daab8094237a2bed1e59ce3f7d540d76a69c7405f5d603ef40,PodSandboxId:6e798aa6a6823a488eb09a4bbab9451b0c0ba383cb0f7138275cf2ad683f366b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718624956619024371,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-475894,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9f9919d4b1badc17cda6d1885676de,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d99078f029ec2aa0d136fb2fd2494ac10bebfa02fe3f231f166c05a1ec6665,PodSandboxId:f341ed94621e7a1aca316130fed7b36711293ea6c47ce98b3ec5f5a2efc4ba18,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718624956581320129,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-475894,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: e84bd303c3421d855ba0c1538f215f1a,},Annotations:map[string]string{io.kubernetes.container.hash: 9379a720,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fe5cbc7009bd88a153d92ce34fc2267ee234d73b03bf648be4f82592c7a4277,PodSandboxId:de2a5e08441241d2d430bdd4d3215bb340c0c816a54887f1e3636ed716d49b9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718624956350357609,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9a82211bd8705a84cc888ee5f3b987ec,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5600c105-0cdf-40fb-8036-c1e2fa0c3867 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:49:44 pause-475894 crio[2763]: time="2024-06-17 11:49:44.196195229Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=14fe782d-55e9-4547-8ce6-e5403bf9213a name=/runtime.v1.RuntimeService/Version
	Jun 17 11:49:44 pause-475894 crio[2763]: time="2024-06-17 11:49:44.196294952Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=14fe782d-55e9-4547-8ce6-e5403bf9213a name=/runtime.v1.RuntimeService/Version
	Jun 17 11:49:44 pause-475894 crio[2763]: time="2024-06-17 11:49:44.197393679Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=66c1695d-5c77-4683-a1fb-c6fd1c3a4e54 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:49:44 pause-475894 crio[2763]: time="2024-06-17 11:49:44.197810330Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718624984197787577,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=66c1695d-5c77-4683-a1fb-c6fd1c3a4e54 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:49:44 pause-475894 crio[2763]: time="2024-06-17 11:49:44.198221506Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4e5219e3-51aa-4632-8857-5ac173ee568e name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:49:44 pause-475894 crio[2763]: time="2024-06-17 11:49:44.198302106Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4e5219e3-51aa-4632-8857-5ac173ee568e name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:49:44 pause-475894 crio[2763]: time="2024-06-17 11:49:44.198703276Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:031e0591e01f99311468a0c3e22881b3cf0edd08d8a55a91d19a4315e7ca8d9b,PodSandboxId:c5c94066e41bbea108463d59bf45e8889f1472954ca6307d48e2964049065d9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718624966073496172,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ng69p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f7df81e-d372-415e-a2ff-b6d968634f17,},Annotations:map[string]string{io.kubernetes.container.hash: 933086ab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00c968fd255dc050f201358575199ba46f641ea62c82eaba801bd619edd6735d,PodSandboxId:bbc8065b112213be87c70cb6a14d570495f9ae744d718bd2b506d91cf28962dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718624966081970627,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shbhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: a369273f-a5a9-4d41-bacf-f6ba17fecc7f,},Annotations:map[string]string{io.kubernetes.container.hash: 45074f35,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d3931fd77c7bb5d762c4fac82aea53727e94c648ddcde8f559db233fd38fc7f,PodSandboxId:edb730bc446d8faf2cbcb62c29a82dbeee124a3049598907bd22780292a4b12c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718624962282100900,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a82211bd8
705a84cc888ee5f3b987ec,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9977fa88b78c127b41c1daa401e19fe446949b30cb24260553cd3d06b4f24447,PodSandboxId:de516574b62038394a5d52a38e9209e6f871554b210654709b4eb00dc633c2e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718624962230969412,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61ff83ba0d593e9b63e151493dfd41f,},Annotations:map[string]
string{io.kubernetes.container.hash: 3188295c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96225c5dfdfc5e3bac34bad966144bacc8fad8dc72438c99648d25763153e382,PodSandboxId:42490894c6cb3c4beb1c5eba282aa8410127a1e6c6cd46853081b69b42237ae6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718624962268582718,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e84bd303c3421d855ba0c1538f215f1a,},Annotations:map[string]string{io.kubernet
es.container.hash: 9379a720,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b75825922cbbd04ba3592edbc69153ac135cd007fa1e43379e53c7c8e0d24394,PodSandboxId:06a4d3c8369dfe06743b6f9c475a75eb4919563575d526d9ac235dae70718329,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718624962254307900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9f9919d4b1badc17cda6d1885676de,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b47ea14b20b2545c7063c0efde0b1693c228c5b8a0c5468109e340db79cbd18b,PodSandboxId:023f6139f01f76d8e4d9cebe4583ff988bbeba58fed17e97f0079e8833995121,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718624957274482883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ng69p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f7df81e-d372-415e-a2ff-b6d968634f17,},Annotations:map[string]string{io.kubernetes.container.hash: 9330
86ab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbca6f4efef12044d1eda46a9d9a73e2a2f2ce5910f123ee53ca237230019480,PodSandboxId:6d4a98a98ade14890c53d334d31b7a4328e40f7f6bdbb0df593c73c433cc6eb6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718624956519118133,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-shbhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a369273f-a5a9-4d41-bacf-f6ba17fecc7f,},Annotations:map[string]string{io.kubernetes.container.hash: 45074f35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50797d5733f6a349cbb691d979717866beb3e73a560d51d4df857c969b0db3a1,PodSandboxId:528b279658ebc220b45804283b1f0c61642e9c6279106c79c8a900000eef42aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718624956704644233,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-475894,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: d61ff83ba0d593e9b63e151493dfd41f,},Annotations:map[string]string{io.kubernetes.container.hash: 3188295c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7d21c1e0a0850daab8094237a2bed1e59ce3f7d540d76a69c7405f5d603ef40,PodSandboxId:6e798aa6a6823a488eb09a4bbab9451b0c0ba383cb0f7138275cf2ad683f366b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718624956619024371,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-475894,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9f9919d4b1badc17cda6d1885676de,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d99078f029ec2aa0d136fb2fd2494ac10bebfa02fe3f231f166c05a1ec6665,PodSandboxId:f341ed94621e7a1aca316130fed7b36711293ea6c47ce98b3ec5f5a2efc4ba18,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718624956581320129,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-475894,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: e84bd303c3421d855ba0c1538f215f1a,},Annotations:map[string]string{io.kubernetes.container.hash: 9379a720,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fe5cbc7009bd88a153d92ce34fc2267ee234d73b03bf648be4f82592c7a4277,PodSandboxId:de2a5e08441241d2d430bdd4d3215bb340c0c816a54887f1e3636ed716d49b9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718624956350357609,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9a82211bd8705a84cc888ee5f3b987ec,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4e5219e3-51aa-4632-8857-5ac173ee568e name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:49:44 pause-475894 crio[2763]: time="2024-06-17 11:49:44.245021023Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=84e4c611-b90b-46ad-b8fa-f4d562491841 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:49:44 pause-475894 crio[2763]: time="2024-06-17 11:49:44.245133355Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=84e4c611-b90b-46ad-b8fa-f4d562491841 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:49:44 pause-475894 crio[2763]: time="2024-06-17 11:49:44.246217101Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b7c77cbb-e652-4bbd-abc4-eddf3e8ff989 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:49:44 pause-475894 crio[2763]: time="2024-06-17 11:49:44.246686800Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718624984246625320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b7c77cbb-e652-4bbd-abc4-eddf3e8ff989 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:49:44 pause-475894 crio[2763]: time="2024-06-17 11:49:44.247342638Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ec1980a-29a5-47ff-86fa-28a2b7e6fc45 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:49:44 pause-475894 crio[2763]: time="2024-06-17 11:49:44.247396001Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ec1980a-29a5-47ff-86fa-28a2b7e6fc45 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:49:44 pause-475894 crio[2763]: time="2024-06-17 11:49:44.247702840Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:031e0591e01f99311468a0c3e22881b3cf0edd08d8a55a91d19a4315e7ca8d9b,PodSandboxId:c5c94066e41bbea108463d59bf45e8889f1472954ca6307d48e2964049065d9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718624966073496172,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ng69p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f7df81e-d372-415e-a2ff-b6d968634f17,},Annotations:map[string]string{io.kubernetes.container.hash: 933086ab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00c968fd255dc050f201358575199ba46f641ea62c82eaba801bd619edd6735d,PodSandboxId:bbc8065b112213be87c70cb6a14d570495f9ae744d718bd2b506d91cf28962dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718624966081970627,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shbhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: a369273f-a5a9-4d41-bacf-f6ba17fecc7f,},Annotations:map[string]string{io.kubernetes.container.hash: 45074f35,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d3931fd77c7bb5d762c4fac82aea53727e94c648ddcde8f559db233fd38fc7f,PodSandboxId:edb730bc446d8faf2cbcb62c29a82dbeee124a3049598907bd22780292a4b12c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718624962282100900,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a82211bd8
705a84cc888ee5f3b987ec,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9977fa88b78c127b41c1daa401e19fe446949b30cb24260553cd3d06b4f24447,PodSandboxId:de516574b62038394a5d52a38e9209e6f871554b210654709b4eb00dc633c2e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718624962230969412,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61ff83ba0d593e9b63e151493dfd41f,},Annotations:map[string]
string{io.kubernetes.container.hash: 3188295c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96225c5dfdfc5e3bac34bad966144bacc8fad8dc72438c99648d25763153e382,PodSandboxId:42490894c6cb3c4beb1c5eba282aa8410127a1e6c6cd46853081b69b42237ae6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718624962268582718,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e84bd303c3421d855ba0c1538f215f1a,},Annotations:map[string]string{io.kubernet
es.container.hash: 9379a720,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b75825922cbbd04ba3592edbc69153ac135cd007fa1e43379e53c7c8e0d24394,PodSandboxId:06a4d3c8369dfe06743b6f9c475a75eb4919563575d526d9ac235dae70718329,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718624962254307900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9f9919d4b1badc17cda6d1885676de,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b47ea14b20b2545c7063c0efde0b1693c228c5b8a0c5468109e340db79cbd18b,PodSandboxId:023f6139f01f76d8e4d9cebe4583ff988bbeba58fed17e97f0079e8833995121,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718624957274482883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ng69p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f7df81e-d372-415e-a2ff-b6d968634f17,},Annotations:map[string]string{io.kubernetes.container.hash: 9330
86ab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbca6f4efef12044d1eda46a9d9a73e2a2f2ce5910f123ee53ca237230019480,PodSandboxId:6d4a98a98ade14890c53d334d31b7a4328e40f7f6bdbb0df593c73c433cc6eb6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718624956519118133,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-shbhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a369273f-a5a9-4d41-bacf-f6ba17fecc7f,},Annotations:map[string]string{io.kubernetes.container.hash: 45074f35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50797d5733f6a349cbb691d979717866beb3e73a560d51d4df857c969b0db3a1,PodSandboxId:528b279658ebc220b45804283b1f0c61642e9c6279106c79c8a900000eef42aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718624956704644233,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-475894,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: d61ff83ba0d593e9b63e151493dfd41f,},Annotations:map[string]string{io.kubernetes.container.hash: 3188295c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7d21c1e0a0850daab8094237a2bed1e59ce3f7d540d76a69c7405f5d603ef40,PodSandboxId:6e798aa6a6823a488eb09a4bbab9451b0c0ba383cb0f7138275cf2ad683f366b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718624956619024371,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-475894,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9f9919d4b1badc17cda6d1885676de,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d99078f029ec2aa0d136fb2fd2494ac10bebfa02fe3f231f166c05a1ec6665,PodSandboxId:f341ed94621e7a1aca316130fed7b36711293ea6c47ce98b3ec5f5a2efc4ba18,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718624956581320129,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-475894,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: e84bd303c3421d855ba0c1538f215f1a,},Annotations:map[string]string{io.kubernetes.container.hash: 9379a720,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fe5cbc7009bd88a153d92ce34fc2267ee234d73b03bf648be4f82592c7a4277,PodSandboxId:de2a5e08441241d2d430bdd4d3215bb340c0c816a54887f1e3636ed716d49b9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718624956350357609,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9a82211bd8705a84cc888ee5f3b987ec,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4ec1980a-29a5-47ff-86fa-28a2b7e6fc45 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	00c968fd255dc       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   18 seconds ago      Running             kube-proxy                2                   bbc8065b11221       kube-proxy-shbhn
	031e0591e01f9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   18 seconds ago      Running             coredns                   2                   c5c94066e41bb       coredns-7db6d8ff4d-ng69p
	5d3931fd77c7b       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   22 seconds ago      Running             kube-scheduler            2                   edb730bc446d8       kube-scheduler-pause-475894
	96225c5dfdfc5       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   22 seconds ago      Running             kube-apiserver            2                   42490894c6cb3       kube-apiserver-pause-475894
	b75825922cbbd       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   22 seconds ago      Running             kube-controller-manager   2                   06a4d3c8369df       kube-controller-manager-pause-475894
	9977fa88b78c1       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   22 seconds ago      Running             etcd                      2                   de516574b6203       etcd-pause-475894
	b47ea14b20b25       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   27 seconds ago      Exited              coredns                   1                   023f6139f01f7       coredns-7db6d8ff4d-ng69p
	50797d5733f6a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   27 seconds ago      Exited              etcd                      1                   528b279658ebc       etcd-pause-475894
	e7d21c1e0a085       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   27 seconds ago      Exited              kube-controller-manager   1                   6e798aa6a6823       kube-controller-manager-pause-475894
	35d99078f029e       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   27 seconds ago      Exited              kube-apiserver            1                   f341ed94621e7       kube-apiserver-pause-475894
	cbca6f4efef12       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   27 seconds ago      Exited              kube-proxy                1                   6d4a98a98ade1       kube-proxy-shbhn
	4fe5cbc7009bd       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   27 seconds ago      Exited              kube-scheduler            1                   de2a5e0844124       kube-scheduler-pause-475894
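	
	The CRI-O debug entries above are plain CRI RuntimeService calls (Version, ImageFsInfo, ListContainers) issued against the crio socket, and the table is the resulting full container list. As a rough illustration only (not the code this test suite actually runs), a minimal Go sketch that issues the same ListContainers call could look like the following; the socket path matches the kubeadm cri-socket annotation shown below, everything else is an assumption:
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Assumed CRI-O socket path (unix:///var/run/crio/crio.sock).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// An empty filter corresponds to the "No filters were applied,
		// returning full container list" debug line in the log above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %s  attempt=%d  %s\n",
				c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}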
	
	
	==> coredns [031e0591e01f99311468a0c3e22881b3cf0edd08d8a55a91d19a4315e7ca8d9b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47540 - 39994 "HINFO IN 4321242531974080264.2585150755911692030. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010581765s
	
	
	==> coredns [b47ea14b20b2545c7063c0efde0b1693c228c5b8a0c5468109e340db79cbd18b] <==
	
	
	==> describe nodes <==
	Name:               pause-475894
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-475894
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6
	                    minikube.k8s.io/name=pause-475894
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_17T11_48_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jun 2024 11:48:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-475894
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jun 2024 11:49:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jun 2024 11:49:25 +0000   Mon, 17 Jun 2024 11:48:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jun 2024 11:49:25 +0000   Mon, 17 Jun 2024 11:48:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jun 2024 11:49:25 +0000   Mon, 17 Jun 2024 11:48:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jun 2024 11:49:25 +0000   Mon, 17 Jun 2024 11:48:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.122
	  Hostname:    pause-475894
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 73c2c9cb01d547cbb39b5000396f7b91
	  System UUID:                73c2c9cb-01d5-47cb-b39b-5000396f7b91
	  Boot ID:                    530886ed-e33c-4f3d-bba6-2b05da3377ed
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-ng69p                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     65s
	  kube-system                 etcd-pause-475894                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         81s
	  kube-system                 kube-apiserver-pause-475894             250m (12%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-controller-manager-pause-475894    200m (10%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-proxy-shbhn                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-scheduler-pause-475894             100m (5%)     0 (0%)      0 (0%)           0 (0%)         82s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 62s                kube-proxy       
	  Normal  Starting                 18s                kube-proxy       
	  Normal  Starting                 88s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  88s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  87s (x8 over 88s)  kubelet          Node pause-475894 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    87s (x8 over 88s)  kubelet          Node pause-475894 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     87s (x7 over 88s)  kubelet          Node pause-475894 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    81s                kubelet          Node pause-475894 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  81s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  81s                kubelet          Node pause-475894 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     81s                kubelet          Node pause-475894 status is now: NodeHasSufficientPID
	  Normal  Starting                 81s                kubelet          Starting kubelet.
	  Normal  NodeReady                80s                kubelet          Node pause-475894 status is now: NodeReady
	  Normal  RegisteredNode           68s                node-controller  Node pause-475894 event: Registered Node pause-475894 in Controller
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node pause-475894 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node pause-475894 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node pause-475894 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6s                 node-controller  Node pause-475894 event: Registered Node pause-475894 in Controller
	
	
	==> dmesg <==
	[Jun17 11:48] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.071506] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064855] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.222619] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.147372] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.396132] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.809329] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +0.058867] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.015204] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +0.090521] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.510106] systemd-fstab-generator[1294]: Ignoring "noauto" option for root device
	[  +0.083381] kauditd_printk_skb: 69 callbacks suppressed
	[ +14.385974] systemd-fstab-generator[1516]: Ignoring "noauto" option for root device
	[  +0.109515] kauditd_printk_skb: 21 callbacks suppressed
	[ +13.432397] kauditd_printk_skb: 67 callbacks suppressed
	[Jun17 11:49] systemd-fstab-generator[2214]: Ignoring "noauto" option for root device
	[  +0.201260] systemd-fstab-generator[2255]: Ignoring "noauto" option for root device
	[  +0.412117] systemd-fstab-generator[2460]: Ignoring "noauto" option for root device
	[  +0.282970] systemd-fstab-generator[2586]: Ignoring "noauto" option for root device
	[  +0.517199] systemd-fstab-generator[2720]: Ignoring "noauto" option for root device
	[  +1.512467] systemd-fstab-generator[3323]: Ignoring "noauto" option for root device
	[  +2.600293] systemd-fstab-generator[3449]: Ignoring "noauto" option for root device
	[  +0.085052] kauditd_printk_skb: 244 callbacks suppressed
	[ +16.806012] kauditd_printk_skb: 50 callbacks suppressed
	[  +1.939309] systemd-fstab-generator[3893]: Ignoring "noauto" option for root device
	
	
	==> etcd [50797d5733f6a349cbb691d979717866beb3e73a560d51d4df857c969b0db3a1] <==
	{"level":"warn","ts":"2024-06-17T11:49:17.553814Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-06-17T11:49:17.553901Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.50.122:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.50.122:2380","--initial-cluster=pause-475894=https://192.168.50.122:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.50.122:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.50.122:2380","--name=pause-475894","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trus
ted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-06-17T11:49:17.555151Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-06-17T11:49:17.55521Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-06-17T11:49:17.555267Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.50.122:2380"]}
	{"level":"info","ts":"2024-06-17T11:49:17.555317Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-17T11:49:17.559158Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.122:2379"]}
	{"level":"info","ts":"2024-06-17T11:49:17.559288Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-475894","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.50.122:2380"],"listen-peer-urls":["https://192.168.50.122:2380"],"advertise-client-urls":["https://192.168.50.122:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.122:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cl
uster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	
	
	==> etcd [9977fa88b78c127b41c1daa401e19fe446949b30cb24260553cd3d06b4f24447] <==
	{"level":"info","ts":"2024-06-17T11:49:22.661315Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-17T11:49:22.66133Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-17T11:49:22.661799Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f5149f998c21bd4e switched to configuration voters=(17659915520656391502)"}
	{"level":"info","ts":"2024-06-17T11:49:22.661905Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8bbd705bc3c15469","local-member-id":"f5149f998c21bd4e","added-peer-id":"f5149f998c21bd4e","added-peer-peer-urls":["https://192.168.50.122:2380"]}
	{"level":"info","ts":"2024-06-17T11:49:22.662037Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8bbd705bc3c15469","local-member-id":"f5149f998c21bd4e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-17T11:49:22.66209Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-17T11:49:22.665776Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-17T11:49:22.666065Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f5149f998c21bd4e","initial-advertise-peer-urls":["https://192.168.50.122:2380"],"listen-peer-urls":["https://192.168.50.122:2380"],"advertise-client-urls":["https://192.168.50.122:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.122:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-17T11:49:22.666114Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-17T11:49:22.666348Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.122:2380"}
	{"level":"info","ts":"2024-06-17T11:49:22.666389Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.122:2380"}
	{"level":"info","ts":"2024-06-17T11:49:24.101504Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f5149f998c21bd4e is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-17T11:49:24.10161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f5149f998c21bd4e became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-17T11:49:24.101679Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f5149f998c21bd4e received MsgPreVoteResp from f5149f998c21bd4e at term 2"}
	{"level":"info","ts":"2024-06-17T11:49:24.10173Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f5149f998c21bd4e became candidate at term 3"}
	{"level":"info","ts":"2024-06-17T11:49:24.101754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f5149f998c21bd4e received MsgVoteResp from f5149f998c21bd4e at term 3"}
	{"level":"info","ts":"2024-06-17T11:49:24.101787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f5149f998c21bd4e became leader at term 3"}
	{"level":"info","ts":"2024-06-17T11:49:24.101821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f5149f998c21bd4e elected leader f5149f998c21bd4e at term 3"}
	{"level":"info","ts":"2024-06-17T11:49:24.113682Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f5149f998c21bd4e","local-member-attributes":"{Name:pause-475894 ClientURLs:[https://192.168.50.122:2379]}","request-path":"/0/members/f5149f998c21bd4e/attributes","cluster-id":"8bbd705bc3c15469","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-17T11:49:24.113884Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-17T11:49:24.114274Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-17T11:49:24.118726Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.122:2379"}
	{"level":"info","ts":"2024-06-17T11:49:24.120506Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-17T11:49:24.132503Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-17T11:49:24.132597Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 11:49:44 up 1 min,  0 users,  load average: 1.90, 0.72, 0.26
	Linux pause-475894 5.10.207 #1 SMP Tue Jun 11 00:16:05 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [35d99078f029ec2aa0d136fb2fd2494ac10bebfa02fe3f231f166c05a1ec6665] <==
	I0617 11:49:17.197668       1 options.go:221] external host was not specified, using 192.168.50.122
	I0617 11:49:17.198616       1 server.go:148] Version: v1.30.1
	I0617 11:49:17.198699       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [96225c5dfdfc5e3bac34bad966144bacc8fad8dc72438c99648d25763153e382] <==
	I0617 11:49:25.643534       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0617 11:49:25.659828       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0617 11:49:25.659917       1 policy_source.go:224] refreshing policies
	I0617 11:49:25.659854       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0617 11:49:25.676085       1 shared_informer.go:320] Caches are synced for configmaps
	I0617 11:49:25.676203       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0617 11:49:25.679940       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0617 11:49:25.680943       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0617 11:49:25.680978       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0617 11:49:25.685839       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0617 11:49:25.687014       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0617 11:49:25.699031       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0617 11:49:25.699113       1 aggregator.go:165] initial CRD sync complete...
	I0617 11:49:25.699213       1 autoregister_controller.go:141] Starting autoregister controller
	I0617 11:49:25.699237       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0617 11:49:25.699259       1 cache.go:39] Caches are synced for autoregister controller
	I0617 11:49:25.714390       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0617 11:49:26.483944       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0617 11:49:27.055081       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0617 11:49:27.074495       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0617 11:49:27.118640       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0617 11:49:27.154504       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0617 11:49:27.169137       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0617 11:49:38.374406       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0617 11:49:38.449007       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [b75825922cbbd04ba3592edbc69153ac135cd007fa1e43379e53c7c8e0d24394] <==
	I0617 11:49:38.393576       1 shared_informer.go:320] Caches are synced for HPA
	I0617 11:49:38.394332       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0617 11:49:38.398624       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0617 11:49:38.406797       1 shared_informer.go:320] Caches are synced for deployment
	I0617 11:49:38.416190       1 shared_informer.go:320] Caches are synced for PVC protection
	I0617 11:49:38.419595       1 shared_informer.go:320] Caches are synced for persistent volume
	I0617 11:49:38.420855       1 shared_informer.go:320] Caches are synced for crt configmap
	I0617 11:49:38.423725       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0617 11:49:38.423831       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0617 11:49:38.423957       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0617 11:49:38.424036       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0617 11:49:38.427075       1 shared_informer.go:320] Caches are synced for job
	I0617 11:49:38.438987       1 shared_informer.go:320] Caches are synced for endpoint
	I0617 11:49:38.445558       1 shared_informer.go:320] Caches are synced for stateful set
	I0617 11:49:38.458021       1 shared_informer.go:320] Caches are synced for taint
	I0617 11:49:38.458167       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0617 11:49:38.458273       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-475894"
	I0617 11:49:38.458330       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0617 11:49:38.531212       1 shared_informer.go:320] Caches are synced for disruption
	I0617 11:49:38.593491       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0617 11:49:38.600483       1 shared_informer.go:320] Caches are synced for resource quota
	I0617 11:49:38.612830       1 shared_informer.go:320] Caches are synced for resource quota
	I0617 11:49:39.004165       1 shared_informer.go:320] Caches are synced for garbage collector
	I0617 11:49:39.004216       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0617 11:49:39.044865       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [e7d21c1e0a0850daab8094237a2bed1e59ce3f7d540d76a69c7405f5d603ef40] <==
	
	
	==> kube-proxy [00c968fd255dc050f201358575199ba46f641ea62c82eaba801bd619edd6735d] <==
	I0617 11:49:26.265150       1 server_linux.go:69] "Using iptables proxy"
	I0617 11:49:26.298638       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.122"]
	I0617 11:49:26.353075       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0617 11:49:26.353127       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0617 11:49:26.353143       1 server_linux.go:165] "Using iptables Proxier"
	I0617 11:49:26.355944       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0617 11:49:26.357037       1 server.go:872] "Version info" version="v1.30.1"
	I0617 11:49:26.357087       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0617 11:49:26.359037       1 config.go:192] "Starting service config controller"
	I0617 11:49:26.359074       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0617 11:49:26.359095       1 config.go:101] "Starting endpoint slice config controller"
	I0617 11:49:26.359099       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0617 11:49:26.359553       1 config.go:319] "Starting node config controller"
	I0617 11:49:26.359577       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0617 11:49:26.459910       1 shared_informer.go:320] Caches are synced for node config
	I0617 11:49:26.459956       1 shared_informer.go:320] Caches are synced for service config
	I0617 11:49:26.460009       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [cbca6f4efef12044d1eda46a9d9a73e2a2f2ce5910f123ee53ca237230019480] <==
	
	
	==> kube-scheduler [4fe5cbc7009bd88a153d92ce34fc2267ee234d73b03bf648be4f82592c7a4277] <==
	
	
	==> kube-scheduler [5d3931fd77c7bb5d762c4fac82aea53727e94c648ddcde8f559db233fd38fc7f] <==
	I0617 11:49:23.971680       1 serving.go:380] Generated self-signed cert in-memory
	W0617 11:49:25.593898       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0617 11:49:25.593983       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0617 11:49:25.594011       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0617 11:49:25.594040       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0617 11:49:25.632599       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0617 11:49:25.632724       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0617 11:49:25.634487       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0617 11:49:25.634619       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0617 11:49:25.634504       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0617 11:49:25.634586       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0617 11:49:25.735752       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 17 11:49:21 pause-475894 kubelet[3456]: I0617 11:49:21.945488    3456 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3d9f9919d4b1badc17cda6d1885676de-k8s-certs\") pod \"kube-controller-manager-pause-475894\" (UID: \"3d9f9919d4b1badc17cda6d1885676de\") " pod="kube-system/kube-controller-manager-pause-475894"
	Jun 17 11:49:21 pause-475894 kubelet[3456]: I0617 11:49:21.945515    3456 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3d9f9919d4b1badc17cda6d1885676de-kubeconfig\") pod \"kube-controller-manager-pause-475894\" (UID: \"3d9f9919d4b1badc17cda6d1885676de\") " pod="kube-system/kube-controller-manager-pause-475894"
	Jun 17 11:49:21 pause-475894 kubelet[3456]: E0617 11:49:21.945681    3456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-475894?timeout=10s\": dial tcp 192.168.50.122:8443: connect: connection refused" interval="400ms"
	Jun 17 11:49:22 pause-475894 kubelet[3456]: I0617 11:49:22.053796    3456 kubelet_node_status.go:73] "Attempting to register node" node="pause-475894"
	Jun 17 11:49:22 pause-475894 kubelet[3456]: E0617 11:49:22.054700    3456 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.122:8443: connect: connection refused" node="pause-475894"
	Jun 17 11:49:22 pause-475894 kubelet[3456]: I0617 11:49:22.208778    3456 scope.go:117] "RemoveContainer" containerID="50797d5733f6a349cbb691d979717866beb3e73a560d51d4df857c969b0db3a1"
	Jun 17 11:49:22 pause-475894 kubelet[3456]: I0617 11:49:22.211278    3456 scope.go:117] "RemoveContainer" containerID="e7d21c1e0a0850daab8094237a2bed1e59ce3f7d540d76a69c7405f5d603ef40"
	Jun 17 11:49:22 pause-475894 kubelet[3456]: I0617 11:49:22.211600    3456 scope.go:117] "RemoveContainer" containerID="35d99078f029ec2aa0d136fb2fd2494ac10bebfa02fe3f231f166c05a1ec6665"
	Jun 17 11:49:22 pause-475894 kubelet[3456]: I0617 11:49:22.211932    3456 scope.go:117] "RemoveContainer" containerID="4fe5cbc7009bd88a153d92ce34fc2267ee234d73b03bf648be4f82592c7a4277"
	Jun 17 11:49:22 pause-475894 kubelet[3456]: E0617 11:49:22.347184    3456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-475894?timeout=10s\": dial tcp 192.168.50.122:8443: connect: connection refused" interval="800ms"
	Jun 17 11:49:22 pause-475894 kubelet[3456]: I0617 11:49:22.456374    3456 kubelet_node_status.go:73] "Attempting to register node" node="pause-475894"
	Jun 17 11:49:22 pause-475894 kubelet[3456]: E0617 11:49:22.457045    3456 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.122:8443: connect: connection refused" node="pause-475894"
	Jun 17 11:49:23 pause-475894 kubelet[3456]: I0617 11:49:23.259124    3456 kubelet_node_status.go:73] "Attempting to register node" node="pause-475894"
	Jun 17 11:49:25 pause-475894 kubelet[3456]: I0617 11:49:25.670718    3456 kubelet_node_status.go:112] "Node was previously registered" node="pause-475894"
	Jun 17 11:49:25 pause-475894 kubelet[3456]: I0617 11:49:25.671300    3456 kubelet_node_status.go:76] "Successfully registered node" node="pause-475894"
	Jun 17 11:49:25 pause-475894 kubelet[3456]: I0617 11:49:25.672973    3456 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jun 17 11:49:25 pause-475894 kubelet[3456]: I0617 11:49:25.674532    3456 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 17 11:49:25 pause-475894 kubelet[3456]: I0617 11:49:25.729906    3456 apiserver.go:52] "Watching apiserver"
	Jun 17 11:49:25 pause-475894 kubelet[3456]: I0617 11:49:25.732895    3456 topology_manager.go:215] "Topology Admit Handler" podUID="a369273f-a5a9-4d41-bacf-f6ba17fecc7f" podNamespace="kube-system" podName="kube-proxy-shbhn"
	Jun 17 11:49:25 pause-475894 kubelet[3456]: I0617 11:49:25.733118    3456 topology_manager.go:215] "Topology Admit Handler" podUID="1f7df81e-d372-415e-a2ff-b6d968634f17" podNamespace="kube-system" podName="coredns-7db6d8ff4d-ng69p"
	Jun 17 11:49:25 pause-475894 kubelet[3456]: I0617 11:49:25.740120    3456 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jun 17 11:49:25 pause-475894 kubelet[3456]: I0617 11:49:25.806489    3456 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a369273f-a5a9-4d41-bacf-f6ba17fecc7f-lib-modules\") pod \"kube-proxy-shbhn\" (UID: \"a369273f-a5a9-4d41-bacf-f6ba17fecc7f\") " pod="kube-system/kube-proxy-shbhn"
	Jun 17 11:49:25 pause-475894 kubelet[3456]: I0617 11:49:25.806542    3456 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a369273f-a5a9-4d41-bacf-f6ba17fecc7f-xtables-lock\") pod \"kube-proxy-shbhn\" (UID: \"a369273f-a5a9-4d41-bacf-f6ba17fecc7f\") " pod="kube-system/kube-proxy-shbhn"
	Jun 17 11:49:26 pause-475894 kubelet[3456]: I0617 11:49:26.034160    3456 scope.go:117] "RemoveContainer" containerID="b47ea14b20b2545c7063c0efde0b1693c228c5b8a0c5468109e340db79cbd18b"
	Jun 17 11:49:26 pause-475894 kubelet[3456]: I0617 11:49:26.034407    3456 scope.go:117] "RemoveContainer" containerID="cbca6f4efef12044d1eda46a9d9a73e2a2f2ce5910f123ee53ca237230019480"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-475894 -n pause-475894
helpers_test.go:261: (dbg) Run:  kubectl --context pause-475894 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-475894 -n pause-475894
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-475894 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-475894 logs -n 25: (1.383979201s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-253383 sudo                                | cilium-253383             | jenkins | v1.33.1 | 17 Jun 24 11:47 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-253383 sudo cat                            | cilium-253383             | jenkins | v1.33.1 | 17 Jun 24 11:47 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-253383 sudo cat                            | cilium-253383             | jenkins | v1.33.1 | 17 Jun 24 11:47 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-253383 sudo                                | cilium-253383             | jenkins | v1.33.1 | 17 Jun 24 11:47 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-253383 sudo                                | cilium-253383             | jenkins | v1.33.1 | 17 Jun 24 11:47 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-253383 sudo                                | cilium-253383             | jenkins | v1.33.1 | 17 Jun 24 11:47 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-253383 sudo cat                            | cilium-253383             | jenkins | v1.33.1 | 17 Jun 24 11:47 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-253383 sudo cat                            | cilium-253383             | jenkins | v1.33.1 | 17 Jun 24 11:47 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-253383 sudo                                | cilium-253383             | jenkins | v1.33.1 | 17 Jun 24 11:47 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-253383 sudo                                | cilium-253383             | jenkins | v1.33.1 | 17 Jun 24 11:47 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-253383 sudo                                | cilium-253383             | jenkins | v1.33.1 | 17 Jun 24 11:47 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-253383 sudo find                           | cilium-253383             | jenkins | v1.33.1 | 17 Jun 24 11:47 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-253383 sudo crio                           | cilium-253383             | jenkins | v1.33.1 | 17 Jun 24 11:47 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-253383                                     | cilium-253383             | jenkins | v1.33.1 | 17 Jun 24 11:47 UTC | 17 Jun 24 11:47 UTC |
	| start   | -p cert-expiration-514753                            | cert-expiration-514753    | jenkins | v1.33.1 | 17 Jun 24 11:47 UTC | 17 Jun 24 11:48 UTC |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-846787                               | NoKubernetes-846787       | jenkins | v1.33.1 | 17 Jun 24 11:47 UTC | 17 Jun 24 11:48 UTC |
	|         | --no-kubernetes --driver=kvm2                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p running-upgrade-869541                            | running-upgrade-869541    | jenkins | v1.33.1 | 17 Jun 24 11:48 UTC | 17 Jun 24 11:49 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-846787                               | NoKubernetes-846787       | jenkins | v1.33.1 | 17 Jun 24 11:48 UTC | 17 Jun 24 11:48 UTC |
	| start   | -p NoKubernetes-846787                               | NoKubernetes-846787       | jenkins | v1.33.1 | 17 Jun 24 11:48 UTC | 17 Jun 24 11:49 UTC |
	|         | --no-kubernetes --driver=kvm2                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p pause-475894                                      | pause-475894              | jenkins | v1.33.1 | 17 Jun 24 11:48 UTC | 17 Jun 24 11:49 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-846787 sudo                          | NoKubernetes-846787       | jenkins | v1.33.1 | 17 Jun 24 11:49 UTC |                     |
	|         | systemctl is-active --quiet                          |                           |         |         |                     |                     |
	|         | service kubelet                                      |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-869541                            | running-upgrade-869541    | jenkins | v1.33.1 | 17 Jun 24 11:49 UTC | 17 Jun 24 11:49 UTC |
	| start   | -p force-systemd-flag-855883                         | force-systemd-flag-855883 | jenkins | v1.33.1 | 17 Jun 24 11:49 UTC |                     |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-846787                               | NoKubernetes-846787       | jenkins | v1.33.1 | 17 Jun 24 11:49 UTC | 17 Jun 24 11:49 UTC |
	| start   | -p NoKubernetes-846787                               | NoKubernetes-846787       | jenkins | v1.33.1 | 17 Jun 24 11:49 UTC |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/17 11:49:45
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0617 11:49:45.256373  159704 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:49:45.256586  159704 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:49:45.256589  159704 out.go:304] Setting ErrFile to fd 2...
	I0617 11:49:45.256592  159704 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:49:45.256788  159704 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:49:45.257275  159704 out.go:298] Setting JSON to false
	I0617 11:49:45.258247  159704 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":5532,"bootTime":1718619453,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0617 11:49:45.258300  159704 start.go:139] virtualization: kvm guest
	I0617 11:49:45.260622  159704 out.go:177] * [NoKubernetes-846787] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0617 11:49:45.261898  159704 out.go:177]   - MINIKUBE_LOCATION=19084
	I0617 11:49:45.261969  159704 notify.go:220] Checking for updates...
	I0617 11:49:45.263180  159704 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 11:49:45.264428  159704 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 11:49:45.265724  159704 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 11:49:45.266978  159704 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0617 11:49:45.268159  159704 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 11:49:45.269871  159704 config.go:182] Loaded profile config "NoKubernetes-846787": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0617 11:49:45.270318  159704 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:49:45.270365  159704 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:49:45.288267  159704 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37531
	I0617 11:49:45.288813  159704 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:49:45.289529  159704 main.go:141] libmachine: Using API Version  1
	I0617 11:49:45.289566  159704 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:49:45.290007  159704 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:49:45.290174  159704 main.go:141] libmachine: (NoKubernetes-846787) Calling .DriverName
	I0617 11:49:45.290462  159704 start.go:1783] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I0617 11:49:45.290482  159704 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 11:49:45.290898  159704 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:49:45.290941  159704 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:49:45.306470  159704 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46283
	I0617 11:49:45.306839  159704 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:49:45.307666  159704 main.go:141] libmachine: Using API Version  1
	I0617 11:49:45.307677  159704 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:49:45.308048  159704 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:49:45.308232  159704 main.go:141] libmachine: (NoKubernetes-846787) Calling .DriverName
	I0617 11:49:45.348762  159704 out.go:177] * Using the kvm2 driver based on existing profile
	I0617 11:49:45.349919  159704 start.go:297] selected driver: kvm2
	I0617 11:49:45.349927  159704 start.go:901] validating driver "kvm2" against &{Name:NoKubernetes-846787 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v0.0.0 ClusterName:NoKubernetes-846787 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.68 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:49:45.350021  159704 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 11:49:45.350336  159704 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:49:45.350395  159704 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19084-112967/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0617 11:49:45.366888  159704 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0617 11:49:45.367742  159704 cni.go:84] Creating CNI manager for ""
	I0617 11:49:45.367752  159704 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 11:49:45.367816  159704 start.go:340] cluster config:
	{Name:NoKubernetes-846787 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-846787 Namespace:default APISer
verHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.68 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPa
th: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:49:45.367922  159704 iso.go:125] acquiring lock: {Name:mk4a199ad46ed9ee04de7b54caf7cc64218fe80c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:49:45.369572  159704 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-846787
	
	
	==> CRI-O <==
	Jun 17 11:49:46 pause-475894 crio[2763]: time="2024-06-17 11:49:46.148559991Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718624986148536661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2ecc7ff3-472f-450a-9aa1-3c964ddf3f50 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:49:46 pause-475894 crio[2763]: time="2024-06-17 11:49:46.149177492Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f465d8a7-889b-4fff-aeb2-1ea69d1a45c9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:49:46 pause-475894 crio[2763]: time="2024-06-17 11:49:46.149227379Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f465d8a7-889b-4fff-aeb2-1ea69d1a45c9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:49:46 pause-475894 crio[2763]: time="2024-06-17 11:49:46.149539300Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:031e0591e01f99311468a0c3e22881b3cf0edd08d8a55a91d19a4315e7ca8d9b,PodSandboxId:c5c94066e41bbea108463d59bf45e8889f1472954ca6307d48e2964049065d9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718624966073496172,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ng69p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f7df81e-d372-415e-a2ff-b6d968634f17,},Annotations:map[string]string{io.kubernetes.container.hash: 933086ab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00c968fd255dc050f201358575199ba46f641ea62c82eaba801bd619edd6735d,PodSandboxId:bbc8065b112213be87c70cb6a14d570495f9ae744d718bd2b506d91cf28962dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718624966081970627,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shbhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: a369273f-a5a9-4d41-bacf-f6ba17fecc7f,},Annotations:map[string]string{io.kubernetes.container.hash: 45074f35,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d3931fd77c7bb5d762c4fac82aea53727e94c648ddcde8f559db233fd38fc7f,PodSandboxId:edb730bc446d8faf2cbcb62c29a82dbeee124a3049598907bd22780292a4b12c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718624962282100900,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a82211bd8
705a84cc888ee5f3b987ec,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9977fa88b78c127b41c1daa401e19fe446949b30cb24260553cd3d06b4f24447,PodSandboxId:de516574b62038394a5d52a38e9209e6f871554b210654709b4eb00dc633c2e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718624962230969412,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61ff83ba0d593e9b63e151493dfd41f,},Annotations:map[string]
string{io.kubernetes.container.hash: 3188295c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96225c5dfdfc5e3bac34bad966144bacc8fad8dc72438c99648d25763153e382,PodSandboxId:42490894c6cb3c4beb1c5eba282aa8410127a1e6c6cd46853081b69b42237ae6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718624962268582718,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e84bd303c3421d855ba0c1538f215f1a,},Annotations:map[string]string{io.kubernet
es.container.hash: 9379a720,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b75825922cbbd04ba3592edbc69153ac135cd007fa1e43379e53c7c8e0d24394,PodSandboxId:06a4d3c8369dfe06743b6f9c475a75eb4919563575d526d9ac235dae70718329,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718624962254307900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9f9919d4b1badc17cda6d1885676de,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b47ea14b20b2545c7063c0efde0b1693c228c5b8a0c5468109e340db79cbd18b,PodSandboxId:023f6139f01f76d8e4d9cebe4583ff988bbeba58fed17e97f0079e8833995121,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718624957274482883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ng69p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f7df81e-d372-415e-a2ff-b6d968634f17,},Annotations:map[string]string{io.kubernetes.container.hash: 9330
86ab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbca6f4efef12044d1eda46a9d9a73e2a2f2ce5910f123ee53ca237230019480,PodSandboxId:6d4a98a98ade14890c53d334d31b7a4328e40f7f6bdbb0df593c73c433cc6eb6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718624956519118133,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-shbhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a369273f-a5a9-4d41-bacf-f6ba17fecc7f,},Annotations:map[string]string{io.kubernetes.container.hash: 45074f35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50797d5733f6a349cbb691d979717866beb3e73a560d51d4df857c969b0db3a1,PodSandboxId:528b279658ebc220b45804283b1f0c61642e9c6279106c79c8a900000eef42aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718624956704644233,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-475894,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: d61ff83ba0d593e9b63e151493dfd41f,},Annotations:map[string]string{io.kubernetes.container.hash: 3188295c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7d21c1e0a0850daab8094237a2bed1e59ce3f7d540d76a69c7405f5d603ef40,PodSandboxId:6e798aa6a6823a488eb09a4bbab9451b0c0ba383cb0f7138275cf2ad683f366b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718624956619024371,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-475894,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9f9919d4b1badc17cda6d1885676de,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d99078f029ec2aa0d136fb2fd2494ac10bebfa02fe3f231f166c05a1ec6665,PodSandboxId:f341ed94621e7a1aca316130fed7b36711293ea6c47ce98b3ec5f5a2efc4ba18,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718624956581320129,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-475894,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: e84bd303c3421d855ba0c1538f215f1a,},Annotations:map[string]string{io.kubernetes.container.hash: 9379a720,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fe5cbc7009bd88a153d92ce34fc2267ee234d73b03bf648be4f82592c7a4277,PodSandboxId:de2a5e08441241d2d430bdd4d3215bb340c0c816a54887f1e3636ed716d49b9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718624956350357609,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9a82211bd8705a84cc888ee5f3b987ec,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f465d8a7-889b-4fff-aeb2-1ea69d1a45c9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:49:46 pause-475894 crio[2763]: time="2024-06-17 11:49:46.205601501Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c6e4eae2-30d2-4287-91e7-f2d4c2d07285 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:49:46 pause-475894 crio[2763]: time="2024-06-17 11:49:46.205701112Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c6e4eae2-30d2-4287-91e7-f2d4c2d07285 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:49:46 pause-475894 crio[2763]: time="2024-06-17 11:49:46.207574617Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6399b148-afae-450c-af1a-59fd4a8c6b0a name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:49:46 pause-475894 crio[2763]: time="2024-06-17 11:49:46.208070731Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718624986208041581,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6399b148-afae-450c-af1a-59fd4a8c6b0a name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:49:46 pause-475894 crio[2763]: time="2024-06-17 11:49:46.209054854Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca49ec5f-772b-41d2-b6dc-779de4ccf8d7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:49:46 pause-475894 crio[2763]: time="2024-06-17 11:49:46.209127764Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca49ec5f-772b-41d2-b6dc-779de4ccf8d7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:49:46 pause-475894 crio[2763]: time="2024-06-17 11:49:46.209527588Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:031e0591e01f99311468a0c3e22881b3cf0edd08d8a55a91d19a4315e7ca8d9b,PodSandboxId:c5c94066e41bbea108463d59bf45e8889f1472954ca6307d48e2964049065d9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718624966073496172,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ng69p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f7df81e-d372-415e-a2ff-b6d968634f17,},Annotations:map[string]string{io.kubernetes.container.hash: 933086ab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00c968fd255dc050f201358575199ba46f641ea62c82eaba801bd619edd6735d,PodSandboxId:bbc8065b112213be87c70cb6a14d570495f9ae744d718bd2b506d91cf28962dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718624966081970627,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shbhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: a369273f-a5a9-4d41-bacf-f6ba17fecc7f,},Annotations:map[string]string{io.kubernetes.container.hash: 45074f35,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d3931fd77c7bb5d762c4fac82aea53727e94c648ddcde8f559db233fd38fc7f,PodSandboxId:edb730bc446d8faf2cbcb62c29a82dbeee124a3049598907bd22780292a4b12c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718624962282100900,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a82211bd8
705a84cc888ee5f3b987ec,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9977fa88b78c127b41c1daa401e19fe446949b30cb24260553cd3d06b4f24447,PodSandboxId:de516574b62038394a5d52a38e9209e6f871554b210654709b4eb00dc633c2e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718624962230969412,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61ff83ba0d593e9b63e151493dfd41f,},Annotations:map[string]
string{io.kubernetes.container.hash: 3188295c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96225c5dfdfc5e3bac34bad966144bacc8fad8dc72438c99648d25763153e382,PodSandboxId:42490894c6cb3c4beb1c5eba282aa8410127a1e6c6cd46853081b69b42237ae6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718624962268582718,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e84bd303c3421d855ba0c1538f215f1a,},Annotations:map[string]string{io.kubernet
es.container.hash: 9379a720,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b75825922cbbd04ba3592edbc69153ac135cd007fa1e43379e53c7c8e0d24394,PodSandboxId:06a4d3c8369dfe06743b6f9c475a75eb4919563575d526d9ac235dae70718329,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718624962254307900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9f9919d4b1badc17cda6d1885676de,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b47ea14b20b2545c7063c0efde0b1693c228c5b8a0c5468109e340db79cbd18b,PodSandboxId:023f6139f01f76d8e4d9cebe4583ff988bbeba58fed17e97f0079e8833995121,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718624957274482883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ng69p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f7df81e-d372-415e-a2ff-b6d968634f17,},Annotations:map[string]string{io.kubernetes.container.hash: 9330
86ab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbca6f4efef12044d1eda46a9d9a73e2a2f2ce5910f123ee53ca237230019480,PodSandboxId:6d4a98a98ade14890c53d334d31b7a4328e40f7f6bdbb0df593c73c433cc6eb6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718624956519118133,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-shbhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a369273f-a5a9-4d41-bacf-f6ba17fecc7f,},Annotations:map[string]string{io.kubernetes.container.hash: 45074f35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50797d5733f6a349cbb691d979717866beb3e73a560d51d4df857c969b0db3a1,PodSandboxId:528b279658ebc220b45804283b1f0c61642e9c6279106c79c8a900000eef42aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718624956704644233,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-475894,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: d61ff83ba0d593e9b63e151493dfd41f,},Annotations:map[string]string{io.kubernetes.container.hash: 3188295c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7d21c1e0a0850daab8094237a2bed1e59ce3f7d540d76a69c7405f5d603ef40,PodSandboxId:6e798aa6a6823a488eb09a4bbab9451b0c0ba383cb0f7138275cf2ad683f366b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718624956619024371,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-475894,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9f9919d4b1badc17cda6d1885676de,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d99078f029ec2aa0d136fb2fd2494ac10bebfa02fe3f231f166c05a1ec6665,PodSandboxId:f341ed94621e7a1aca316130fed7b36711293ea6c47ce98b3ec5f5a2efc4ba18,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718624956581320129,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-475894,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: e84bd303c3421d855ba0c1538f215f1a,},Annotations:map[string]string{io.kubernetes.container.hash: 9379a720,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fe5cbc7009bd88a153d92ce34fc2267ee234d73b03bf648be4f82592c7a4277,PodSandboxId:de2a5e08441241d2d430bdd4d3215bb340c0c816a54887f1e3636ed716d49b9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718624956350357609,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9a82211bd8705a84cc888ee5f3b987ec,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ca49ec5f-772b-41d2-b6dc-779de4ccf8d7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:49:46 pause-475894 crio[2763]: time="2024-06-17 11:49:46.258489742Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dc8aa9a4-5c6a-4c8b-9e02-d4c37b9c490c name=/runtime.v1.RuntimeService/Version
	Jun 17 11:49:46 pause-475894 crio[2763]: time="2024-06-17 11:49:46.258650370Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dc8aa9a4-5c6a-4c8b-9e02-d4c37b9c490c name=/runtime.v1.RuntimeService/Version
	Jun 17 11:49:46 pause-475894 crio[2763]: time="2024-06-17 11:49:46.259975064Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1c46c57d-8948-41af-9a4d-b8e2824b756b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:49:46 pause-475894 crio[2763]: time="2024-06-17 11:49:46.260342243Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718624986260319956,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1c46c57d-8948-41af-9a4d-b8e2824b756b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:49:46 pause-475894 crio[2763]: time="2024-06-17 11:49:46.261154996Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37a6a00e-1e11-47a6-aaca-d16be7f3371e name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:49:46 pause-475894 crio[2763]: time="2024-06-17 11:49:46.261241580Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37a6a00e-1e11-47a6-aaca-d16be7f3371e name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:49:46 pause-475894 crio[2763]: time="2024-06-17 11:49:46.261558581Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:031e0591e01f99311468a0c3e22881b3cf0edd08d8a55a91d19a4315e7ca8d9b,PodSandboxId:c5c94066e41bbea108463d59bf45e8889f1472954ca6307d48e2964049065d9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718624966073496172,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ng69p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f7df81e-d372-415e-a2ff-b6d968634f17,},Annotations:map[string]string{io.kubernetes.container.hash: 933086ab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00c968fd255dc050f201358575199ba46f641ea62c82eaba801bd619edd6735d,PodSandboxId:bbc8065b112213be87c70cb6a14d570495f9ae744d718bd2b506d91cf28962dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718624966081970627,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shbhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: a369273f-a5a9-4d41-bacf-f6ba17fecc7f,},Annotations:map[string]string{io.kubernetes.container.hash: 45074f35,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d3931fd77c7bb5d762c4fac82aea53727e94c648ddcde8f559db233fd38fc7f,PodSandboxId:edb730bc446d8faf2cbcb62c29a82dbeee124a3049598907bd22780292a4b12c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718624962282100900,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a82211bd8
705a84cc888ee5f3b987ec,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9977fa88b78c127b41c1daa401e19fe446949b30cb24260553cd3d06b4f24447,PodSandboxId:de516574b62038394a5d52a38e9209e6f871554b210654709b4eb00dc633c2e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718624962230969412,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61ff83ba0d593e9b63e151493dfd41f,},Annotations:map[string]
string{io.kubernetes.container.hash: 3188295c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96225c5dfdfc5e3bac34bad966144bacc8fad8dc72438c99648d25763153e382,PodSandboxId:42490894c6cb3c4beb1c5eba282aa8410127a1e6c6cd46853081b69b42237ae6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718624962268582718,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e84bd303c3421d855ba0c1538f215f1a,},Annotations:map[string]string{io.kubernet
es.container.hash: 9379a720,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b75825922cbbd04ba3592edbc69153ac135cd007fa1e43379e53c7c8e0d24394,PodSandboxId:06a4d3c8369dfe06743b6f9c475a75eb4919563575d526d9ac235dae70718329,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718624962254307900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9f9919d4b1badc17cda6d1885676de,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b47ea14b20b2545c7063c0efde0b1693c228c5b8a0c5468109e340db79cbd18b,PodSandboxId:023f6139f01f76d8e4d9cebe4583ff988bbeba58fed17e97f0079e8833995121,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718624957274482883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ng69p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f7df81e-d372-415e-a2ff-b6d968634f17,},Annotations:map[string]string{io.kubernetes.container.hash: 9330
86ab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbca6f4efef12044d1eda46a9d9a73e2a2f2ce5910f123ee53ca237230019480,PodSandboxId:6d4a98a98ade14890c53d334d31b7a4328e40f7f6bdbb0df593c73c433cc6eb6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718624956519118133,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-shbhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a369273f-a5a9-4d41-bacf-f6ba17fecc7f,},Annotations:map[string]string{io.kubernetes.container.hash: 45074f35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50797d5733f6a349cbb691d979717866beb3e73a560d51d4df857c969b0db3a1,PodSandboxId:528b279658ebc220b45804283b1f0c61642e9c6279106c79c8a900000eef42aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718624956704644233,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-475894,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: d61ff83ba0d593e9b63e151493dfd41f,},Annotations:map[string]string{io.kubernetes.container.hash: 3188295c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7d21c1e0a0850daab8094237a2bed1e59ce3f7d540d76a69c7405f5d603ef40,PodSandboxId:6e798aa6a6823a488eb09a4bbab9451b0c0ba383cb0f7138275cf2ad683f366b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718624956619024371,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-475894,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9f9919d4b1badc17cda6d1885676de,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d99078f029ec2aa0d136fb2fd2494ac10bebfa02fe3f231f166c05a1ec6665,PodSandboxId:f341ed94621e7a1aca316130fed7b36711293ea6c47ce98b3ec5f5a2efc4ba18,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718624956581320129,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-475894,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: e84bd303c3421d855ba0c1538f215f1a,},Annotations:map[string]string{io.kubernetes.container.hash: 9379a720,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fe5cbc7009bd88a153d92ce34fc2267ee234d73b03bf648be4f82592c7a4277,PodSandboxId:de2a5e08441241d2d430bdd4d3215bb340c0c816a54887f1e3636ed716d49b9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718624956350357609,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9a82211bd8705a84cc888ee5f3b987ec,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=37a6a00e-1e11-47a6-aaca-d16be7f3371e name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:49:46 pause-475894 crio[2763]: time="2024-06-17 11:49:46.307607612Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=52e59602-f8b2-44d4-ac7b-134e9a050dd7 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:49:46 pause-475894 crio[2763]: time="2024-06-17 11:49:46.307685427Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=52e59602-f8b2-44d4-ac7b-134e9a050dd7 name=/runtime.v1.RuntimeService/Version
	Jun 17 11:49:46 pause-475894 crio[2763]: time="2024-06-17 11:49:46.310390622Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8fddf5f5-7f7e-45e4-bef6-418633725352 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:49:46 pause-475894 crio[2763]: time="2024-06-17 11:49:46.311011628Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718624986310978072,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8fddf5f5-7f7e-45e4-bef6-418633725352 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 11:49:46 pause-475894 crio[2763]: time="2024-06-17 11:49:46.311794242Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c8605670-b583-466f-a74d-717755503d3c name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:49:46 pause-475894 crio[2763]: time="2024-06-17 11:49:46.311874422Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c8605670-b583-466f-a74d-717755503d3c name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 11:49:46 pause-475894 crio[2763]: time="2024-06-17 11:49:46.312111940Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:031e0591e01f99311468a0c3e22881b3cf0edd08d8a55a91d19a4315e7ca8d9b,PodSandboxId:c5c94066e41bbea108463d59bf45e8889f1472954ca6307d48e2964049065d9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718624966073496172,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ng69p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f7df81e-d372-415e-a2ff-b6d968634f17,},Annotations:map[string]string{io.kubernetes.container.hash: 933086ab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00c968fd255dc050f201358575199ba46f641ea62c82eaba801bd619edd6735d,PodSandboxId:bbc8065b112213be87c70cb6a14d570495f9ae744d718bd2b506d91cf28962dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718624966081970627,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shbhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: a369273f-a5a9-4d41-bacf-f6ba17fecc7f,},Annotations:map[string]string{io.kubernetes.container.hash: 45074f35,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d3931fd77c7bb5d762c4fac82aea53727e94c648ddcde8f559db233fd38fc7f,PodSandboxId:edb730bc446d8faf2cbcb62c29a82dbeee124a3049598907bd22780292a4b12c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718624962282100900,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a82211bd8
705a84cc888ee5f3b987ec,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9977fa88b78c127b41c1daa401e19fe446949b30cb24260553cd3d06b4f24447,PodSandboxId:de516574b62038394a5d52a38e9209e6f871554b210654709b4eb00dc633c2e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718624962230969412,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61ff83ba0d593e9b63e151493dfd41f,},Annotations:map[string]
string{io.kubernetes.container.hash: 3188295c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96225c5dfdfc5e3bac34bad966144bacc8fad8dc72438c99648d25763153e382,PodSandboxId:42490894c6cb3c4beb1c5eba282aa8410127a1e6c6cd46853081b69b42237ae6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718624962268582718,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e84bd303c3421d855ba0c1538f215f1a,},Annotations:map[string]string{io.kubernet
es.container.hash: 9379a720,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b75825922cbbd04ba3592edbc69153ac135cd007fa1e43379e53c7c8e0d24394,PodSandboxId:06a4d3c8369dfe06743b6f9c475a75eb4919563575d526d9ac235dae70718329,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718624962254307900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9f9919d4b1badc17cda6d1885676de,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b47ea14b20b2545c7063c0efde0b1693c228c5b8a0c5468109e340db79cbd18b,PodSandboxId:023f6139f01f76d8e4d9cebe4583ff988bbeba58fed17e97f0079e8833995121,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718624957274482883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ng69p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f7df81e-d372-415e-a2ff-b6d968634f17,},Annotations:map[string]string{io.kubernetes.container.hash: 9330
86ab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbca6f4efef12044d1eda46a9d9a73e2a2f2ce5910f123ee53ca237230019480,PodSandboxId:6d4a98a98ade14890c53d334d31b7a4328e40f7f6bdbb0df593c73c433cc6eb6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718624956519118133,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-shbhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a369273f-a5a9-4d41-bacf-f6ba17fecc7f,},Annotations:map[string]string{io.kubernetes.container.hash: 45074f35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50797d5733f6a349cbb691d979717866beb3e73a560d51d4df857c969b0db3a1,PodSandboxId:528b279658ebc220b45804283b1f0c61642e9c6279106c79c8a900000eef42aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718624956704644233,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-475894,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: d61ff83ba0d593e9b63e151493dfd41f,},Annotations:map[string]string{io.kubernetes.container.hash: 3188295c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7d21c1e0a0850daab8094237a2bed1e59ce3f7d540d76a69c7405f5d603ef40,PodSandboxId:6e798aa6a6823a488eb09a4bbab9451b0c0ba383cb0f7138275cf2ad683f366b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718624956619024371,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-475894,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9f9919d4b1badc17cda6d1885676de,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d99078f029ec2aa0d136fb2fd2494ac10bebfa02fe3f231f166c05a1ec6665,PodSandboxId:f341ed94621e7a1aca316130fed7b36711293ea6c47ce98b3ec5f5a2efc4ba18,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718624956581320129,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-475894,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: e84bd303c3421d855ba0c1538f215f1a,},Annotations:map[string]string{io.kubernetes.container.hash: 9379a720,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fe5cbc7009bd88a153d92ce34fc2267ee234d73b03bf648be4f82592c7a4277,PodSandboxId:de2a5e08441241d2d430bdd4d3215bb340c0c816a54887f1e3636ed716d49b9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718624956350357609,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-475894,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9a82211bd8705a84cc888ee5f3b987ec,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c8605670-b583-466f-a74d-717755503d3c name=/runtime.v1.RuntimeService/ListContainers
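
The crio entries above are gRPC calls against the CRI-O socket: /runtime.v1.RuntimeService/Version, /runtime.v1.ImageService/ImageFsInfo and /runtime.v1.RuntimeService/ListContainers, polled repeatedly while the logs were being collected. Purely as an illustration (not part of the test output), here is a minimal Go sketch of issuing the same Version and ListContainers calls. The socket path unix:///var/run/crio/crio.sock is taken from the kubeadm cri-socket annotation shown under "describe nodes" below; the use of google.golang.org/grpc and k8s.io/cri-api is an assumption, not something the report states.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed socket path, matching the kubeadm cri-socket annotation above.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same RPC as /runtime.v1.RuntimeService/Version in the log.
	ver, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println("runtime:", ver.RuntimeName, ver.RuntimeVersion)

	// Same RPC as /runtime.v1.RuntimeService/ListContainers; an empty
	// filter returns every container, running and exited alike.
	list, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{},
	})
	if err != nil {
		panic(err)
	}
	for _, c := range list.Containers {
		fmt.Printf("%s  %s  attempt=%d  %v\n",
			c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}
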
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	00c968fd255dc       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   20 seconds ago      Running             kube-proxy                2                   bbc8065b11221       kube-proxy-shbhn
	031e0591e01f9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   20 seconds ago      Running             coredns                   2                   c5c94066e41bb       coredns-7db6d8ff4d-ng69p
	5d3931fd77c7b       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   24 seconds ago      Running             kube-scheduler            2                   edb730bc446d8       kube-scheduler-pause-475894
	96225c5dfdfc5       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   24 seconds ago      Running             kube-apiserver            2                   42490894c6cb3       kube-apiserver-pause-475894
	b75825922cbbd       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   24 seconds ago      Running             kube-controller-manager   2                   06a4d3c8369df       kube-controller-manager-pause-475894
	9977fa88b78c1       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   24 seconds ago      Running             etcd                      2                   de516574b6203       etcd-pause-475894
	b47ea14b20b25       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   29 seconds ago      Exited              coredns                   1                   023f6139f01f7       coredns-7db6d8ff4d-ng69p
	50797d5733f6a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   29 seconds ago      Exited              etcd                      1                   528b279658ebc       etcd-pause-475894
	e7d21c1e0a085       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   29 seconds ago      Exited              kube-controller-manager   1                   6e798aa6a6823       kube-controller-manager-pause-475894
	35d99078f029e       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   29 seconds ago      Exited              kube-apiserver            1                   f341ed94621e7       kube-apiserver-pause-475894
	cbca6f4efef12       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   29 seconds ago      Exited              kube-proxy                1                   6d4a98a98ade1       kube-proxy-shbhn
	4fe5cbc7009bd       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   30 seconds ago      Exited              kube-scheduler            1                   de2a5e0844124       kube-scheduler-pause-475894
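
The truncated container IDs in this table correspond to the full IDs in the ListContainers responses above; for example, 00c968fd255dc is the running kube-proxy container 00c968fd255dc050f201358575199ba46f641ea62c82eaba801bd619edd6735d. As a sketch under the same assumptions as the previous example (CRI-O socket path, k8s.io/cri-api client), the detail behind one row can be fetched with the ContainerStatus RPC:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Full ID of the running kube-proxy container from the crio log above.
	const id = "00c968fd255dc050f201358575199ba46f641ea62c82eaba801bd619edd6735d"

	resp, err := client.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{
		ContainerId: id,
	})
	if err != nil {
		panic(err)
	}
	s := resp.Status
	fmt.Println("name:   ", s.Metadata.Name)
	fmt.Println("state:  ", s.State)
	fmt.Println("started:", time.Unix(0, s.StartedAt))
	fmt.Println("image:  ", s.Image.Image)
}
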
	
	
	==> coredns [031e0591e01f99311468a0c3e22881b3cf0edd08d8a55a91d19a4315e7ca8d9b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47540 - 39994 "HINFO IN 4321242531974080264.2585150755911692030. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010581765s
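
The [INFO] line above is CoreDNS's startup self-check: a randomized HINFO query it sends to itself over UDP on port 53. As an illustration only, the following Go sketch sends a query to the same listener. It assumes the github.com/miekg/dns client library (the DNS library CoreDNS is built on) and that it runs somewhere 127.0.0.1:53 reaches this CoreDNS instance (e.g. inside the pod's network namespace); the query name kubernetes.default.svc.cluster.local. is an illustrative choice, not taken from the test.

package main

import (
	"fmt"

	"github.com/miekg/dns"
)

func main() {
	// Build a simple A query; miekg/dns requires a fully qualified name.
	m := new(dns.Msg)
	m.SetQuestion("kubernetes.default.svc.cluster.local.", dns.TypeA)

	// Exchange sends the query over UDP and waits for the response.
	c := new(dns.Client)
	resp, rtt, err := c.Exchange(m, "127.0.0.1:53")
	if err != nil {
		panic(err)
	}
	fmt.Println("rcode:", dns.RcodeToString[resp.Rcode], "rtt:", rtt)
	for _, rr := range resp.Answer {
		fmt.Println(rr)
	}
}
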
	
	
	==> coredns [b47ea14b20b2545c7063c0efde0b1693c228c5b8a0c5468109e340db79cbd18b] <==
	
	
	==> describe nodes <==
	Name:               pause-475894
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-475894
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6
	                    minikube.k8s.io/name=pause-475894
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_17T11_48_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jun 2024 11:48:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-475894
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jun 2024 11:49:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jun 2024 11:49:25 +0000   Mon, 17 Jun 2024 11:48:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jun 2024 11:49:25 +0000   Mon, 17 Jun 2024 11:48:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jun 2024 11:49:25 +0000   Mon, 17 Jun 2024 11:48:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jun 2024 11:49:25 +0000   Mon, 17 Jun 2024 11:48:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.122
	  Hostname:    pause-475894
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 73c2c9cb01d547cbb39b5000396f7b91
	  System UUID:                73c2c9cb-01d5-47cb-b39b-5000396f7b91
	  Boot ID:                    530886ed-e33c-4f3d-bba6-2b05da3377ed
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-ng69p                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     67s
	  kube-system                 etcd-pause-475894                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         83s
	  kube-system                 kube-apiserver-pause-475894             250m (12%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-controller-manager-pause-475894    200m (10%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-proxy-shbhn                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-scheduler-pause-475894             100m (5%)     0 (0%)      0 (0%)           0 (0%)         84s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 64s                kube-proxy       
	  Normal  Starting                 20s                kube-proxy       
	  Normal  Starting                 90s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  90s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  89s (x8 over 90s)  kubelet          Node pause-475894 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    89s (x8 over 90s)  kubelet          Node pause-475894 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     89s (x7 over 90s)  kubelet          Node pause-475894 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    83s                kubelet          Node pause-475894 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  83s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  83s                kubelet          Node pause-475894 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     83s                kubelet          Node pause-475894 status is now: NodeHasSufficientPID
	  Normal  Starting                 83s                kubelet          Starting kubelet.
	  Normal  NodeReady                82s                kubelet          Node pause-475894 status is now: NodeReady
	  Normal  RegisteredNode           70s                node-controller  Node pause-475894 event: Registered Node pause-475894 in Controller
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)  kubelet          Node pause-475894 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)  kubelet          Node pause-475894 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)  kubelet          Node pause-475894 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8s                 node-controller  Node pause-475894 event: Registered Node pause-475894 in Controller
	
	
	==> dmesg <==
	[Jun17 11:48] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.071506] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064855] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.222619] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.147372] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.396132] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.809329] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +0.058867] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.015204] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +0.090521] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.510106] systemd-fstab-generator[1294]: Ignoring "noauto" option for root device
	[  +0.083381] kauditd_printk_skb: 69 callbacks suppressed
	[ +14.385974] systemd-fstab-generator[1516]: Ignoring "noauto" option for root device
	[  +0.109515] kauditd_printk_skb: 21 callbacks suppressed
	[ +13.432397] kauditd_printk_skb: 67 callbacks suppressed
	[Jun17 11:49] systemd-fstab-generator[2214]: Ignoring "noauto" option for root device
	[  +0.201260] systemd-fstab-generator[2255]: Ignoring "noauto" option for root device
	[  +0.412117] systemd-fstab-generator[2460]: Ignoring "noauto" option for root device
	[  +0.282970] systemd-fstab-generator[2586]: Ignoring "noauto" option for root device
	[  +0.517199] systemd-fstab-generator[2720]: Ignoring "noauto" option for root device
	[  +1.512467] systemd-fstab-generator[3323]: Ignoring "noauto" option for root device
	[  +2.600293] systemd-fstab-generator[3449]: Ignoring "noauto" option for root device
	[  +0.085052] kauditd_printk_skb: 244 callbacks suppressed
	[ +16.806012] kauditd_printk_skb: 50 callbacks suppressed
	[  +1.939309] systemd-fstab-generator[3893]: Ignoring "noauto" option for root device
	
	
	==> etcd [50797d5733f6a349cbb691d979717866beb3e73a560d51d4df857c969b0db3a1] <==
	{"level":"warn","ts":"2024-06-17T11:49:17.553814Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-06-17T11:49:17.553901Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.50.122:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.50.122:2380","--initial-cluster=pause-475894=https://192.168.50.122:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.50.122:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.50.122:2380","--name=pause-475894","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trus
ted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-06-17T11:49:17.555151Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-06-17T11:49:17.55521Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-06-17T11:49:17.555267Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.50.122:2380"]}
	{"level":"info","ts":"2024-06-17T11:49:17.555317Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-17T11:49:17.559158Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.122:2379"]}
	{"level":"info","ts":"2024-06-17T11:49:17.559288Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-475894","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.50.122:2380"],"listen-peer-urls":["https://192.168.50.122:2380"],"advertise-client-urls":["https://192.168.50.122:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.122:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cl
uster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	
	
	==> etcd [9977fa88b78c127b41c1daa401e19fe446949b30cb24260553cd3d06b4f24447] <==
	{"level":"info","ts":"2024-06-17T11:49:22.661315Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-17T11:49:22.66133Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-17T11:49:22.661799Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f5149f998c21bd4e switched to configuration voters=(17659915520656391502)"}
	{"level":"info","ts":"2024-06-17T11:49:22.661905Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8bbd705bc3c15469","local-member-id":"f5149f998c21bd4e","added-peer-id":"f5149f998c21bd4e","added-peer-peer-urls":["https://192.168.50.122:2380"]}
	{"level":"info","ts":"2024-06-17T11:49:22.662037Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8bbd705bc3c15469","local-member-id":"f5149f998c21bd4e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-17T11:49:22.66209Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-17T11:49:22.665776Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-17T11:49:22.666065Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f5149f998c21bd4e","initial-advertise-peer-urls":["https://192.168.50.122:2380"],"listen-peer-urls":["https://192.168.50.122:2380"],"advertise-client-urls":["https://192.168.50.122:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.122:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-17T11:49:22.666114Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-17T11:49:22.666348Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.122:2380"}
	{"level":"info","ts":"2024-06-17T11:49:22.666389Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.122:2380"}
	{"level":"info","ts":"2024-06-17T11:49:24.101504Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f5149f998c21bd4e is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-17T11:49:24.10161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f5149f998c21bd4e became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-17T11:49:24.101679Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f5149f998c21bd4e received MsgPreVoteResp from f5149f998c21bd4e at term 2"}
	{"level":"info","ts":"2024-06-17T11:49:24.10173Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f5149f998c21bd4e became candidate at term 3"}
	{"level":"info","ts":"2024-06-17T11:49:24.101754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f5149f998c21bd4e received MsgVoteResp from f5149f998c21bd4e at term 3"}
	{"level":"info","ts":"2024-06-17T11:49:24.101787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f5149f998c21bd4e became leader at term 3"}
	{"level":"info","ts":"2024-06-17T11:49:24.101821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f5149f998c21bd4e elected leader f5149f998c21bd4e at term 3"}
	{"level":"info","ts":"2024-06-17T11:49:24.113682Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f5149f998c21bd4e","local-member-attributes":"{Name:pause-475894 ClientURLs:[https://192.168.50.122:2379]}","request-path":"/0/members/f5149f998c21bd4e/attributes","cluster-id":"8bbd705bc3c15469","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-17T11:49:24.113884Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-17T11:49:24.114274Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-17T11:49:24.118726Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.122:2379"}
	{"level":"info","ts":"2024-06-17T11:49:24.120506Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-17T11:49:24.132503Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-17T11:49:24.132597Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 11:49:46 up 2 min,  0 users,  load average: 1.99, 0.75, 0.27
	Linux pause-475894 5.10.207 #1 SMP Tue Jun 11 00:16:05 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [35d99078f029ec2aa0d136fb2fd2494ac10bebfa02fe3f231f166c05a1ec6665] <==
	I0617 11:49:17.197668       1 options.go:221] external host was not specified, using 192.168.50.122
	I0617 11:49:17.198616       1 server.go:148] Version: v1.30.1
	I0617 11:49:17.198699       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [96225c5dfdfc5e3bac34bad966144bacc8fad8dc72438c99648d25763153e382] <==
	I0617 11:49:25.643534       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0617 11:49:25.659828       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0617 11:49:25.659917       1 policy_source.go:224] refreshing policies
	I0617 11:49:25.659854       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0617 11:49:25.676085       1 shared_informer.go:320] Caches are synced for configmaps
	I0617 11:49:25.676203       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0617 11:49:25.679940       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0617 11:49:25.680943       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0617 11:49:25.680978       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0617 11:49:25.685839       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0617 11:49:25.687014       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0617 11:49:25.699031       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0617 11:49:25.699113       1 aggregator.go:165] initial CRD sync complete...
	I0617 11:49:25.699213       1 autoregister_controller.go:141] Starting autoregister controller
	I0617 11:49:25.699237       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0617 11:49:25.699259       1 cache.go:39] Caches are synced for autoregister controller
	I0617 11:49:25.714390       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0617 11:49:26.483944       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0617 11:49:27.055081       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0617 11:49:27.074495       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0617 11:49:27.118640       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0617 11:49:27.154504       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0617 11:49:27.169137       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0617 11:49:38.374406       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0617 11:49:38.449007       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [b75825922cbbd04ba3592edbc69153ac135cd007fa1e43379e53c7c8e0d24394] <==
	I0617 11:49:38.393576       1 shared_informer.go:320] Caches are synced for HPA
	I0617 11:49:38.394332       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0617 11:49:38.398624       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0617 11:49:38.406797       1 shared_informer.go:320] Caches are synced for deployment
	I0617 11:49:38.416190       1 shared_informer.go:320] Caches are synced for PVC protection
	I0617 11:49:38.419595       1 shared_informer.go:320] Caches are synced for persistent volume
	I0617 11:49:38.420855       1 shared_informer.go:320] Caches are synced for crt configmap
	I0617 11:49:38.423725       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0617 11:49:38.423831       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0617 11:49:38.423957       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0617 11:49:38.424036       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0617 11:49:38.427075       1 shared_informer.go:320] Caches are synced for job
	I0617 11:49:38.438987       1 shared_informer.go:320] Caches are synced for endpoint
	I0617 11:49:38.445558       1 shared_informer.go:320] Caches are synced for stateful set
	I0617 11:49:38.458021       1 shared_informer.go:320] Caches are synced for taint
	I0617 11:49:38.458167       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0617 11:49:38.458273       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-475894"
	I0617 11:49:38.458330       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0617 11:49:38.531212       1 shared_informer.go:320] Caches are synced for disruption
	I0617 11:49:38.593491       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0617 11:49:38.600483       1 shared_informer.go:320] Caches are synced for resource quota
	I0617 11:49:38.612830       1 shared_informer.go:320] Caches are synced for resource quota
	I0617 11:49:39.004165       1 shared_informer.go:320] Caches are synced for garbage collector
	I0617 11:49:39.004216       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0617 11:49:39.044865       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [e7d21c1e0a0850daab8094237a2bed1e59ce3f7d540d76a69c7405f5d603ef40] <==
	
	
	==> kube-proxy [00c968fd255dc050f201358575199ba46f641ea62c82eaba801bd619edd6735d] <==
	I0617 11:49:26.265150       1 server_linux.go:69] "Using iptables proxy"
	I0617 11:49:26.298638       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.122"]
	I0617 11:49:26.353075       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0617 11:49:26.353127       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0617 11:49:26.353143       1 server_linux.go:165] "Using iptables Proxier"
	I0617 11:49:26.355944       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0617 11:49:26.357037       1 server.go:872] "Version info" version="v1.30.1"
	I0617 11:49:26.357087       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0617 11:49:26.359037       1 config.go:192] "Starting service config controller"
	I0617 11:49:26.359074       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0617 11:49:26.359095       1 config.go:101] "Starting endpoint slice config controller"
	I0617 11:49:26.359099       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0617 11:49:26.359553       1 config.go:319] "Starting node config controller"
	I0617 11:49:26.359577       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0617 11:49:26.459910       1 shared_informer.go:320] Caches are synced for node config
	I0617 11:49:26.459956       1 shared_informer.go:320] Caches are synced for service config
	I0617 11:49:26.460009       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [cbca6f4efef12044d1eda46a9d9a73e2a2f2ce5910f123ee53ca237230019480] <==
	
	
	==> kube-scheduler [4fe5cbc7009bd88a153d92ce34fc2267ee234d73b03bf648be4f82592c7a4277] <==
	
	
	==> kube-scheduler [5d3931fd77c7bb5d762c4fac82aea53727e94c648ddcde8f559db233fd38fc7f] <==
	I0617 11:49:23.971680       1 serving.go:380] Generated self-signed cert in-memory
	W0617 11:49:25.593898       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0617 11:49:25.593983       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0617 11:49:25.594011       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0617 11:49:25.594040       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0617 11:49:25.632599       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0617 11:49:25.632724       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0617 11:49:25.634487       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0617 11:49:25.634619       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0617 11:49:25.634504       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0617 11:49:25.634586       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0617 11:49:25.735752       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 17 11:49:21 pause-475894 kubelet[3456]: I0617 11:49:21.945488    3456 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3d9f9919d4b1badc17cda6d1885676de-k8s-certs\") pod \"kube-controller-manager-pause-475894\" (UID: \"3d9f9919d4b1badc17cda6d1885676de\") " pod="kube-system/kube-controller-manager-pause-475894"
	Jun 17 11:49:21 pause-475894 kubelet[3456]: I0617 11:49:21.945515    3456 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3d9f9919d4b1badc17cda6d1885676de-kubeconfig\") pod \"kube-controller-manager-pause-475894\" (UID: \"3d9f9919d4b1badc17cda6d1885676de\") " pod="kube-system/kube-controller-manager-pause-475894"
	Jun 17 11:49:21 pause-475894 kubelet[3456]: E0617 11:49:21.945681    3456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-475894?timeout=10s\": dial tcp 192.168.50.122:8443: connect: connection refused" interval="400ms"
	Jun 17 11:49:22 pause-475894 kubelet[3456]: I0617 11:49:22.053796    3456 kubelet_node_status.go:73] "Attempting to register node" node="pause-475894"
	Jun 17 11:49:22 pause-475894 kubelet[3456]: E0617 11:49:22.054700    3456 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.122:8443: connect: connection refused" node="pause-475894"
	Jun 17 11:49:22 pause-475894 kubelet[3456]: I0617 11:49:22.208778    3456 scope.go:117] "RemoveContainer" containerID="50797d5733f6a349cbb691d979717866beb3e73a560d51d4df857c969b0db3a1"
	Jun 17 11:49:22 pause-475894 kubelet[3456]: I0617 11:49:22.211278    3456 scope.go:117] "RemoveContainer" containerID="e7d21c1e0a0850daab8094237a2bed1e59ce3f7d540d76a69c7405f5d603ef40"
	Jun 17 11:49:22 pause-475894 kubelet[3456]: I0617 11:49:22.211600    3456 scope.go:117] "RemoveContainer" containerID="35d99078f029ec2aa0d136fb2fd2494ac10bebfa02fe3f231f166c05a1ec6665"
	Jun 17 11:49:22 pause-475894 kubelet[3456]: I0617 11:49:22.211932    3456 scope.go:117] "RemoveContainer" containerID="4fe5cbc7009bd88a153d92ce34fc2267ee234d73b03bf648be4f82592c7a4277"
	Jun 17 11:49:22 pause-475894 kubelet[3456]: E0617 11:49:22.347184    3456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-475894?timeout=10s\": dial tcp 192.168.50.122:8443: connect: connection refused" interval="800ms"
	Jun 17 11:49:22 pause-475894 kubelet[3456]: I0617 11:49:22.456374    3456 kubelet_node_status.go:73] "Attempting to register node" node="pause-475894"
	Jun 17 11:49:22 pause-475894 kubelet[3456]: E0617 11:49:22.457045    3456 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.122:8443: connect: connection refused" node="pause-475894"
	Jun 17 11:49:23 pause-475894 kubelet[3456]: I0617 11:49:23.259124    3456 kubelet_node_status.go:73] "Attempting to register node" node="pause-475894"
	Jun 17 11:49:25 pause-475894 kubelet[3456]: I0617 11:49:25.670718    3456 kubelet_node_status.go:112] "Node was previously registered" node="pause-475894"
	Jun 17 11:49:25 pause-475894 kubelet[3456]: I0617 11:49:25.671300    3456 kubelet_node_status.go:76] "Successfully registered node" node="pause-475894"
	Jun 17 11:49:25 pause-475894 kubelet[3456]: I0617 11:49:25.672973    3456 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jun 17 11:49:25 pause-475894 kubelet[3456]: I0617 11:49:25.674532    3456 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 17 11:49:25 pause-475894 kubelet[3456]: I0617 11:49:25.729906    3456 apiserver.go:52] "Watching apiserver"
	Jun 17 11:49:25 pause-475894 kubelet[3456]: I0617 11:49:25.732895    3456 topology_manager.go:215] "Topology Admit Handler" podUID="a369273f-a5a9-4d41-bacf-f6ba17fecc7f" podNamespace="kube-system" podName="kube-proxy-shbhn"
	Jun 17 11:49:25 pause-475894 kubelet[3456]: I0617 11:49:25.733118    3456 topology_manager.go:215] "Topology Admit Handler" podUID="1f7df81e-d372-415e-a2ff-b6d968634f17" podNamespace="kube-system" podName="coredns-7db6d8ff4d-ng69p"
	Jun 17 11:49:25 pause-475894 kubelet[3456]: I0617 11:49:25.740120    3456 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jun 17 11:49:25 pause-475894 kubelet[3456]: I0617 11:49:25.806489    3456 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a369273f-a5a9-4d41-bacf-f6ba17fecc7f-lib-modules\") pod \"kube-proxy-shbhn\" (UID: \"a369273f-a5a9-4d41-bacf-f6ba17fecc7f\") " pod="kube-system/kube-proxy-shbhn"
	Jun 17 11:49:25 pause-475894 kubelet[3456]: I0617 11:49:25.806542    3456 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a369273f-a5a9-4d41-bacf-f6ba17fecc7f-xtables-lock\") pod \"kube-proxy-shbhn\" (UID: \"a369273f-a5a9-4d41-bacf-f6ba17fecc7f\") " pod="kube-system/kube-proxy-shbhn"
	Jun 17 11:49:26 pause-475894 kubelet[3456]: I0617 11:49:26.034160    3456 scope.go:117] "RemoveContainer" containerID="b47ea14b20b2545c7063c0efde0b1693c228c5b8a0c5468109e340db79cbd18b"
	Jun 17 11:49:26 pause-475894 kubelet[3456]: I0617 11:49:26.034407    3456 scope.go:117] "RemoveContainer" containerID="cbca6f4efef12044d1eda46a9d9a73e2a2f2ce5910f123ee53ca237230019480"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-475894 -n pause-475894
helpers_test.go:261: (dbg) Run:  kubectl --context pause-475894 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (64.26s)
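For manual triage, the post-mortem above can be reproduced outside the test harness; a minimal sketch, assuming the pause-475894 profile and the kvm2/crio settings from this report are still present on the host (the commands below are ordinary minikube/kubectl usage, not taken from the test itself):

	# repeat the second start that the test exercises, with verbose client logging
	out/minikube-linux-amd64 start -p pause-475894 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
	# inspect the node and the exited kube-proxy/kube-scheduler containers seen above
	kubectl --context pause-475894 describe node pause-475894
	out/minikube-linux-amd64 -p pause-475894 ssh "sudo crictl ps -a"
	# capture the full log bundle to a file for comparison with this report
	out/minikube-linux-amd64 logs -p pause-475894 --file=pause-475894-logs.txt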

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (281.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-003661 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-003661 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m41.085822902s)

                                                
                                                
-- stdout --
	* [old-k8s-version-003661] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19084
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-003661" primary control-plane node in "old-k8s-version-003661" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 11:51:58.819939  161767 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:51:58.820079  161767 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:51:58.820089  161767 out.go:304] Setting ErrFile to fd 2...
	I0617 11:51:58.820093  161767 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:51:58.820758  161767 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:51:58.821563  161767 out.go:298] Setting JSON to false
	I0617 11:51:58.822556  161767 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":5666,"bootTime":1718619453,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0617 11:51:58.822613  161767 start.go:139] virtualization: kvm guest
	I0617 11:51:58.824831  161767 out.go:177] * [old-k8s-version-003661] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0617 11:51:58.826595  161767 notify.go:220] Checking for updates...
	I0617 11:51:58.826610  161767 out.go:177]   - MINIKUBE_LOCATION=19084
	I0617 11:51:58.828024  161767 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 11:51:58.829480  161767 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 11:51:58.831014  161767 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 11:51:58.832371  161767 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0617 11:51:58.833548  161767 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 11:51:58.835181  161767 config.go:182] Loaded profile config "cert-expiration-514753": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:51:58.835327  161767 config.go:182] Loaded profile config "kubernetes-upgrade-717156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0617 11:51:58.835428  161767 config.go:182] Loaded profile config "stopped-upgrade-066761": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0617 11:51:58.835563  161767 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 11:51:58.871063  161767 out.go:177] * Using the kvm2 driver based on user configuration
	I0617 11:51:58.872429  161767 start.go:297] selected driver: kvm2
	I0617 11:51:58.872444  161767 start.go:901] validating driver "kvm2" against <nil>
	I0617 11:51:58.872455  161767 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 11:51:58.873213  161767 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:51:58.873279  161767 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19084-112967/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0617 11:51:58.888435  161767 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0617 11:51:58.888488  161767 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 11:51:58.888717  161767 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 11:51:58.888809  161767 cni.go:84] Creating CNI manager for ""
	I0617 11:51:58.888827  161767 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 11:51:58.888839  161767 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0617 11:51:58.888904  161767 start.go:340] cluster config:
	{Name:old-k8s-version-003661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-003661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:51:58.889017  161767 iso.go:125] acquiring lock: {Name:mk4a199ad46ed9ee04de7b54caf7cc64218fe80c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:51:58.890877  161767 out.go:177] * Starting "old-k8s-version-003661" primary control-plane node in "old-k8s-version-003661" cluster
	I0617 11:51:58.892022  161767 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0617 11:51:58.892068  161767 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0617 11:51:58.892082  161767 cache.go:56] Caching tarball of preloaded images
	I0617 11:51:58.892157  161767 preload.go:173] Found /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0617 11:51:58.892169  161767 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0617 11:51:58.892264  161767 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/config.json ...
	I0617 11:51:58.892287  161767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/config.json: {Name:mkbc177614f5f50bc471d57520b3134dfdc8af9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:51:58.892439  161767 start.go:360] acquireMachinesLock for old-k8s-version-003661: {Name:mk519b8956d160a9d2b042f25b899a5ee0efa72e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 11:52:11.812605  161767 start.go:364] duration metric: took 12.92013885s to acquireMachinesLock for "old-k8s-version-003661"
	I0617 11:52:11.812678  161767 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-003661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-003661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 11:52:11.812803  161767 start.go:125] createHost starting for "" (driver="kvm2")
	I0617 11:52:11.847982  161767 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0617 11:52:11.848210  161767 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:52:11.848276  161767 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:52:11.863986  161767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41189
	I0617 11:52:11.864441  161767 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:52:11.864997  161767 main.go:141] libmachine: Using API Version  1
	I0617 11:52:11.865023  161767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:52:11.865349  161767 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:52:11.865545  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetMachineName
	I0617 11:52:11.865735  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 11:52:11.865904  161767 start.go:159] libmachine.API.Create for "old-k8s-version-003661" (driver="kvm2")
	I0617 11:52:11.865938  161767 client.go:168] LocalClient.Create starting
	I0617 11:52:11.865971  161767 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem
	I0617 11:52:11.866002  161767 main.go:141] libmachine: Decoding PEM data...
	I0617 11:52:11.866029  161767 main.go:141] libmachine: Parsing certificate...
	I0617 11:52:11.866084  161767 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem
	I0617 11:52:11.866106  161767 main.go:141] libmachine: Decoding PEM data...
	I0617 11:52:11.866117  161767 main.go:141] libmachine: Parsing certificate...
	I0617 11:52:11.866130  161767 main.go:141] libmachine: Running pre-create checks...
	I0617 11:52:11.866140  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .PreCreateCheck
	I0617 11:52:11.866470  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetConfigRaw
	I0617 11:52:11.866848  161767 main.go:141] libmachine: Creating machine...
	I0617 11:52:11.866862  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .Create
	I0617 11:52:11.867081  161767 main.go:141] libmachine: (old-k8s-version-003661) Creating KVM machine...
	I0617 11:52:11.868516  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | found existing default KVM network
	I0617 11:52:11.870069  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 11:52:11.869860  161879 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b1:ae:a2} reservation:<nil>}
	I0617 11:52:11.871035  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 11:52:11.870929  161879 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:f6:36:2e} reservation:<nil>}
	I0617 11:52:11.872387  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 11:52:11.872295  161879 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003ae170}
	I0617 11:52:11.872415  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | created network xml: 
	I0617 11:52:11.872426  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | <network>
	I0617 11:52:11.872435  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG |   <name>mk-old-k8s-version-003661</name>
	I0617 11:52:11.872450  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG |   <dns enable='no'/>
	I0617 11:52:11.872463  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG |   
	I0617 11:52:11.872478  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0617 11:52:11.872489  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG |     <dhcp>
	I0617 11:52:11.872507  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0617 11:52:11.872520  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG |     </dhcp>
	I0617 11:52:11.872529  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG |   </ip>
	I0617 11:52:11.872536  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG |   
	I0617 11:52:11.872544  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | </network>
	I0617 11:52:11.872556  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | 
	I0617 11:52:12.055840  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | trying to create private KVM network mk-old-k8s-version-003661 192.168.61.0/24...
	I0617 11:52:12.126822  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | private KVM network mk-old-k8s-version-003661 192.168.61.0/24 created
	I0617 11:52:12.126865  161767 main.go:141] libmachine: (old-k8s-version-003661) Setting up store path in /home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661 ...
	I0617 11:52:12.126879  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 11:52:12.126785  161879 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 11:52:12.126896  161767 main.go:141] libmachine: (old-k8s-version-003661) Building disk image from file:///home/jenkins/minikube-integration/19084-112967/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso
	I0617 11:52:12.127068  161767 main.go:141] libmachine: (old-k8s-version-003661) Downloading /home/jenkins/minikube-integration/19084-112967/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19084-112967/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso...
	I0617 11:52:12.367582  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 11:52:12.367389  161879 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa...
	I0617 11:52:12.468249  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 11:52:12.468112  161879 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/old-k8s-version-003661.rawdisk...
	I0617 11:52:12.468301  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | Writing magic tar header
	I0617 11:52:12.468319  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | Writing SSH key tar header
	I0617 11:52:12.468334  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 11:52:12.468246  161879 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661 ...
	I0617 11:52:12.468356  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661
	I0617 11:52:12.468431  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube/machines
	I0617 11:52:12.468453  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 11:52:12.468471  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967
	I0617 11:52:12.468492  161767 main.go:141] libmachine: (old-k8s-version-003661) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661 (perms=drwx------)
	I0617 11:52:12.468505  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0617 11:52:12.468516  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | Checking permissions on dir: /home/jenkins
	I0617 11:52:12.468524  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | Checking permissions on dir: /home
	I0617 11:52:12.468534  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | Skipping /home - not owner
	I0617 11:52:12.468550  161767 main.go:141] libmachine: (old-k8s-version-003661) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube/machines (perms=drwxr-xr-x)
	I0617 11:52:12.468564  161767 main.go:141] libmachine: (old-k8s-version-003661) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube (perms=drwxr-xr-x)
	I0617 11:52:12.468576  161767 main.go:141] libmachine: (old-k8s-version-003661) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967 (perms=drwxrwxr-x)
	I0617 11:52:12.468586  161767 main.go:141] libmachine: (old-k8s-version-003661) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0617 11:52:12.468597  161767 main.go:141] libmachine: (old-k8s-version-003661) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0617 11:52:12.468604  161767 main.go:141] libmachine: (old-k8s-version-003661) Creating domain...
	I0617 11:52:12.469701  161767 main.go:141] libmachine: (old-k8s-version-003661) define libvirt domain using xml: 
	I0617 11:52:12.469726  161767 main.go:141] libmachine: (old-k8s-version-003661) <domain type='kvm'>
	I0617 11:52:12.469748  161767 main.go:141] libmachine: (old-k8s-version-003661)   <name>old-k8s-version-003661</name>
	I0617 11:52:12.469767  161767 main.go:141] libmachine: (old-k8s-version-003661)   <memory unit='MiB'>2200</memory>
	I0617 11:52:12.469780  161767 main.go:141] libmachine: (old-k8s-version-003661)   <vcpu>2</vcpu>
	I0617 11:52:12.469787  161767 main.go:141] libmachine: (old-k8s-version-003661)   <features>
	I0617 11:52:12.469795  161767 main.go:141] libmachine: (old-k8s-version-003661)     <acpi/>
	I0617 11:52:12.469809  161767 main.go:141] libmachine: (old-k8s-version-003661)     <apic/>
	I0617 11:52:12.469820  161767 main.go:141] libmachine: (old-k8s-version-003661)     <pae/>
	I0617 11:52:12.469827  161767 main.go:141] libmachine: (old-k8s-version-003661)     
	I0617 11:52:12.469839  161767 main.go:141] libmachine: (old-k8s-version-003661)   </features>
	I0617 11:52:12.469847  161767 main.go:141] libmachine: (old-k8s-version-003661)   <cpu mode='host-passthrough'>
	I0617 11:52:12.469855  161767 main.go:141] libmachine: (old-k8s-version-003661)   
	I0617 11:52:12.469869  161767 main.go:141] libmachine: (old-k8s-version-003661)   </cpu>
	I0617 11:52:12.469877  161767 main.go:141] libmachine: (old-k8s-version-003661)   <os>
	I0617 11:52:12.469888  161767 main.go:141] libmachine: (old-k8s-version-003661)     <type>hvm</type>
	I0617 11:52:12.469899  161767 main.go:141] libmachine: (old-k8s-version-003661)     <boot dev='cdrom'/>
	I0617 11:52:12.469909  161767 main.go:141] libmachine: (old-k8s-version-003661)     <boot dev='hd'/>
	I0617 11:52:12.469920  161767 main.go:141] libmachine: (old-k8s-version-003661)     <bootmenu enable='no'/>
	I0617 11:52:12.469931  161767 main.go:141] libmachine: (old-k8s-version-003661)   </os>
	I0617 11:52:12.469938  161767 main.go:141] libmachine: (old-k8s-version-003661)   <devices>
	I0617 11:52:12.469956  161767 main.go:141] libmachine: (old-k8s-version-003661)     <disk type='file' device='cdrom'>
	I0617 11:52:12.469975  161767 main.go:141] libmachine: (old-k8s-version-003661)       <source file='/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/boot2docker.iso'/>
	I0617 11:52:12.469993  161767 main.go:141] libmachine: (old-k8s-version-003661)       <target dev='hdc' bus='scsi'/>
	I0617 11:52:12.470004  161767 main.go:141] libmachine: (old-k8s-version-003661)       <readonly/>
	I0617 11:52:12.470011  161767 main.go:141] libmachine: (old-k8s-version-003661)     </disk>
	I0617 11:52:12.470024  161767 main.go:141] libmachine: (old-k8s-version-003661)     <disk type='file' device='disk'>
	I0617 11:52:12.470038  161767 main.go:141] libmachine: (old-k8s-version-003661)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0617 11:52:12.470057  161767 main.go:141] libmachine: (old-k8s-version-003661)       <source file='/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/old-k8s-version-003661.rawdisk'/>
	I0617 11:52:12.470069  161767 main.go:141] libmachine: (old-k8s-version-003661)       <target dev='hda' bus='virtio'/>
	I0617 11:52:12.470083  161767 main.go:141] libmachine: (old-k8s-version-003661)     </disk>
	I0617 11:52:12.470093  161767 main.go:141] libmachine: (old-k8s-version-003661)     <interface type='network'>
	I0617 11:52:12.470112  161767 main.go:141] libmachine: (old-k8s-version-003661)       <source network='mk-old-k8s-version-003661'/>
	I0617 11:52:12.470128  161767 main.go:141] libmachine: (old-k8s-version-003661)       <model type='virtio'/>
	I0617 11:52:12.470140  161767 main.go:141] libmachine: (old-k8s-version-003661)     </interface>
	I0617 11:52:12.470151  161767 main.go:141] libmachine: (old-k8s-version-003661)     <interface type='network'>
	I0617 11:52:12.470163  161767 main.go:141] libmachine: (old-k8s-version-003661)       <source network='default'/>
	I0617 11:52:12.470173  161767 main.go:141] libmachine: (old-k8s-version-003661)       <model type='virtio'/>
	I0617 11:52:12.470183  161767 main.go:141] libmachine: (old-k8s-version-003661)     </interface>
	I0617 11:52:12.470193  161767 main.go:141] libmachine: (old-k8s-version-003661)     <serial type='pty'>
	I0617 11:52:12.470205  161767 main.go:141] libmachine: (old-k8s-version-003661)       <target port='0'/>
	I0617 11:52:12.470217  161767 main.go:141] libmachine: (old-k8s-version-003661)     </serial>
	I0617 11:52:12.470255  161767 main.go:141] libmachine: (old-k8s-version-003661)     <console type='pty'>
	I0617 11:52:12.470276  161767 main.go:141] libmachine: (old-k8s-version-003661)       <target type='serial' port='0'/>
	I0617 11:52:12.470290  161767 main.go:141] libmachine: (old-k8s-version-003661)     </console>
	I0617 11:52:12.470301  161767 main.go:141] libmachine: (old-k8s-version-003661)     <rng model='virtio'>
	I0617 11:52:12.470316  161767 main.go:141] libmachine: (old-k8s-version-003661)       <backend model='random'>/dev/random</backend>
	I0617 11:52:12.470328  161767 main.go:141] libmachine: (old-k8s-version-003661)     </rng>
	I0617 11:52:12.470340  161767 main.go:141] libmachine: (old-k8s-version-003661)     
	I0617 11:52:12.470349  161767 main.go:141] libmachine: (old-k8s-version-003661)     
	I0617 11:52:12.470359  161767 main.go:141] libmachine: (old-k8s-version-003661)   </devices>
	I0617 11:52:12.470369  161767 main.go:141] libmachine: (old-k8s-version-003661) </domain>
	I0617 11:52:12.470385  161767 main.go:141] libmachine: (old-k8s-version-003661) 
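[Editor's sketch] The driver above first registers the domain definition ("define libvirt domain using xml") and then creates/starts it. minikube's kvm2 driver does this through the libvirt Go bindings; the short Go sketch below reproduces the same two steps via the virsh CLI instead, with a hypothetical path for the XML shown above.

// Illustrative only: define and start a libvirt domain from an XML file using
// the virsh CLI. The real driver calls the libvirt API directly; the file path
// and domain name here are placeholders.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	xmlPath := "/tmp/old-k8s-version-003661.xml" // hypothetical file holding the XML from the log
	domain := "old-k8s-version-003661"

	// Register the domain definition with libvirt ("define libvirt domain using xml").
	if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
		log.Fatalf("virsh define failed: %v\n%s", err, out)
	}

	// Start the freshly defined domain ("Creating domain...").
	if out, err := exec.Command("virsh", "start", domain).CombinedOutput(); err != nil {
		log.Fatalf("virsh start failed: %v\n%s", err, out)
	}
	fmt.Println("domain started:", domain)
}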
	I0617 11:52:12.473933  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:8b:15:01 in network default
	I0617 11:52:12.474688  161767 main.go:141] libmachine: (old-k8s-version-003661) Ensuring networks are active...
	I0617 11:52:12.474715  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:12.475526  161767 main.go:141] libmachine: (old-k8s-version-003661) Ensuring network default is active
	I0617 11:52:12.475964  161767 main.go:141] libmachine: (old-k8s-version-003661) Ensuring network mk-old-k8s-version-003661 is active
	I0617 11:52:12.476704  161767 main.go:141] libmachine: (old-k8s-version-003661) Getting domain xml...
	I0617 11:52:12.477700  161767 main.go:141] libmachine: (old-k8s-version-003661) Creating domain...
	I0617 11:52:13.844414  161767 main.go:141] libmachine: (old-k8s-version-003661) Waiting to get IP...
	I0617 11:52:13.845618  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:13.846243  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 11:52:13.846274  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 11:52:13.846224  161879 retry.go:31] will retry after 241.039494ms: waiting for machine to come up
	I0617 11:52:14.088992  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:14.089656  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 11:52:14.089692  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 11:52:14.089606  161879 retry.go:31] will retry after 366.685912ms: waiting for machine to come up
	I0617 11:52:14.458266  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:14.458834  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 11:52:14.458863  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 11:52:14.458795  161879 retry.go:31] will retry after 439.362838ms: waiting for machine to come up
	I0617 11:52:14.899542  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:14.900117  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 11:52:14.900150  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 11:52:14.900068  161879 retry.go:31] will retry after 575.411118ms: waiting for machine to come up
	I0617 11:52:15.476697  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:15.477416  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 11:52:15.477450  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 11:52:15.477346  161879 retry.go:31] will retry after 475.150413ms: waiting for machine to come up
	I0617 11:52:15.954141  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:15.954701  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 11:52:15.954725  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 11:52:15.954655  161879 retry.go:31] will retry after 855.594386ms: waiting for machine to come up
	I0617 11:52:16.811550  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:16.812009  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 11:52:16.812039  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 11:52:16.811958  161879 retry.go:31] will retry after 1.176264217s: waiting for machine to come up
	I0617 11:52:17.989789  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:17.990328  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 11:52:17.990375  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 11:52:17.990275  161879 retry.go:31] will retry after 1.022700668s: waiting for machine to come up
	I0617 11:52:19.014434  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:19.014981  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 11:52:19.015030  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 11:52:19.014926  161879 retry.go:31] will retry after 1.343236868s: waiting for machine to come up
	I0617 11:52:20.359540  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:20.360056  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 11:52:20.360094  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 11:52:20.360004  161879 retry.go:31] will retry after 2.237514841s: waiting for machine to come up
	I0617 11:52:22.599013  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:22.599583  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 11:52:22.599612  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 11:52:22.599532  161879 retry.go:31] will retry after 1.83856174s: waiting for machine to come up
	I0617 11:52:24.439852  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:24.440447  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 11:52:24.440480  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 11:52:24.440380  161879 retry.go:31] will retry after 3.449543528s: waiting for machine to come up
	I0617 11:52:27.892105  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:27.892709  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 11:52:27.892744  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 11:52:27.892640  161879 retry.go:31] will retry after 2.851626611s: waiting for machine to come up
	I0617 11:52:30.745399  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:30.745992  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 11:52:30.746021  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 11:52:30.745945  161879 retry.go:31] will retry after 3.623407337s: waiting for machine to come up
	I0617 11:52:34.372239  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:34.372823  161767 main.go:141] libmachine: (old-k8s-version-003661) Found IP for machine: 192.168.61.164
	I0617 11:52:34.372842  161767 main.go:141] libmachine: (old-k8s-version-003661) Reserving static IP address...
	I0617 11:52:34.372857  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has current primary IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:34.373251  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-003661", mac: "52:54:00:76:66:a0", ip: "192.168.61.164"} in network mk-old-k8s-version-003661
	I0617 11:52:34.454314  161767 main.go:141] libmachine: (old-k8s-version-003661) Reserved static IP address: 192.168.61.164
	I0617 11:52:34.454345  161767 main.go:141] libmachine: (old-k8s-version-003661) Waiting for SSH to be available...
	I0617 11:52:34.454355  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | Getting to WaitForSSH function...
	I0617 11:52:34.457077  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:34.457562  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 12:52:26 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:minikube Clientid:01:52:54:00:76:66:a0}
	I0617 11:52:34.457603  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:34.457772  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | Using SSH client type: external
	I0617 11:52:34.457803  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa (-rw-------)
	I0617 11:52:34.457842  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 11:52:34.457854  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | About to run SSH command:
	I0617 11:52:34.457875  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | exit 0
	I0617 11:52:34.587594  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | SSH cmd err, output: <nil>: 
	I0617 11:52:34.587849  161767 main.go:141] libmachine: (old-k8s-version-003661) KVM machine creation complete!
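[Editor's sketch] The "will retry after ..." lines above show the driver polling for the machine's DHCP lease with a growing, jittered delay (241ms, 366ms, ... up to a few seconds) until an IP appears and SSH answers. The following is a generic poll-with-backoff helper in that spirit; it is a sketch, not minikube's retry package, and the initial delay, growth factor, and cap are assumptions.

// Generic poll-until-ready helper with growing, jittered backoff.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls check() until it reports success or the deadline passes,
// sleeping a little longer (with jitter) after each failed attempt.
func waitFor(check func() (bool, error), timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		ok, err := check()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if backoff < 4*time.Second {
			backoff *= 2
		}
	}
	return errors.New("timed out waiting for machine to come up")
}

func main() {
	attempts := 0
	_ = waitFor(func() (bool, error) {
		attempts++
		return attempts >= 5, nil // pretend the DHCP lease shows up on the 5th poll
	}, time.Minute)
}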
	I0617 11:52:34.588228  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetConfigRaw
	I0617 11:52:34.588787  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 11:52:34.589008  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 11:52:34.589163  161767 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0617 11:52:34.589180  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetState
	I0617 11:52:34.590398  161767 main.go:141] libmachine: Detecting operating system of created instance...
	I0617 11:52:34.590416  161767 main.go:141] libmachine: Waiting for SSH to be available...
	I0617 11:52:34.590424  161767 main.go:141] libmachine: Getting to WaitForSSH function...
	I0617 11:52:34.590432  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 11:52:34.592533  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:34.592891  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 12:52:26 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 11:52:34.592924  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:34.593103  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 11:52:34.593293  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 11:52:34.593452  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 11:52:34.593613  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 11:52:34.593770  161767 main.go:141] libmachine: Using SSH client type: native
	I0617 11:52:34.594014  161767 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 11:52:34.594028  161767 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0617 11:52:34.706856  161767 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 11:52:34.706880  161767 main.go:141] libmachine: Detecting the provisioner...
	I0617 11:52:34.706890  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 11:52:34.709520  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:34.709854  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 12:52:26 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 11:52:34.709886  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:34.710127  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 11:52:34.710323  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 11:52:34.710494  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 11:52:34.710600  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 11:52:34.710771  161767 main.go:141] libmachine: Using SSH client type: native
	I0617 11:52:34.710998  161767 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 11:52:34.711017  161767 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0617 11:52:34.824462  161767 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0617 11:52:34.824531  161767 main.go:141] libmachine: found compatible host: buildroot
	I0617 11:52:34.824542  161767 main.go:141] libmachine: Provisioning with buildroot...
	I0617 11:52:34.824550  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetMachineName
	I0617 11:52:34.824787  161767 buildroot.go:166] provisioning hostname "old-k8s-version-003661"
	I0617 11:52:34.824815  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetMachineName
	I0617 11:52:34.825047  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 11:52:34.827593  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:34.827986  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 12:52:26 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 11:52:34.828012  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:34.828177  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 11:52:34.828346  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 11:52:34.828476  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 11:52:34.828582  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 11:52:34.828775  161767 main.go:141] libmachine: Using SSH client type: native
	I0617 11:52:34.828946  161767 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 11:52:34.828958  161767 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-003661 && echo "old-k8s-version-003661" | sudo tee /etc/hostname
	I0617 11:52:34.953485  161767 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-003661
	
	I0617 11:52:34.953519  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 11:52:34.956423  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:34.957019  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 12:52:26 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 11:52:34.957043  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:34.957303  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 11:52:34.957505  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 11:52:34.957696  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 11:52:34.957851  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 11:52:34.958015  161767 main.go:141] libmachine: Using SSH client type: native
	I0617 11:52:34.958240  161767 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 11:52:34.958272  161767 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-003661' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-003661/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-003661' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 11:52:35.076250  161767 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 11:52:35.076286  161767 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 11:52:35.076325  161767 buildroot.go:174] setting up certificates
	I0617 11:52:35.076335  161767 provision.go:84] configureAuth start
	I0617 11:52:35.076348  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetMachineName
	I0617 11:52:35.076598  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetIP
	I0617 11:52:35.078996  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:35.079386  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 12:52:26 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 11:52:35.079416  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:35.079606  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 11:52:35.081661  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:35.081967  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 12:52:26 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 11:52:35.081988  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:35.082118  161767 provision.go:143] copyHostCerts
	I0617 11:52:35.082183  161767 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 11:52:35.082193  161767 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 11:52:35.082257  161767 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 11:52:35.082351  161767 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 11:52:35.082359  161767 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 11:52:35.082383  161767 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 11:52:35.082436  161767 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 11:52:35.082443  161767 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 11:52:35.082468  161767 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 11:52:35.082511  161767 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-003661 san=[127.0.0.1 192.168.61.164 localhost minikube old-k8s-version-003661]
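[Editor's sketch] The provisioner issues a server certificate whose SANs are exactly the list logged above (127.0.0.1, 192.168.61.164, localhost, minikube, old-k8s-version-003661), signed by the CA under .minikube/certs. A minimal standard-library sketch of issuing such a certificate follows; unlike the real code it creates a throwaway in-memory CA instead of loading ca.pem/ca-key.pem, and key type and validity period are assumptions.

// Minimal sketch of issuing a server certificate with the SANs seen above.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA key and self-signed CA certificate.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SAN list from the log line above.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-003661"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-003661"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.164")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}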
	I0617 11:52:35.173289  161767 provision.go:177] copyRemoteCerts
	I0617 11:52:35.173344  161767 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 11:52:35.173369  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 11:52:35.175988  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:35.176321  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 12:52:26 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 11:52:35.176354  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:35.176537  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 11:52:35.176731  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 11:52:35.176877  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 11:52:35.176980  161767 sshutil.go:53] new ssh client: &{IP:192.168.61.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa Username:docker}
	I0617 11:52:35.262154  161767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 11:52:35.286235  161767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0617 11:52:35.310158  161767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0617 11:52:35.333576  161767 provision.go:87] duration metric: took 257.229592ms to configureAuth
	I0617 11:52:35.333598  161767 buildroot.go:189] setting minikube options for container-runtime
	I0617 11:52:35.333751  161767 config.go:182] Loaded profile config "old-k8s-version-003661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0617 11:52:35.333817  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 11:52:35.336474  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:35.336847  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 12:52:26 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 11:52:35.336876  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:35.337110  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 11:52:35.337329  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 11:52:35.337488  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 11:52:35.337633  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 11:52:35.337834  161767 main.go:141] libmachine: Using SSH client type: native
	I0617 11:52:35.338010  161767 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 11:52:35.338029  161767 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 11:52:35.603249  161767 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 11:52:35.603283  161767 main.go:141] libmachine: Checking connection to Docker...
	I0617 11:52:35.603293  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetURL
	I0617 11:52:35.604511  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | Using libvirt version 6000000
	I0617 11:52:35.606541  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:35.606892  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 12:52:26 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 11:52:35.606924  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:35.607033  161767 main.go:141] libmachine: Docker is up and running!
	I0617 11:52:35.607047  161767 main.go:141] libmachine: Reticulating splines...
	I0617 11:52:35.607056  161767 client.go:171] duration metric: took 23.741106433s to LocalClient.Create
	I0617 11:52:35.607088  161767 start.go:167] duration metric: took 23.741185149s to libmachine.API.Create "old-k8s-version-003661"
	I0617 11:52:35.607100  161767 start.go:293] postStartSetup for "old-k8s-version-003661" (driver="kvm2")
	I0617 11:52:35.607114  161767 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 11:52:35.607137  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 11:52:35.607379  161767 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 11:52:35.607402  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 11:52:35.609442  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:35.609794  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 12:52:26 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 11:52:35.609820  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:35.609966  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 11:52:35.610178  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 11:52:35.610326  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 11:52:35.610478  161767 sshutil.go:53] new ssh client: &{IP:192.168.61.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa Username:docker}
	I0617 11:52:35.698297  161767 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 11:52:35.702786  161767 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 11:52:35.702813  161767 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 11:52:35.702875  161767 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 11:52:35.702947  161767 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 11:52:35.703035  161767 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 11:52:35.713188  161767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 11:52:35.737934  161767 start.go:296] duration metric: took 130.815119ms for postStartSetup
	I0617 11:52:35.738008  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetConfigRaw
	I0617 11:52:35.738571  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetIP
	I0617 11:52:35.741147  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:35.741525  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 12:52:26 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 11:52:35.741556  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:35.741833  161767 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/config.json ...
	I0617 11:52:35.742019  161767 start.go:128] duration metric: took 23.929203476s to createHost
	I0617 11:52:35.742059  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 11:52:35.744246  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:35.744593  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 12:52:26 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 11:52:35.744623  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:35.744729  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 11:52:35.744924  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 11:52:35.745081  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 11:52:35.745250  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 11:52:35.745416  161767 main.go:141] libmachine: Using SSH client type: native
	I0617 11:52:35.745641  161767 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 11:52:35.745661  161767 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0617 11:52:35.860868  161767 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718625155.835034218
	
	I0617 11:52:35.860892  161767 fix.go:216] guest clock: 1718625155.835034218
	I0617 11:52:35.860900  161767 fix.go:229] Guest: 2024-06-17 11:52:35.835034218 +0000 UTC Remote: 2024-06-17 11:52:35.742031364 +0000 UTC m=+36.957650609 (delta=93.002854ms)
	I0617 11:52:35.860919  161767 fix.go:200] guest clock delta is within tolerance: 93.002854ms
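[Editor's sketch] The clock check above reads `date +%s.%N` on the guest over SSH, compares it to the host clock, and proceeds because the delta (about 93ms) is within tolerance. The sketch below parses that kind of output and compares it to the local clock; the parsing details and the 2s threshold are assumptions for illustration, not minikube's exact fix.go logic.

// Sketch of the guest-clock check: parse `date +%s.%N` output from the guest
// and compare it to the host clock against a tolerance.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func guestTime(dateOutput string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := guestTime("1718625155.835034218\n") // value captured in the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold for this sketch
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
}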
	I0617 11:52:35.860924  161767 start.go:83] releasing machines lock for "old-k8s-version-003661", held for 24.048289159s
	I0617 11:52:35.860946  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 11:52:35.861276  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetIP
	I0617 11:52:35.864579  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:35.865037  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 12:52:26 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 11:52:35.865148  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:35.865320  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 11:52:35.865965  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 11:52:35.866224  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 11:52:35.866316  161767 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 11:52:35.866374  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 11:52:35.866441  161767 ssh_runner.go:195] Run: cat /version.json
	I0617 11:52:35.866466  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 11:52:35.869198  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:35.869594  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 12:52:26 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 11:52:35.869624  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:35.869644  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:35.869954  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 11:52:35.870013  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 12:52:26 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 11:52:35.870071  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:35.870161  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 11:52:35.870329  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 11:52:35.870347  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 11:52:35.870497  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 11:52:35.870516  161767 sshutil.go:53] new ssh client: &{IP:192.168.61.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa Username:docker}
	I0617 11:52:35.870647  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 11:52:35.870793  161767 sshutil.go:53] new ssh client: &{IP:192.168.61.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa Username:docker}
	I0617 11:52:35.952916  161767 ssh_runner.go:195] Run: systemctl --version
	I0617 11:52:35.976966  161767 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 11:52:36.154788  161767 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 11:52:36.161405  161767 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 11:52:36.161470  161767 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 11:52:36.178633  161767 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 11:52:36.178667  161767 start.go:494] detecting cgroup driver to use...
	I0617 11:52:36.178751  161767 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 11:52:36.195595  161767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 11:52:36.210671  161767 docker.go:217] disabling cri-docker service (if available) ...
	I0617 11:52:36.210753  161767 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 11:52:36.226611  161767 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 11:52:36.241118  161767 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 11:52:36.353261  161767 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 11:52:36.527076  161767 docker.go:233] disabling docker service ...
	I0617 11:52:36.527144  161767 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 11:52:36.541736  161767 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 11:52:36.555223  161767 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 11:52:36.678636  161767 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 11:52:36.799219  161767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 11:52:36.813066  161767 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 11:52:36.831703  161767 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0617 11:52:36.831776  161767 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:52:36.842101  161767 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 11:52:36.842151  161767 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:52:36.852946  161767 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:52:36.863565  161767 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 11:52:36.874721  161767 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 11:52:36.885538  161767 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 11:52:36.895430  161767 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 11:52:36.895517  161767 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 11:52:36.911217  161767 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 11:52:36.924768  161767 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:52:37.045332  161767 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0617 11:52:37.191643  161767 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 11:52:37.191728  161767 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 11:52:37.196947  161767 start.go:562] Will wait 60s for crictl version
	I0617 11:52:37.197030  161767 ssh_runner.go:195] Run: which crictl
	I0617 11:52:37.201251  161767 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 11:52:37.245145  161767 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 11:52:37.245242  161767 ssh_runner.go:195] Run: crio --version
	I0617 11:52:37.279952  161767 ssh_runner.go:195] Run: crio --version
	I0617 11:52:37.310602  161767 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0617 11:52:37.311781  161767 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetIP
	I0617 11:52:37.314523  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:37.314906  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 12:52:26 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 11:52:37.314938  161767 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 11:52:37.315136  161767 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0617 11:52:37.319562  161767 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 11:52:37.332055  161767 kubeadm.go:877] updating cluster {Name:old-k8s-version-003661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-003661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.164 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 11:52:37.332190  161767 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0617 11:52:37.332240  161767 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 11:52:37.368917  161767 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0617 11:52:37.368982  161767 ssh_runner.go:195] Run: which lz4
	I0617 11:52:37.373290  161767 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0617 11:52:37.377719  161767 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0617 11:52:37.377762  161767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0617 11:52:39.162373  161767 crio.go:462] duration metric: took 1.789098824s to copy over tarball
	I0617 11:52:39.162480  161767 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0617 11:52:41.729323  161767 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.566809289s)
	I0617 11:52:41.729350  161767 crio.go:469] duration metric: took 2.566944502s to extract the tarball
	I0617 11:52:41.729364  161767 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0617 11:52:41.771987  161767 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 11:52:41.816465  161767 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0617 11:52:41.816492  161767 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
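[Editor's sketch] LoadCachedImages asks the runtime which images already exist (`sudo crictl images --output json` above) and, as the later lines show, marks each required image that is absent as "needs transfer". The sketch below illustrates that set-difference step; the JSON sample is hand-written and the field names follow the common crictl output shape, both assumptions for this illustration.

// Sketch of deciding which cached images still need to be transferred:
// parse crictl's JSON image list and diff it against the required list.
package main

import (
	"encoding/json"
	"fmt"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func missingImages(required []string, crictlJSON []byte) ([]string, error) {
	var listed crictlImages
	if err := json.Unmarshal(crictlJSON, &listed); err != nil {
		return nil, err
	}
	present := map[string]bool{}
	for _, img := range listed.Images {
		for _, tag := range img.RepoTags {
			present[tag] = true
		}
	}
	var missing []string
	for _, img := range required {
		if !present[img] {
			missing = append(missing, img) // "needs transfer"
		}
	}
	return missing, nil
}

func main() {
	sample := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.2"]}]}`)
	required := []string{"registry.k8s.io/pause:3.2", "registry.k8s.io/etcd:3.4.13-0"}
	missing, err := missingImages(required, sample)
	if err != nil {
		panic(err)
	}
	fmt.Println("needs transfer:", missing)
}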
	I0617 11:52:41.816570  161767 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 11:52:41.816584  161767 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 11:52:41.816585  161767 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0617 11:52:41.816585  161767 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0617 11:52:41.816628  161767 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0617 11:52:41.816649  161767 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0617 11:52:41.816628  161767 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0617 11:52:41.816837  161767 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0617 11:52:41.817971  161767 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 11:52:41.818144  161767 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0617 11:52:41.818196  161767 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0617 11:52:41.818154  161767 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0617 11:52:41.818155  161767 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0617 11:52:41.818160  161767 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0617 11:52:41.818211  161767 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0617 11:52:41.818561  161767 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 11:52:41.978161  161767 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0617 11:52:41.995497  161767 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0617 11:52:41.995701  161767 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 11:52:42.002285  161767 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0617 11:52:42.003677  161767 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0617 11:52:42.009600  161767 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0617 11:52:42.055216  161767 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0617 11:52:42.055282  161767 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0617 11:52:42.055345  161767 ssh_runner.go:195] Run: which crictl
	I0617 11:52:42.127520  161767 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 11:52:42.137719  161767 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0617 11:52:42.177139  161767 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0617 11:52:42.177179  161767 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0617 11:52:42.177198  161767 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0617 11:52:42.177223  161767 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 11:52:42.177251  161767 ssh_runner.go:195] Run: which crictl
	I0617 11:52:42.177267  161767 ssh_runner.go:195] Run: which crictl
	I0617 11:52:42.177304  161767 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0617 11:52:42.177334  161767 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0617 11:52:42.177336  161767 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0617 11:52:42.177362  161767 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0617 11:52:42.177369  161767 ssh_runner.go:195] Run: which crictl
	I0617 11:52:42.177380  161767 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0617 11:52:42.177399  161767 ssh_runner.go:195] Run: which crictl
	I0617 11:52:42.177400  161767 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0617 11:52:42.177440  161767 ssh_runner.go:195] Run: which crictl
	I0617 11:52:42.177442  161767 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0617 11:52:42.308719  161767 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0617 11:52:42.308771  161767 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0617 11:52:42.308825  161767 ssh_runner.go:195] Run: which crictl
	I0617 11:52:42.308833  161767 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 11:52:42.308860  161767 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0617 11:52:42.308925  161767 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0617 11:52:42.308959  161767 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0617 11:52:42.308974  161767 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0617 11:52:42.309060  161767 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0617 11:52:42.377385  161767 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0617 11:52:42.377536  161767 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0617 11:52:42.387182  161767 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0617 11:52:42.446148  161767 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0617 11:52:42.446209  161767 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0617 11:52:42.446418  161767 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0617 11:52:42.446578  161767 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0617 11:52:42.446628  161767 cache_images.go:92] duration metric: took 630.118103ms to LoadCachedImages
	W0617 11:52:42.446723  161767 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
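After the tarball extraction the runtime still reports the v1.20.0 images as missing, so minikube falls back to LoadCachedImages: each image is probed with podman image inspect, absent ones are marked "needs transfer", and the loader then looks for per-image files under .minikube/cache/images, which do not exist either, producing the warning above. A rough Go sketch of that probe-then-fallback loop (a hypothetical helper, not the real cache_images.go; the cache directory and image list are illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// imagePresent reports whether the container runtime already has the image,
// using the same probe the log shows (podman image inspect --format {{.Id}}).
func imagePresent(image string) bool {
	return exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run() == nil
}

// cachePathFor maps an image ref to the on-disk cache layout seen in the log,
// e.g. registry.k8s.io/etcd:3.4.13-0 -> <cacheDir>/registry.k8s.io/etcd_3.4.13-0.
func cachePathFor(cacheDir, image string) string {
	return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
}

func main() {
	cacheDir := "/home/jenkins/.minikube/cache/images/amd64" // assumed location
	images := []string{"registry.k8s.io/etcd:3.4.13-0", "registry.k8s.io/pause:3.2"}
	for _, img := range images {
		if imagePresent(img) {
			continue
		}
		p := cachePathFor(cacheDir, img)
		if _, err := os.Stat(p); err != nil {
			fmt.Fprintf(os.Stderr, "unable to load cached image %s: %v\n", img, err)
			continue
		}
		fmt.Println("would load", img, "from", p)
	}
}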
	I0617 11:52:42.446739  161767 kubeadm.go:928] updating node { 192.168.61.164 8443 v1.20.0 crio true true} ...
	I0617 11:52:42.446856  161767 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-003661 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-003661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
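The kubelet drop-in echoed above is what gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines later (the 430-byte scp). It is a straightforward render of the node's Kubernetes version, hostname and IP; a small Go sketch with text/template shows the idea (the struct and field names here are invented, not minikube's types):

package main

import (
	"os"
	"text/template"
)

// kubeletDropIn mirrors the [Service] override shown in the log.
const kubeletDropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

[Install]
`

type nodeParams struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	_ = tmpl.Execute(os.Stdout, nodeParams{
		KubernetesVersion: "v1.20.0",
		NodeName:          "old-k8s-version-003661",
		NodeIP:            "192.168.61.164",
	})
}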
	I0617 11:52:42.446932  161767 ssh_runner.go:195] Run: crio config
	I0617 11:52:42.502579  161767 cni.go:84] Creating CNI manager for ""
	I0617 11:52:42.502604  161767 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 11:52:42.502616  161767 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 11:52:42.502644  161767 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.164 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-003661 NodeName:old-k8s-version-003661 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0617 11:52:42.502829  161767 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.164
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-003661"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.164
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.164"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
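	The kubeadm config printed above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is later written to /var/tmp/minikube/kubeadm.yaml.new. A quick sanity check before handing such a file to kubeadm is to walk the documents and confirm each declares apiVersion and kind; a sketch assuming gopkg.in/yaml.v3 is available (this is not something the test harness itself does):

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // path is illustrative
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for i := 1; ; i++ {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // end of the multi-document stream
			}
			fmt.Fprintf(os.Stderr, "document %d: %v\n", i, err)
			os.Exit(1)
		}
		if doc.APIVersion == "" || doc.Kind == "" {
			fmt.Fprintf(os.Stderr, "document %d is missing apiVersion/kind\n", i)
			os.Exit(1)
		}
		fmt.Printf("document %d: %s %s\n", i, doc.APIVersion, doc.Kind)
	}
}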
	
	I0617 11:52:42.502910  161767 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0617 11:52:42.513418  161767 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 11:52:42.513480  161767 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 11:52:42.523393  161767 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0617 11:52:42.540890  161767 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 11:52:42.561885  161767 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0617 11:52:42.582411  161767 ssh_runner.go:195] Run: grep 192.168.61.164	control-plane.minikube.internal$ /etc/hosts
	I0617 11:52:42.586381  161767 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.164	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 11:52:42.599627  161767 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:52:42.748886  161767 ssh_runner.go:195] Run: sudo systemctl start kubelet
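The /etc/hosts edit just above is a grep-and-rewrite: drop any existing line ending in control-plane.minikube.internal, append the fresh IP mapping, and copy the temp file back with sudo. The same idempotent update can be sketched in Go against a local file (assumptions: direct file access instead of the sudo cp over SSH seen in the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostEntry rewrites hostsPath so that exactly one line maps host to ip.
func ensureHostEntry(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	kept := []string{}
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Same filter as the logged `grep -v`: drop lines already ending in "<TAB><host>".
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostEntry("/tmp/hosts-copy", "192.168.61.164", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

Running it twice leaves a single mapping, which is the point of the grep -v guard in the logged command.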
	I0617 11:52:42.766455  161767 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661 for IP: 192.168.61.164
	I0617 11:52:42.766475  161767 certs.go:194] generating shared ca certs ...
	I0617 11:52:42.766491  161767 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:52:42.766653  161767 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 11:52:42.766720  161767 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 11:52:42.766735  161767 certs.go:256] generating profile certs ...
	I0617 11:52:42.766812  161767 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/client.key
	I0617 11:52:42.766827  161767 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/client.crt with IP's: []
	I0617 11:52:42.828167  161767 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/client.crt ...
	I0617 11:52:42.828192  161767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/client.crt: {Name:mkb6771524326df43debc1e5b7dc1b5c8725798b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:52:42.828373  161767 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/client.key ...
	I0617 11:52:42.828391  161767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/client.key: {Name:mk5766cfc51e9d055e8e303d1b362bfdfbf50925 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:52:42.828500  161767 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/apiserver.key.6c1f259c
	I0617 11:52:42.828523  161767 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/apiserver.crt.6c1f259c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.164]
	I0617 11:52:43.164244  161767 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/apiserver.crt.6c1f259c ...
	I0617 11:52:43.164279  161767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/apiserver.crt.6c1f259c: {Name:mkcda7577f0d3703bff5abccc0b6188388fae4ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:52:43.164476  161767 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/apiserver.key.6c1f259c ...
	I0617 11:52:43.164503  161767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/apiserver.key.6c1f259c: {Name:mkb9bddf29a4ba5ddb6399c7587890b25cdc6bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:52:43.164609  161767 certs.go:381] copying /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/apiserver.crt.6c1f259c -> /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/apiserver.crt
	I0617 11:52:43.164733  161767 certs.go:385] copying /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/apiserver.key.6c1f259c -> /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/apiserver.key
	I0617 11:52:43.164821  161767 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/proxy-client.key
	I0617 11:52:43.164845  161767 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/proxy-client.crt with IP's: []
	I0617 11:52:43.315066  161767 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/proxy-client.crt ...
	I0617 11:52:43.315124  161767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/proxy-client.crt: {Name:mkd3cdacd2e732d17bc57089336395bed0e55297 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:52:43.315330  161767 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/proxy-client.key ...
	I0617 11:52:43.315355  161767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/proxy-client.key: {Name:mk8cb424a47a9a133330c233f54380f79fe330a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
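Each "generating signed profile cert" step above follows the same pattern: create a key pair, build an x509 template with the required common name and IP SANs (10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP for the apiserver cert), and sign it with the shared minikube CA. A condensed standard-library Go sketch of that flow (not minikube's crypto.go; serial numbers, key sizes and lifetimes are placeholders):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Shared CA, analogous to .minikube/ca.{crt,key}.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Profile cert for the apiserver, with the IP SANs seen in the log.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.61.164"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}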
	I0617 11:52:43.315665  161767 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 11:52:43.315725  161767 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 11:52:43.315741  161767 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 11:52:43.315777  161767 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 11:52:43.315807  161767 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 11:52:43.315838  161767 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 11:52:43.315895  161767 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 11:52:43.316752  161767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 11:52:43.342625  161767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 11:52:43.367672  161767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 11:52:43.393228  161767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 11:52:43.418653  161767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0617 11:52:43.443950  161767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 11:52:43.468959  161767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 11:52:43.494533  161767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0617 11:52:43.519499  161767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 11:52:43.544032  161767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 11:52:43.573020  161767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 11:52:43.599997  161767 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 11:52:43.616787  161767 ssh_runner.go:195] Run: openssl version
	I0617 11:52:43.623750  161767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 11:52:43.642589  161767 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:52:43.649633  161767 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:52:43.649711  161767 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:52:43.658087  161767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 11:52:43.673978  161767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 11:52:43.694003  161767 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 11:52:43.699043  161767 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 11:52:43.699112  161767 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 11:52:43.705350  161767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
	I0617 11:52:43.716306  161767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 11:52:43.727211  161767 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 11:52:43.731697  161767 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 11:52:43.731745  161767 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 11:52:43.737485  161767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
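The openssl/ln sequence above is how minikubeCA.pem, 120174.pem and 1201742.pem are wired into the system trust store: compute the OpenSSL subject hash, then symlink /etc/ssl/certs/<hash>.0 to the PEM unless the link already exists. The equivalent steps driven from Go via os/exec (a sketch that shells out to the same openssl binary; point certsDir at a scratch directory unless running as root):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkIntoTrustStore reproduces the logged commands: compute the OpenSSL
// subject hash of certPath and symlink <certsDir>/<hash>.0 to it.
func linkIntoTrustStore(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked, matches the `test -L || ln -fs` guard
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkIntoTrustStore("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}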
	I0617 11:52:43.748319  161767 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 11:52:43.752563  161767 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0617 11:52:43.752616  161767 kubeadm.go:391] StartCluster: {Name:old-k8s-version-003661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-003661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.164 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:52:43.752716  161767 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 11:52:43.752769  161767 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 11:52:43.790279  161767 cri.go:89] found id: ""
	I0617 11:52:43.790343  161767 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0617 11:52:43.800683  161767 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 11:52:43.810211  161767 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 11:52:43.819904  161767 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 11:52:43.819928  161767 kubeadm.go:156] found existing configuration files:
	
	I0617 11:52:43.819977  161767 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 11:52:43.830044  161767 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 11:52:43.911294  161767 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 11:52:43.921925  161767 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 11:52:43.931552  161767 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 11:52:43.931626  161767 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 11:52:43.941453  161767 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 11:52:43.951013  161767 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 11:52:43.951097  161767 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 11:52:43.960627  161767 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 11:52:43.969937  161767 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 11:52:43.969998  161767 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 11:52:43.979331  161767 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0617 11:52:44.092160  161767 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0617 11:52:44.092224  161767 kubeadm.go:309] [preflight] Running pre-flight checks
	I0617 11:52:44.241804  161767 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0617 11:52:44.241972  161767 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0617 11:52:44.242087  161767 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0617 11:52:44.428763  161767 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0617 11:52:44.544013  161767 out.go:204]   - Generating certificates and keys ...
	I0617 11:52:44.544165  161767 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0617 11:52:44.544259  161767 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0617 11:52:44.575357  161767 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0617 11:52:44.754852  161767 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0617 11:52:45.069348  161767 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0617 11:52:45.205413  161767 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0617 11:52:45.382059  161767 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0617 11:52:45.382279  161767 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-003661] and IPs [192.168.61.164 127.0.0.1 ::1]
	I0617 11:52:45.507859  161767 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0617 11:52:45.508134  161767 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-003661] and IPs [192.168.61.164 127.0.0.1 ::1]
	I0617 11:52:45.783772  161767 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0617 11:52:45.846672  161767 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0617 11:52:46.047604  161767 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0617 11:52:46.047711  161767 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0617 11:52:46.225448  161767 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0617 11:52:46.544626  161767 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0617 11:52:46.858925  161767 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0617 11:52:47.026061  161767 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0617 11:52:47.045588  161767 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0617 11:52:47.046755  161767 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0617 11:52:47.047884  161767 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0617 11:52:47.200410  161767 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0617 11:52:47.202318  161767 out.go:204]   - Booting up control plane ...
	I0617 11:52:47.202471  161767 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0617 11:52:47.217190  161767 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0617 11:52:47.221346  161767 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0617 11:52:47.226137  161767 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0617 11:52:47.233575  161767 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0617 11:53:27.226454  161767 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0617 11:53:27.227573  161767 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 11:53:27.227871  161767 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 11:53:32.228242  161767 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 11:53:32.228510  161767 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 11:53:42.228007  161767 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 11:53:42.228280  161767 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 11:54:02.227892  161767 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 11:54:02.228198  161767 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 11:54:42.229281  161767 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 11:54:42.229568  161767 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 11:54:42.229583  161767 kubeadm.go:309] 
	I0617 11:54:42.229673  161767 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0617 11:54:42.229748  161767 kubeadm.go:309] 		timed out waiting for the condition
	I0617 11:54:42.229759  161767 kubeadm.go:309] 
	I0617 11:54:42.229807  161767 kubeadm.go:309] 	This error is likely caused by:
	I0617 11:54:42.229862  161767 kubeadm.go:309] 		- The kubelet is not running
	I0617 11:54:42.230009  161767 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0617 11:54:42.230020  161767 kubeadm.go:309] 
	I0617 11:54:42.230192  161767 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0617 11:54:42.230256  161767 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0617 11:54:42.230306  161767 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0617 11:54:42.230318  161767 kubeadm.go:309] 
	I0617 11:54:42.230492  161767 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0617 11:54:42.230599  161767 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0617 11:54:42.230613  161767 kubeadm.go:309] 
	I0617 11:54:42.230744  161767 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0617 11:54:42.230857  161767 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0617 11:54:42.230946  161767 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0617 11:54:42.231041  161767 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0617 11:54:42.231052  161767 kubeadm.go:309] 
	I0617 11:54:42.231799  161767 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0617 11:54:42.231902  161767 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0617 11:54:42.231998  161767 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0617 11:54:42.232167  161767 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-003661] and IPs [192.168.61.164 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-003661] and IPs [192.168.61.164 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
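The init attempt fails because the kubelet never answers its health endpoint, so kubeadm's wait-control-plane phase gives up after roughly four minutes of retries. The check kubeadm performs is just an HTTP GET against http://localhost:10248/healthz; a standalone Go probe that does the same thing can be handy when reproducing this by hand, alongside the suggested systemctl status kubelet and journalctl -xeu kubelet:

package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	const url = "http://localhost:10248/healthz" // same endpoint kubeadm's kubelet-check curls
	client := &http.Client{Timeout: 2 * time.Second}
	deadline := time.Now().Add(4 * time.Minute) // mirrors kubeadm's wait-control-plane budget

	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet is healthy")
				return
			}
			fmt.Printf("kubelet answered with status %d\n", resp.StatusCode)
		} else {
			fmt.Printf("kubelet not reachable yet: %v\n", err)
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for the kubelet to become healthy")
	os.Exit(1)
}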
	I0617 11:54:42.232227  161767 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0617 11:54:42.688142  161767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:54:42.703713  161767 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 11:54:42.713879  161767 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 11:54:42.713896  161767 kubeadm.go:156] found existing configuration files:
	
	I0617 11:54:42.713926  161767 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 11:54:42.723259  161767 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 11:54:42.723314  161767 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 11:54:42.732883  161767 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 11:54:42.742013  161767 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 11:54:42.742069  161767 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 11:54:42.751654  161767 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 11:54:42.760820  161767 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 11:54:42.760855  161767 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 11:54:42.770307  161767 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 11:54:42.779129  161767 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 11:54:42.779158  161767 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 11:54:42.788495  161767 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0617 11:54:43.000491  161767 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0617 11:56:39.203166  161767 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0617 11:56:39.203288  161767 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0617 11:56:39.205158  161767 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0617 11:56:39.205236  161767 kubeadm.go:309] [preflight] Running pre-flight checks
	I0617 11:56:39.205307  161767 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0617 11:56:39.205436  161767 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0617 11:56:39.205563  161767 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0617 11:56:39.205636  161767 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0617 11:56:39.207150  161767 out.go:204]   - Generating certificates and keys ...
	I0617 11:56:39.207218  161767 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0617 11:56:39.207276  161767 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0617 11:56:39.207344  161767 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0617 11:56:39.207395  161767 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0617 11:56:39.207510  161767 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0617 11:56:39.207592  161767 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0617 11:56:39.207682  161767 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0617 11:56:39.207767  161767 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0617 11:56:39.207860  161767 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0617 11:56:39.207987  161767 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0617 11:56:39.208047  161767 kubeadm.go:309] [certs] Using the existing "sa" key
	I0617 11:56:39.208118  161767 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0617 11:56:39.208161  161767 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0617 11:56:39.208206  161767 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0617 11:56:39.208259  161767 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0617 11:56:39.208328  161767 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0617 11:56:39.208458  161767 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0617 11:56:39.208573  161767 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0617 11:56:39.208629  161767 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0617 11:56:39.208717  161767 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0617 11:56:39.210176  161767 out.go:204]   - Booting up control plane ...
	I0617 11:56:39.210264  161767 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0617 11:56:39.210349  161767 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0617 11:56:39.210430  161767 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0617 11:56:39.210530  161767 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0617 11:56:39.210678  161767 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0617 11:56:39.210731  161767 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0617 11:56:39.210788  161767 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 11:56:39.210955  161767 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 11:56:39.211020  161767 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 11:56:39.211175  161767 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 11:56:39.211236  161767 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 11:56:39.211389  161767 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 11:56:39.211450  161767 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 11:56:39.211632  161767 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 11:56:39.211706  161767 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 11:56:39.211890  161767 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 11:56:39.211903  161767 kubeadm.go:309] 
	I0617 11:56:39.211954  161767 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0617 11:56:39.211988  161767 kubeadm.go:309] 		timed out waiting for the condition
	I0617 11:56:39.211996  161767 kubeadm.go:309] 
	I0617 11:56:39.212024  161767 kubeadm.go:309] 	This error is likely caused by:
	I0617 11:56:39.212054  161767 kubeadm.go:309] 		- The kubelet is not running
	I0617 11:56:39.212150  161767 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0617 11:56:39.212158  161767 kubeadm.go:309] 
	I0617 11:56:39.212241  161767 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0617 11:56:39.212280  161767 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0617 11:56:39.212308  161767 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0617 11:56:39.212314  161767 kubeadm.go:309] 
	I0617 11:56:39.212403  161767 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0617 11:56:39.212481  161767 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0617 11:56:39.212491  161767 kubeadm.go:309] 
	I0617 11:56:39.212584  161767 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0617 11:56:39.212662  161767 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0617 11:56:39.212729  161767 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0617 11:56:39.212789  161767 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0617 11:56:39.212810  161767 kubeadm.go:309] 
	I0617 11:56:39.212850  161767 kubeadm.go:393] duration metric: took 3m55.460240214s to StartCluster
	I0617 11:56:39.212886  161767 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 11:56:39.212934  161767 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 11:56:39.257344  161767 cri.go:89] found id: ""
	I0617 11:56:39.257385  161767 logs.go:276] 0 containers: []
	W0617 11:56:39.257396  161767 logs.go:278] No container was found matching "kube-apiserver"
	I0617 11:56:39.257403  161767 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 11:56:39.257464  161767 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 11:56:39.295336  161767 cri.go:89] found id: ""
	I0617 11:56:39.295361  161767 logs.go:276] 0 containers: []
	W0617 11:56:39.295369  161767 logs.go:278] No container was found matching "etcd"
	I0617 11:56:39.295375  161767 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 11:56:39.295425  161767 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 11:56:39.332771  161767 cri.go:89] found id: ""
	I0617 11:56:39.332797  161767 logs.go:276] 0 containers: []
	W0617 11:56:39.332805  161767 logs.go:278] No container was found matching "coredns"
	I0617 11:56:39.332811  161767 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 11:56:39.332881  161767 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 11:56:39.372539  161767 cri.go:89] found id: ""
	I0617 11:56:39.372573  161767 logs.go:276] 0 containers: []
	W0617 11:56:39.372584  161767 logs.go:278] No container was found matching "kube-scheduler"
	I0617 11:56:39.372594  161767 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 11:56:39.372659  161767 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 11:56:39.411579  161767 cri.go:89] found id: ""
	I0617 11:56:39.411608  161767 logs.go:276] 0 containers: []
	W0617 11:56:39.411618  161767 logs.go:278] No container was found matching "kube-proxy"
	I0617 11:56:39.411626  161767 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 11:56:39.411693  161767 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 11:56:39.448186  161767 cri.go:89] found id: ""
	I0617 11:56:39.448221  161767 logs.go:276] 0 containers: []
	W0617 11:56:39.448232  161767 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 11:56:39.448239  161767 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 11:56:39.448300  161767 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 11:56:39.488022  161767 cri.go:89] found id: ""
	I0617 11:56:39.488055  161767 logs.go:276] 0 containers: []
	W0617 11:56:39.488066  161767 logs.go:278] No container was found matching "kindnet"
	I0617 11:56:39.488081  161767 logs.go:123] Gathering logs for describe nodes ...
	I0617 11:56:39.488098  161767 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 11:56:39.641180  161767 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 11:56:39.641211  161767 logs.go:123] Gathering logs for CRI-O ...
	I0617 11:56:39.641230  161767 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 11:56:39.746542  161767 logs.go:123] Gathering logs for container status ...
	I0617 11:56:39.746580  161767 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 11:56:39.787175  161767 logs.go:123] Gathering logs for kubelet ...
	I0617 11:56:39.787202  161767 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 11:56:39.838091  161767 logs.go:123] Gathering logs for dmesg ...
	I0617 11:56:39.838121  161767 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0617 11:56:39.852128  161767 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0617 11:56:39.852166  161767 out.go:239] * 
	W0617 11:56:39.852229  161767 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0617 11:56:39.852250  161767 out.go:239] * 
	W0617 11:56:39.853057  161767 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 11:56:39.856306  161767 out.go:177] 
	W0617 11:56:39.857546  161767 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0617 11:56:39.857599  161767 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0617 11:56:39.857617  161767 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0617 11:56:39.859724  161767 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-003661 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-003661 -n old-k8s-version-003661
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-003661 -n old-k8s-version-003661: exit status 6 (233.851023ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0617 11:56:40.130549  164885 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-003661" does not appear in /home/jenkins/minikube-integration/19084-112967/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-003661" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (281.37s)
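Note: the kubeadm failure above names the kubelet as the unhealthy component and prints its own troubleshooting hints ('systemctl status kubelet', 'journalctl -xeu kubelet', the crictl listing, and the cgroup-driver suggestion). A minimal sketch of how those hints could be followed against this profile is shown below; the profile name old-k8s-version-003661, the CRI-O socket path, and the start flags are taken from the run above, while the exact invocation is illustrative rather than part of the recorded test.

	# inspect kubelet state on the node, as the kubeadm error message suggests
	minikube ssh -p old-k8s-version-003661 -- sudo systemctl status kubelet
	minikube ssh -p old-k8s-version-003661 -- sudo journalctl -xeu kubelet --no-pager | tail -n 100
	# list any control-plane containers CRI-O managed to start (pause containers filtered out)
	minikube ssh -p old-k8s-version-003661 -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# the suggestion printed by minikube: retry the start with an explicit kubelet cgroup driver
	out/minikube-linux-amd64 start -p old-k8s-version-003661 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd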

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-152830 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-152830 --alsologtostderr -v=3: exit status 82 (2m0.525109695s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-152830"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 11:54:01.941180  163235 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:54:01.941520  163235 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:54:01.941537  163235 out.go:304] Setting ErrFile to fd 2...
	I0617 11:54:01.941544  163235 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:54:01.941841  163235 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:54:01.942207  163235 out.go:298] Setting JSON to false
	I0617 11:54:01.942311  163235 mustload.go:65] Loading cluster: no-preload-152830
	I0617 11:54:01.942814  163235 config.go:182] Loaded profile config "no-preload-152830": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:54:01.942925  163235 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/config.json ...
	I0617 11:54:01.943184  163235 mustload.go:65] Loading cluster: no-preload-152830
	I0617 11:54:01.943357  163235 config.go:182] Loaded profile config "no-preload-152830": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:54:01.943397  163235 stop.go:39] StopHost: no-preload-152830
	I0617 11:54:01.943957  163235 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:54:01.944047  163235 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:54:01.958748  163235 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39323
	I0617 11:54:01.959236  163235 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:54:01.959911  163235 main.go:141] libmachine: Using API Version  1
	I0617 11:54:01.959947  163235 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:54:01.960401  163235 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:54:01.962815  163235 out.go:177] * Stopping node "no-preload-152830"  ...
	I0617 11:54:01.964529  163235 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0617 11:54:01.964573  163235 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 11:54:01.964798  163235 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0617 11:54:01.964826  163235 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 11:54:01.967868  163235 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 11:54:01.968285  163235 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 12:52:51 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 11:54:01.968313  163235 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 11:54:01.968491  163235 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 11:54:01.968700  163235 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 11:54:01.968876  163235 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 11:54:01.969054  163235 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 11:54:02.082806  163235 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0617 11:54:02.146370  163235 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0617 11:54:02.208871  163235 main.go:141] libmachine: Stopping "no-preload-152830"...
	I0617 11:54:02.208902  163235 main.go:141] libmachine: (no-preload-152830) Calling .GetState
	I0617 11:54:02.210684  163235 main.go:141] libmachine: (no-preload-152830) Calling .Stop
	I0617 11:54:02.214447  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 0/120
	I0617 11:54:03.215932  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 1/120
	I0617 11:54:04.217981  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 2/120
	I0617 11:54:05.219392  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 3/120
	I0617 11:54:06.220783  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 4/120
	I0617 11:54:07.223026  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 5/120
	I0617 11:54:08.224587  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 6/120
	I0617 11:54:09.226092  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 7/120
	I0617 11:54:10.227541  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 8/120
	I0617 11:54:11.228922  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 9/120
	I0617 11:54:12.230989  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 10/120
	I0617 11:54:13.232256  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 11/120
	I0617 11:54:14.233880  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 12/120
	I0617 11:54:15.235126  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 13/120
	I0617 11:54:16.236488  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 14/120
	I0617 11:54:17.238629  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 15/120
	I0617 11:54:18.240022  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 16/120
	I0617 11:54:19.241590  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 17/120
	I0617 11:54:20.242809  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 18/120
	I0617 11:54:21.244167  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 19/120
	I0617 11:54:22.246079  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 20/120
	I0617 11:54:23.247409  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 21/120
	I0617 11:54:24.248686  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 22/120
	I0617 11:54:25.249901  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 23/120
	I0617 11:54:26.251130  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 24/120
	I0617 11:54:27.253124  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 25/120
	I0617 11:54:28.254639  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 26/120
	I0617 11:54:29.255952  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 27/120
	I0617 11:54:30.257313  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 28/120
	I0617 11:54:31.258778  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 29/120
	I0617 11:54:32.260766  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 30/120
	I0617 11:54:33.262497  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 31/120
	I0617 11:54:34.263953  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 32/120
	I0617 11:54:35.265276  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 33/120
	I0617 11:54:36.266783  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 34/120
	I0617 11:54:37.268854  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 35/120
	I0617 11:54:38.270303  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 36/120
	I0617 11:54:39.271790  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 37/120
	I0617 11:54:40.273164  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 38/120
	I0617 11:54:41.274697  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 39/120
	I0617 11:54:42.276960  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 40/120
	I0617 11:54:43.278038  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 41/120
	I0617 11:54:44.279345  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 42/120
	I0617 11:54:45.280644  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 43/120
	I0617 11:54:46.281931  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 44/120
	I0617 11:54:47.283731  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 45/120
	I0617 11:54:48.284837  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 46/120
	I0617 11:54:49.286147  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 47/120
	I0617 11:54:50.287477  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 48/120
	I0617 11:54:51.288736  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 49/120
	I0617 11:54:52.290862  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 50/120
	I0617 11:54:53.292078  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 51/120
	I0617 11:54:54.293385  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 52/120
	I0617 11:54:55.294649  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 53/120
	I0617 11:54:56.296021  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 54/120
	I0617 11:54:57.298160  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 55/120
	I0617 11:54:58.299534  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 56/120
	I0617 11:54:59.300905  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 57/120
	I0617 11:55:00.302325  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 58/120
	I0617 11:55:01.303876  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 59/120
	I0617 11:55:02.306021  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 60/120
	I0617 11:55:03.307505  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 61/120
	I0617 11:55:04.309055  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 62/120
	I0617 11:55:05.310557  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 63/120
	I0617 11:55:06.312245  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 64/120
	I0617 11:55:07.314291  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 65/120
	I0617 11:55:08.315955  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 66/120
	I0617 11:55:09.317379  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 67/120
	I0617 11:55:10.319823  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 68/120
	I0617 11:55:11.321994  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 69/120
	I0617 11:55:12.324279  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 70/120
	I0617 11:55:13.325538  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 71/120
	I0617 11:55:14.326988  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 72/120
	I0617 11:55:15.328351  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 73/120
	I0617 11:55:16.329692  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 74/120
	I0617 11:55:17.331785  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 75/120
	I0617 11:55:18.333248  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 76/120
	I0617 11:55:19.334561  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 77/120
	I0617 11:55:20.336103  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 78/120
	I0617 11:55:21.338096  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 79/120
	I0617 11:55:22.340331  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 80/120
	I0617 11:55:23.342253  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 81/120
	I0617 11:55:24.343780  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 82/120
	I0617 11:55:25.345226  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 83/120
	I0617 11:55:26.346494  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 84/120
	I0617 11:55:27.348288  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 85/120
	I0617 11:55:28.349773  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 86/120
	I0617 11:55:29.351068  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 87/120
	I0617 11:55:30.352335  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 88/120
	I0617 11:55:31.353572  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 89/120
	I0617 11:55:32.355634  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 90/120
	I0617 11:55:33.357153  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 91/120
	I0617 11:55:34.358979  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 92/120
	I0617 11:55:35.360259  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 93/120
	I0617 11:55:36.361919  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 94/120
	I0617 11:55:37.363755  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 95/120
	I0617 11:55:38.366033  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 96/120
	I0617 11:55:39.367272  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 97/120
	I0617 11:55:40.368902  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 98/120
	I0617 11:55:41.370573  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 99/120
	I0617 11:55:42.372383  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 100/120
	I0617 11:55:43.374616  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 101/120
	I0617 11:55:44.375998  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 102/120
	I0617 11:55:45.377395  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 103/120
	I0617 11:55:46.378954  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 104/120
	I0617 11:55:47.380903  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 105/120
	I0617 11:55:48.382158  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 106/120
	I0617 11:55:49.383516  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 107/120
	I0617 11:55:50.385042  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 108/120
	I0617 11:55:51.386426  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 109/120
	I0617 11:55:52.388662  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 110/120
	I0617 11:55:53.390019  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 111/120
	I0617 11:55:54.391481  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 112/120
	I0617 11:55:55.392962  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 113/120
	I0617 11:55:56.395221  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 114/120
	I0617 11:55:57.397296  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 115/120
	I0617 11:55:58.398883  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 116/120
	I0617 11:55:59.400206  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 117/120
	I0617 11:56:00.401572  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 118/120
	I0617 11:56:01.402737  163235 main.go:141] libmachine: (no-preload-152830) Waiting for machine to stop 119/120
	I0617 11:56:02.404168  163235 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0617 11:56:02.404224  163235 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0617 11:56:02.406195  163235 out.go:177] 
	W0617 11:56:02.407372  163235 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0617 11:56:02.407396  163235 out.go:239] * 
	* 
	W0617 11:56:02.410274  163235 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 11:56:02.411518  163235 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-152830 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-152830 -n no-preload-152830
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-152830 -n no-preload-152830: exit status 3 (18.519994057s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0617 11:56:20.935731  164468 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.173:22: connect: no route to host
	E0617 11:56:20.935750  164468 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.173:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-152830" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.05s)
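The wall of "Waiting for machine to stop n/120" lines above is a fixed-budget polling loop: the stop command asks the driver to shut the VM down, then re-checks the machine state roughly once per second for at most 120 attempts before giving up with GUEST_STOP_TIMEOUT (exit status 82). The following is a minimal, self-contained Go sketch of that pattern only; the vmStillRunning callback, the attempt budget, and the interval are illustrative assumptions, not minikube's actual libmachine code.

package main

import (
	"errors"
	"fmt"
	"time"
)

// errStopTimeout mirrors the `unable to stop vm, current state "Running"` error
// reported in the log once all polling attempts are exhausted.
var errStopTimeout = errors.New(`unable to stop vm, current state "Running"`)

// waitForStop polls the machine state once per interval, up to maxAttempts times.
// vmStillRunning is a stand-in for a driver state check (an assumption for this
// sketch, not part of libmachine's API).
func waitForStop(vmStillRunning func() bool, maxAttempts int, interval time.Duration) error {
	for i := 0; i < maxAttempts; i++ {
		if !vmStillRunning() {
			return nil // machine reached a stopped state
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(interval)
	}
	return errStopTimeout
}

func main() {
	// Simulate a VM that never stops, as in the failed run above
	// (short interval and budget so the demo finishes quickly).
	err := waitForStop(func() bool { return true }, 5, 10*time.Millisecond)
	fmt.Println("stop err:", err)
}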

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-136195 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-136195 --alsologtostderr -v=3: exit status 82 (2m0.672225347s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-136195"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 11:54:11.780930  163334 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:54:11.781217  163334 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:54:11.781230  163334 out.go:304] Setting ErrFile to fd 2...
	I0617 11:54:11.781237  163334 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:54:11.781491  163334 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:54:11.781795  163334 out.go:298] Setting JSON to false
	I0617 11:54:11.781896  163334 mustload.go:65] Loading cluster: embed-certs-136195
	I0617 11:54:11.782356  163334 config.go:182] Loaded profile config "embed-certs-136195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:54:11.782456  163334 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/config.json ...
	I0617 11:54:11.782689  163334 mustload.go:65] Loading cluster: embed-certs-136195
	I0617 11:54:11.782853  163334 config.go:182] Loaded profile config "embed-certs-136195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:54:11.782897  163334 stop.go:39] StopHost: embed-certs-136195
	I0617 11:54:11.783535  163334 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:54:11.783613  163334 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:54:11.798469  163334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40169
	I0617 11:54:11.799099  163334 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:54:11.799755  163334 main.go:141] libmachine: Using API Version  1
	I0617 11:54:11.799777  163334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:54:11.800109  163334 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:54:11.802377  163334 out.go:177] * Stopping node "embed-certs-136195"  ...
	I0617 11:54:11.803704  163334 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0617 11:54:11.803741  163334 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 11:54:11.804009  163334 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0617 11:54:11.804044  163334 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 11:54:11.806705  163334 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 11:54:11.807083  163334 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 12:53:16 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 11:54:11.807106  163334 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 11:54:11.807295  163334 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 11:54:11.807520  163334 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 11:54:11.807693  163334 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 11:54:11.807851  163334 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 11:54:11.915592  163334 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0617 11:54:11.976048  163334 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0617 11:54:12.015904  163334 main.go:141] libmachine: Stopping "embed-certs-136195"...
	I0617 11:54:12.015945  163334 main.go:141] libmachine: (embed-certs-136195) Calling .GetState
	I0617 11:54:12.017573  163334 main.go:141] libmachine: (embed-certs-136195) Calling .Stop
	I0617 11:54:12.020750  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 0/120
	I0617 11:54:13.022363  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 1/120
	I0617 11:54:14.023815  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 2/120
	I0617 11:54:15.025172  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 3/120
	I0617 11:54:16.026509  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 4/120
	I0617 11:54:17.028969  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 5/120
	I0617 11:54:18.030593  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 6/120
	I0617 11:54:19.031921  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 7/120
	I0617 11:54:20.034121  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 8/120
	I0617 11:54:21.035673  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 9/120
	I0617 11:54:22.037949  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 10/120
	I0617 11:54:23.039242  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 11/120
	I0617 11:54:24.040529  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 12/120
	I0617 11:54:25.041866  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 13/120
	I0617 11:54:26.043395  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 14/120
	I0617 11:54:27.045248  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 15/120
	I0617 11:54:28.046489  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 16/120
	I0617 11:54:29.047804  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 17/120
	I0617 11:54:30.049780  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 18/120
	I0617 11:54:31.051378  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 19/120
	I0617 11:54:32.052755  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 20/120
	I0617 11:54:33.054059  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 21/120
	I0617 11:54:34.055453  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 22/120
	I0617 11:54:35.056789  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 23/120
	I0617 11:54:36.058255  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 24/120
	I0617 11:54:37.060323  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 25/120
	I0617 11:54:38.061725  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 26/120
	I0617 11:54:39.063231  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 27/120
	I0617 11:54:40.064608  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 28/120
	I0617 11:54:41.066054  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 29/120
	I0617 11:54:42.068185  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 30/120
	I0617 11:54:43.069744  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 31/120
	I0617 11:54:44.071018  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 32/120
	I0617 11:54:45.072429  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 33/120
	I0617 11:54:46.073648  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 34/120
	I0617 11:54:47.075880  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 35/120
	I0617 11:54:48.077383  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 36/120
	I0617 11:54:49.078655  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 37/120
	I0617 11:54:50.080010  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 38/120
	I0617 11:54:51.081334  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 39/120
	I0617 11:54:52.083378  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 40/120
	I0617 11:54:53.084768  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 41/120
	I0617 11:54:54.086302  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 42/120
	I0617 11:54:55.087578  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 43/120
	I0617 11:54:56.089085  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 44/120
	I0617 11:54:57.091025  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 45/120
	I0617 11:54:58.092439  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 46/120
	I0617 11:54:59.093776  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 47/120
	I0617 11:55:00.095210  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 48/120
	I0617 11:55:01.096479  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 49/120
	I0617 11:55:02.097900  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 50/120
	I0617 11:55:03.099354  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 51/120
	I0617 11:55:04.101063  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 52/120
	I0617 11:55:05.102481  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 53/120
	I0617 11:55:06.103799  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 54/120
	I0617 11:55:07.106179  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 55/120
	I0617 11:55:08.107609  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 56/120
	I0617 11:55:09.109087  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 57/120
	I0617 11:55:10.110705  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 58/120
	I0617 11:55:11.112042  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 59/120
	I0617 11:55:12.114348  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 60/120
	I0617 11:55:13.116014  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 61/120
	I0617 11:55:14.117374  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 62/120
	I0617 11:55:15.118835  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 63/120
	I0617 11:55:16.120123  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 64/120
	I0617 11:55:17.122429  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 65/120
	I0617 11:55:18.123757  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 66/120
	I0617 11:55:19.125116  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 67/120
	I0617 11:55:20.126407  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 68/120
	I0617 11:55:21.127752  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 69/120
	I0617 11:55:22.129775  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 70/120
	I0617 11:55:23.131227  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 71/120
	I0617 11:55:24.133101  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 72/120
	I0617 11:55:25.134519  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 73/120
	I0617 11:55:26.136255  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 74/120
	I0617 11:55:27.138287  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 75/120
	I0617 11:55:28.139632  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 76/120
	I0617 11:55:29.141120  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 77/120
	I0617 11:55:30.142574  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 78/120
	I0617 11:55:31.144143  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 79/120
	I0617 11:55:32.146417  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 80/120
	I0617 11:55:33.147845  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 81/120
	I0617 11:55:34.150302  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 82/120
	I0617 11:55:35.151801  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 83/120
	I0617 11:55:36.154387  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 84/120
	I0617 11:55:37.156211  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 85/120
	I0617 11:55:38.157847  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 86/120
	I0617 11:55:39.159355  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 87/120
	I0617 11:55:40.161040  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 88/120
	I0617 11:55:41.162456  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 89/120
	I0617 11:55:42.164063  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 90/120
	I0617 11:55:43.166194  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 91/120
	I0617 11:55:44.167510  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 92/120
	I0617 11:55:45.169864  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 93/120
	I0617 11:55:46.171396  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 94/120
	I0617 11:55:47.173264  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 95/120
	I0617 11:55:48.174679  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 96/120
	I0617 11:55:49.176243  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 97/120
	I0617 11:55:50.177567  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 98/120
	I0617 11:55:51.179202  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 99/120
	I0617 11:55:52.181377  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 100/120
	I0617 11:55:53.182947  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 101/120
	I0617 11:55:54.184340  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 102/120
	I0617 11:55:55.185672  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 103/120
	I0617 11:55:56.187066  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 104/120
	I0617 11:55:57.189019  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 105/120
	I0617 11:55:58.191170  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 106/120
	I0617 11:55:59.380975  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 107/120
	I0617 11:56:00.382212  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 108/120
	I0617 11:56:01.383814  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 109/120
	I0617 11:56:02.385928  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 110/120
	I0617 11:56:03.387395  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 111/120
	I0617 11:56:04.388902  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 112/120
	I0617 11:56:05.390918  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 113/120
	I0617 11:56:06.392198  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 114/120
	I0617 11:56:07.394155  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 115/120
	I0617 11:56:08.395794  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 116/120
	I0617 11:56:09.397026  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 117/120
	I0617 11:56:10.398657  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 118/120
	I0617 11:56:11.400523  163334 main.go:141] libmachine: (embed-certs-136195) Waiting for machine to stop 119/120
	I0617 11:56:12.401769  163334 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0617 11:56:12.401837  163334 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0617 11:56:12.403741  163334 out.go:177] 
	W0617 11:56:12.404991  163334 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0617 11:56:12.405014  163334 out.go:239] * 
	* 
	W0617 11:56:12.407533  163334 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 11:56:12.408848  163334 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-136195 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-136195 -n embed-certs-136195
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-136195 -n embed-certs-136195: exit status 3 (18.511649157s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0617 11:56:30.919825  164541 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.199:22: connect: no route to host
	E0617 11:56:30.919849  164541 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.199:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-136195" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.18s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-152830 -n no-preload-152830
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-152830 -n no-preload-152830: exit status 3 (3.168800797s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0617 11:56:24.103826  164618 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.173:22: connect: no route to host
	E0617 11:56:24.103852  164618 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.173:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-152830 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-152830 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.151522373s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.173:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-152830 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-152830 -n no-preload-152830
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-152830 -n no-preload-152830: exit status 3 (3.064290478s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0617 11:56:33.319862  164733 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.173:22: connect: no route to host
	E0617 11:56:33.319886  164733 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.173:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-152830" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.39s)
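Both EnableAddonAfterStop failures reduce to the same symptom: enabling the dashboard addon needs an SSH session to the node, and every dial to the node's SSH endpoint (192.168.39.173:22 here, 192.168.72.199:22 for embed-certs) fails with "no route to host". The short Go probe below, using only the standard library, shows where that dial error surfaces; the address is copied from the log and the 3-second timeout is an arbitrary assumption, not a value taken from the test code.

package main

import (
	"fmt"
	"net"
	"time"
)

// reachable reports whether a TCP connection to addr can be established within
// the timeout; "no route to host" shows up here as the dial error.
func reachable(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	// 192.168.39.173:22 is the no-preload node's SSH endpoint from the log above.
	if err := reachable("192.168.39.173:22", 3*time.Second); err != nil {
		fmt.Println("node unreachable, addon enable would fail:", err)
		return
	}
	fmt.Println("node reachable")
}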

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-136195 -n embed-certs-136195
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-136195 -n embed-certs-136195: exit status 3 (3.167841568s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0617 11:56:34.087933  164763 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.199:22: connect: no route to host
	E0617 11:56:34.087973  164763 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.199:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-136195 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-136195 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.155394943s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.199:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-136195 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-136195 -n embed-certs-136195
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-136195 -n embed-certs-136195: exit status 3 (3.058693859s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0617 11:56:43.303890  164938 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.199:22: connect: no route to host
	E0617 11:56:43.303913  164938 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.199:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-136195" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-003661 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-003661 create -f testdata/busybox.yaml: exit status 1 (46.207408ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-003661" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-003661 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-003661 -n old-k8s-version-003661
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-003661 -n old-k8s-version-003661: exit status 6 (238.079554ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0617 11:56:40.420341  164925 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-003661" does not appear in /home/jenkins/minikube-integration/19084-112967/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-003661" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-003661 -n old-k8s-version-003661
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-003661 -n old-k8s-version-003661: exit status 6 (219.318924ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0617 11:56:40.640103  164985 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-003661" does not appear in /home/jenkins/minikube-integration/19084-112967/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-003661" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.50s)
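The DeployApp step fails in under a second because the kubeconfig no longer contains a context named old-k8s-version-003661, so kubectl create never reaches a cluster. The sketch below shows one way to pre-check for that condition from Go, shelling out to the standard `kubectl config get-contexts -o name` subcommand; it is an illustration of the failure mode only, not something the test harness actually does.

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
)

// contextExists reports whether the named context is present in the current
// kubeconfig, by listing context names with kubectl.
func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		if sc.Text() == name {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := contextExists("old-k8s-version-003661")
	if err != nil {
		fmt.Println("could not list contexts:", err)
		return
	}
	if !ok {
		// Matches the failure above: error: context "old-k8s-version-003661" does not exist
		fmt.Println(`context "old-k8s-version-003661" does not exist; skipping kubectl create`)
		return
	}
	fmt.Println("context present; safe to run kubectl --context old-k8s-version-003661 create -f testdata/busybox.yaml")
}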

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (95.72s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-003661 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-003661 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m35.456737833s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-003661 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-003661 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-003661 describe deploy/metrics-server -n kube-system: exit status 1 (45.199074ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-003661" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-003661 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-003661 -n old-k8s-version-003661
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-003661 -n old-k8s-version-003661: exit status 6 (215.652478ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0617 11:58:16.358488  165569 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-003661" does not appear in /home/jenkins/minikube-integration/19084-112967/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-003661" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (95.72s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-991309 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-991309 --alsologtostderr -v=3: exit status 82 (2m0.502343596s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-991309"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 11:57:05.941160  165276 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:57:05.941299  165276 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:57:05.941310  165276 out.go:304] Setting ErrFile to fd 2...
	I0617 11:57:05.941316  165276 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:57:05.941526  165276 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:57:05.941901  165276 out.go:298] Setting JSON to false
	I0617 11:57:05.942038  165276 mustload.go:65] Loading cluster: default-k8s-diff-port-991309
	I0617 11:57:05.943215  165276 config.go:182] Loaded profile config "default-k8s-diff-port-991309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:57:05.943348  165276 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/config.json ...
	I0617 11:57:05.943660  165276 mustload.go:65] Loading cluster: default-k8s-diff-port-991309
	I0617 11:57:05.943843  165276 config.go:182] Loaded profile config "default-k8s-diff-port-991309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:57:05.943886  165276 stop.go:39] StopHost: default-k8s-diff-port-991309
	I0617 11:57:05.944491  165276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:57:05.944553  165276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:57:05.959243  165276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35189
	I0617 11:57:05.959739  165276 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:57:05.960342  165276 main.go:141] libmachine: Using API Version  1
	I0617 11:57:05.960366  165276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:57:05.960715  165276 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:57:05.963170  165276 out.go:177] * Stopping node "default-k8s-diff-port-991309"  ...
	I0617 11:57:05.964538  165276 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0617 11:57:05.964574  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 11:57:05.964832  165276 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0617 11:57:05.964857  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 11:57:05.967923  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 11:57:05.968444  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 12:56:14 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 11:57:05.968481  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 11:57:05.968782  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 11:57:05.968984  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 11:57:05.969203  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 11:57:05.969365  165276 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 11:57:06.058976  165276 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0617 11:57:06.128177  165276 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0617 11:57:06.194714  165276 main.go:141] libmachine: Stopping "default-k8s-diff-port-991309"...
	I0617 11:57:06.194754  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetState
	I0617 11:57:06.196528  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Stop
	I0617 11:57:06.199975  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 0/120
	I0617 11:57:07.201559  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 1/120
	I0617 11:57:08.203033  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 2/120
	I0617 11:57:09.204467  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 3/120
	I0617 11:57:10.205924  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 4/120
	I0617 11:57:11.208176  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 5/120
	I0617 11:57:12.209897  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 6/120
	I0617 11:57:13.211556  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 7/120
	I0617 11:57:14.213013  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 8/120
	I0617 11:57:15.214760  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 9/120
	I0617 11:57:16.216998  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 10/120
	I0617 11:57:17.218530  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 11/120
	I0617 11:57:18.219969  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 12/120
	I0617 11:57:19.221647  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 13/120
	I0617 11:57:20.223026  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 14/120
	I0617 11:57:21.225033  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 15/120
	I0617 11:57:22.226404  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 16/120
	I0617 11:57:23.227909  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 17/120
	I0617 11:57:24.229454  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 18/120
	I0617 11:57:25.231050  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 19/120
	I0617 11:57:26.233285  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 20/120
	I0617 11:57:27.234724  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 21/120
	I0617 11:57:28.236114  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 22/120
	I0617 11:57:29.237460  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 23/120
	I0617 11:57:30.238963  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 24/120
	I0617 11:57:31.241015  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 25/120
	I0617 11:57:32.242271  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 26/120
	I0617 11:57:33.243857  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 27/120
	I0617 11:57:34.245283  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 28/120
	I0617 11:57:35.246797  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 29/120
	I0617 11:57:36.248883  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 30/120
	I0617 11:57:37.250359  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 31/120
	I0617 11:57:38.251648  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 32/120
	I0617 11:57:39.253214  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 33/120
	I0617 11:57:40.254373  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 34/120
	I0617 11:57:41.256445  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 35/120
	I0617 11:57:42.257885  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 36/120
	I0617 11:57:43.259708  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 37/120
	I0617 11:57:44.261015  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 38/120
	I0617 11:57:45.262407  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 39/120
	I0617 11:57:46.264561  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 40/120
	I0617 11:57:47.266169  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 41/120
	I0617 11:57:48.267548  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 42/120
	I0617 11:57:49.268941  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 43/120
	I0617 11:57:50.270259  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 44/120
	I0617 11:57:51.272551  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 45/120
	I0617 11:57:52.274193  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 46/120
	I0617 11:57:53.275716  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 47/120
	I0617 11:57:54.278126  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 48/120
	I0617 11:57:55.279474  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 49/120
	I0617 11:57:56.281669  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 50/120
	I0617 11:57:57.283081  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 51/120
	I0617 11:57:58.284459  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 52/120
	I0617 11:57:59.286071  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 53/120
	I0617 11:58:00.287311  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 54/120
	I0617 11:58:01.289661  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 55/120
	I0617 11:58:02.291365  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 56/120
	I0617 11:58:03.292988  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 57/120
	I0617 11:58:04.294362  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 58/120
	I0617 11:58:05.295843  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 59/120
	I0617 11:58:06.298109  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 60/120
	I0617 11:58:07.300036  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 61/120
	I0617 11:58:08.301406  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 62/120
	I0617 11:58:09.302939  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 63/120
	I0617 11:58:10.304230  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 64/120
	I0617 11:58:11.306012  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 65/120
	I0617 11:58:12.307552  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 66/120
	I0617 11:58:13.308855  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 67/120
	I0617 11:58:14.310365  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 68/120
	I0617 11:58:15.311941  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 69/120
	I0617 11:58:16.313742  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 70/120
	I0617 11:58:17.314828  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 71/120
	I0617 11:58:18.316278  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 72/120
	I0617 11:58:19.317766  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 73/120
	I0617 11:58:20.319185  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 74/120
	I0617 11:58:21.321234  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 75/120
	I0617 11:58:22.322718  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 76/120
	I0617 11:58:23.324200  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 77/120
	I0617 11:58:24.325567  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 78/120
	I0617 11:58:25.327119  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 79/120
	I0617 11:58:26.329565  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 80/120
	I0617 11:58:27.331104  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 81/120
	I0617 11:58:28.332665  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 82/120
	I0617 11:58:29.334047  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 83/120
	I0617 11:58:30.335545  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 84/120
	I0617 11:58:31.337580  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 85/120
	I0617 11:58:32.339037  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 86/120
	I0617 11:58:33.340419  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 87/120
	I0617 11:58:34.342004  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 88/120
	I0617 11:58:35.343316  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 89/120
	I0617 11:58:36.345580  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 90/120
	I0617 11:58:37.347050  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 91/120
	I0617 11:58:38.348529  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 92/120
	I0617 11:58:39.349886  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 93/120
	I0617 11:58:40.351354  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 94/120
	I0617 11:58:41.353454  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 95/120
	I0617 11:58:42.355005  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 96/120
	I0617 11:58:43.356342  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 97/120
	I0617 11:58:44.357842  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 98/120
	I0617 11:58:45.359143  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 99/120
	I0617 11:58:46.360551  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 100/120
	I0617 11:58:47.361920  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 101/120
	I0617 11:58:48.363400  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 102/120
	I0617 11:58:49.364901  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 103/120
	I0617 11:58:50.366420  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 104/120
	I0617 11:58:51.368538  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 105/120
	I0617 11:58:52.370121  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 106/120
	I0617 11:58:53.371613  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 107/120
	I0617 11:58:54.373077  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 108/120
	I0617 11:58:55.374334  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 109/120
	I0617 11:58:56.376623  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 110/120
	I0617 11:58:57.378192  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 111/120
	I0617 11:58:58.379475  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 112/120
	I0617 11:58:59.380963  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 113/120
	I0617 11:59:00.382480  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 114/120
	I0617 11:59:01.384752  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 115/120
	I0617 11:59:02.386626  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 116/120
	I0617 11:59:03.387882  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 117/120
	I0617 11:59:04.389387  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 118/120
	I0617 11:59:05.390843  165276 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for machine to stop 119/120
	I0617 11:59:06.392324  165276 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0617 11:59:06.392406  165276 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0617 11:59:06.394397  165276 out.go:177] 
	W0617 11:59:06.395720  165276 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0617 11:59:06.395756  165276 out.go:239] * 
	* 
	W0617 11:59:06.398281  165276 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 11:59:06.399522  165276 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-991309 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-991309 -n default-k8s-diff-port-991309
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-991309 -n default-k8s-diff-port-991309: exit status 3 (18.598721221s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0617 11:59:24.999821  165882 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.125:22: connect: no route to host
	E0617 11:59:24.999854  165882 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.125:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-991309" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.10s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (701.9s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-003661 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0617 11:58:57.397541  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-003661 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m38.421563188s)

                                                
                                                
-- stdout --
	* [old-k8s-version-003661] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19084
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-003661" primary control-plane node in "old-k8s-version-003661" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-003661" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 11:58:17.975404  165698 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:58:17.975654  165698 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:58:17.975664  165698 out.go:304] Setting ErrFile to fd 2...
	I0617 11:58:17.975668  165698 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:58:17.975860  165698 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:58:17.976409  165698 out.go:298] Setting JSON to false
	I0617 11:58:17.977343  165698 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6045,"bootTime":1718619453,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0617 11:58:17.977401  165698 start.go:139] virtualization: kvm guest
	I0617 11:58:17.979688  165698 out.go:177] * [old-k8s-version-003661] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0617 11:58:17.981080  165698 out.go:177]   - MINIKUBE_LOCATION=19084
	I0617 11:58:17.981117  165698 notify.go:220] Checking for updates...
	I0617 11:58:17.982637  165698 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 11:58:17.984075  165698 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 11:58:17.985387  165698 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 11:58:17.986489  165698 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0617 11:58:17.987967  165698 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 11:58:17.989702  165698 config.go:182] Loaded profile config "old-k8s-version-003661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0617 11:58:17.990059  165698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:58:17.990100  165698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:58:18.005092  165698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46327
	I0617 11:58:18.005489  165698 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:58:18.006042  165698 main.go:141] libmachine: Using API Version  1
	I0617 11:58:18.006061  165698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:58:18.006404  165698 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:58:18.006581  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 11:58:18.008448  165698 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0617 11:58:18.009570  165698 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 11:58:18.009894  165698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:58:18.009933  165698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:58:18.024703  165698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44381
	I0617 11:58:18.025094  165698 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:58:18.025546  165698 main.go:141] libmachine: Using API Version  1
	I0617 11:58:18.025570  165698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:58:18.025883  165698 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:58:18.026085  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 11:58:18.060260  165698 out.go:177] * Using the kvm2 driver based on existing profile
	I0617 11:58:18.061451  165698 start.go:297] selected driver: kvm2
	I0617 11:58:18.061468  165698 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-003661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-003661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.164 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:58:18.061579  165698 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 11:58:18.062217  165698 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:58:18.062303  165698 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19084-112967/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0617 11:58:18.077049  165698 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0617 11:58:18.077390  165698 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 11:58:18.077425  165698 cni.go:84] Creating CNI manager for ""
	I0617 11:58:18.077432  165698 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 11:58:18.077472  165698 start.go:340] cluster config:
	{Name:old-k8s-version-003661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-003661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.164 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:58:18.077556  165698 iso.go:125] acquiring lock: {Name:mk4a199ad46ed9ee04de7b54caf7cc64218fe80c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:58:18.080072  165698 out.go:177] * Starting "old-k8s-version-003661" primary control-plane node in "old-k8s-version-003661" cluster
	I0617 11:58:18.081064  165698 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0617 11:58:18.081097  165698 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0617 11:58:18.081111  165698 cache.go:56] Caching tarball of preloaded images
	I0617 11:58:18.081181  165698 preload.go:173] Found /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0617 11:58:18.081192  165698 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0617 11:58:18.081283  165698 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/config.json ...
	I0617 11:58:18.081451  165698 start.go:360] acquireMachinesLock for old-k8s-version-003661: {Name:mk519b8956d160a9d2b042f25b899a5ee0efa72e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 12:01:30.728529  165698 start.go:364] duration metric: took 3m12.647041864s to acquireMachinesLock for "old-k8s-version-003661"
	I0617 12:01:30.728602  165698 start.go:96] Skipping create...Using existing machine configuration
	I0617 12:01:30.728613  165698 fix.go:54] fixHost starting: 
	I0617 12:01:30.729036  165698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:30.729090  165698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:30.746528  165698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35355
	I0617 12:01:30.746982  165698 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:30.747493  165698 main.go:141] libmachine: Using API Version  1
	I0617 12:01:30.747516  165698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:30.747847  165698 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:30.748060  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:30.748186  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetState
	I0617 12:01:30.750035  165698 fix.go:112] recreateIfNeeded on old-k8s-version-003661: state=Stopped err=<nil>
	I0617 12:01:30.750072  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	W0617 12:01:30.750206  165698 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 12:01:30.752196  165698 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-003661" ...
	I0617 12:01:30.753437  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .Start
	I0617 12:01:30.753608  165698 main.go:141] libmachine: (old-k8s-version-003661) Ensuring networks are active...
	I0617 12:01:30.754272  165698 main.go:141] libmachine: (old-k8s-version-003661) Ensuring network default is active
	I0617 12:01:30.754600  165698 main.go:141] libmachine: (old-k8s-version-003661) Ensuring network mk-old-k8s-version-003661 is active
	I0617 12:01:30.754967  165698 main.go:141] libmachine: (old-k8s-version-003661) Getting domain xml...
	I0617 12:01:30.755739  165698 main.go:141] libmachine: (old-k8s-version-003661) Creating domain...
	I0617 12:01:32.029080  165698 main.go:141] libmachine: (old-k8s-version-003661) Waiting to get IP...
	I0617 12:01:32.029902  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:32.030401  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:32.030477  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:32.030384  166594 retry.go:31] will retry after 191.846663ms: waiting for machine to come up
	I0617 12:01:32.223912  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:32.224300  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:32.224328  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:32.224276  166594 retry.go:31] will retry after 341.806498ms: waiting for machine to come up
	I0617 12:01:32.568066  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:32.568648  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:32.568682  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:32.568575  166594 retry.go:31] will retry after 359.779948ms: waiting for machine to come up
	I0617 12:01:32.930210  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:32.930652  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:32.930675  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:32.930604  166594 retry.go:31] will retry after 548.549499ms: waiting for machine to come up
	I0617 12:01:33.480493  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:33.480965  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:33.481004  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:33.480931  166594 retry.go:31] will retry after 636.044066ms: waiting for machine to come up
	I0617 12:01:34.118880  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:34.119361  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:34.119394  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:34.119299  166594 retry.go:31] will retry after 637.085777ms: waiting for machine to come up
	I0617 12:01:34.757614  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:34.758097  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:34.758126  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:34.758051  166594 retry.go:31] will retry after 921.652093ms: waiting for machine to come up
	I0617 12:01:35.681846  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:35.682324  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:35.682351  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:35.682269  166594 retry.go:31] will retry after 1.1106801s: waiting for machine to come up
	I0617 12:01:36.794411  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:36.794845  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:36.794869  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:36.794793  166594 retry.go:31] will retry after 1.323395845s: waiting for machine to come up
	I0617 12:01:38.119805  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:38.297858  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:38.297905  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:38.120293  166594 retry.go:31] will retry after 1.769592858s: waiting for machine to come up
	I0617 12:01:39.892495  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:39.893035  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:39.893065  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:39.892948  166594 retry.go:31] will retry after 1.954570801s: waiting for machine to come up
	I0617 12:01:41.849587  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:41.850111  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:41.850140  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:41.850067  166594 retry.go:31] will retry after 3.44879626s: waiting for machine to come up
	I0617 12:01:45.300413  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:45.300848  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:45.300878  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:45.300794  166594 retry.go:31] will retry after 3.892148485s: waiting for machine to come up
	I0617 12:01:49.197189  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.197671  165698 main.go:141] libmachine: (old-k8s-version-003661) Found IP for machine: 192.168.61.164
	I0617 12:01:49.197697  165698 main.go:141] libmachine: (old-k8s-version-003661) Reserving static IP address...
	I0617 12:01:49.197714  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has current primary IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.198147  165698 main.go:141] libmachine: (old-k8s-version-003661) Reserved static IP address: 192.168.61.164
	I0617 12:01:49.198175  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "old-k8s-version-003661", mac: "52:54:00:76:66:a0", ip: "192.168.61.164"} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.198185  165698 main.go:141] libmachine: (old-k8s-version-003661) Waiting for SSH to be available...
	I0617 12:01:49.198217  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | skip adding static IP to network mk-old-k8s-version-003661 - found existing host DHCP lease matching {name: "old-k8s-version-003661", mac: "52:54:00:76:66:a0", ip: "192.168.61.164"}
	I0617 12:01:49.198227  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | Getting to WaitForSSH function...
	I0617 12:01:49.200478  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.200907  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.200935  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.201088  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | Using SSH client type: external
	I0617 12:01:49.201116  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa (-rw-------)
	I0617 12:01:49.201154  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 12:01:49.201169  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | About to run SSH command:
	I0617 12:01:49.201183  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | exit 0
	I0617 12:01:49.323763  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | SSH cmd err, output: <nil>: 
	I0617 12:01:49.324127  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetConfigRaw
	I0617 12:01:49.324835  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetIP
	I0617 12:01:49.327217  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.327628  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.327660  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.327891  165698 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/config.json ...
	I0617 12:01:49.328097  165698 machine.go:94] provisionDockerMachine start ...
	I0617 12:01:49.328120  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:49.328365  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:49.330587  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.330992  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.331033  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.331160  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:49.331324  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.331490  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.331637  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:49.331824  165698 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:49.332037  165698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 12:01:49.332049  165698 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 12:01:49.432170  165698 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0617 12:01:49.432201  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetMachineName
	I0617 12:01:49.432498  165698 buildroot.go:166] provisioning hostname "old-k8s-version-003661"
	I0617 12:01:49.432524  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetMachineName
	I0617 12:01:49.432730  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:49.435845  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.436276  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.436317  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.436507  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:49.436708  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.436909  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.437074  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:49.437289  165698 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:49.437496  165698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 12:01:49.437510  165698 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-003661 && echo "old-k8s-version-003661" | sudo tee /etc/hostname
	I0617 12:01:49.550158  165698 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-003661
	
	I0617 12:01:49.550187  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:49.553141  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.553509  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.553539  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.553737  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:49.553943  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.554141  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.554298  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:49.554520  165698 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:49.554759  165698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 12:01:49.554787  165698 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-003661' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-003661/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-003661' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 12:01:49.661049  165698 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 12:01:49.661079  165698 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 12:01:49.661106  165698 buildroot.go:174] setting up certificates
	I0617 12:01:49.661115  165698 provision.go:84] configureAuth start
	I0617 12:01:49.661124  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetMachineName
	I0617 12:01:49.661452  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetIP
	I0617 12:01:49.664166  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.664561  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.664591  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.664723  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:49.666845  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.667114  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.667158  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.667287  165698 provision.go:143] copyHostCerts
	I0617 12:01:49.667377  165698 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 12:01:49.667387  165698 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 12:01:49.667440  165698 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 12:01:49.667561  165698 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 12:01:49.667571  165698 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 12:01:49.667594  165698 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 12:01:49.667649  165698 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 12:01:49.667656  165698 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 12:01:49.667674  165698 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 12:01:49.667722  165698 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-003661 san=[127.0.0.1 192.168.61.164 localhost minikube old-k8s-version-003661]
	I0617 12:01:49.853671  165698 provision.go:177] copyRemoteCerts
	I0617 12:01:49.853736  165698 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 12:01:49.853767  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:49.856171  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.856540  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.856577  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.856737  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:49.857071  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.857220  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:49.857360  165698 sshutil.go:53] new ssh client: &{IP:192.168.61.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa Username:docker}
	I0617 12:01:49.938626  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 12:01:49.964401  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0617 12:01:49.988397  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
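The block above generates a server certificate for the machine and then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest over SSH. As a rough sketch of that step (not minikube's own ssh_runner code), the same copy can be driven from Go by shelling out to scp; the host, user, key path and destination below are placeholders, not values reused from this run.

package main

import (
	"fmt"
	"os/exec"
)

// copyCert copies a local PEM file to a remote path over scp, mirroring the
// "scp ... --> /etc/docker/ca.pem" steps in the log above. Host, key path and
// destination are placeholders, not values taken from this run.
func copyCert(localPath, remotePath, userAtHost, keyPath string) error {
	cmd := exec.Command("scp",
		"-i", keyPath,
		"-o", "StrictHostKeyChecking=no",
		localPath,
		userAtHost+":"+remotePath,
	)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("scp %s: %v: %s", localPath, err, out)
	}
	return nil
}

func main() {
	if err := copyCert("certs/ca.pem", "/tmp/ca.pem", "docker@192.0.2.10", "id_rsa"); err != nil {
		fmt.Println(err)
	}
}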
	I0617 12:01:50.013356  165698 provision.go:87] duration metric: took 352.227211ms to configureAuth
	I0617 12:01:50.013382  165698 buildroot.go:189] setting minikube options for container-runtime
	I0617 12:01:50.013581  165698 config.go:182] Loaded profile config "old-k8s-version-003661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0617 12:01:50.013689  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:50.016168  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.016514  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.016548  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.016657  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:50.016847  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.017025  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.017152  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:50.017300  165698 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:50.017483  165698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 12:01:50.017505  165698 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 12:01:50.280037  165698 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 12:01:50.280065  165698 machine.go:97] duration metric: took 951.954687ms to provisionDockerMachine
	I0617 12:01:50.280076  165698 start.go:293] postStartSetup for "old-k8s-version-003661" (driver="kvm2")
	I0617 12:01:50.280086  165698 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 12:01:50.280102  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:50.280467  165698 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 12:01:50.280506  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:50.283318  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.283657  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.283684  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.283874  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:50.284106  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.284279  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:50.284402  165698 sshutil.go:53] new ssh client: &{IP:192.168.61.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa Username:docker}
	I0617 12:01:50.362452  165698 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 12:01:50.366699  165698 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 12:01:50.366726  165698 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 12:01:50.366788  165698 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 12:01:50.366878  165698 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 12:01:50.367004  165698 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 12:01:50.376706  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:01:50.399521  165698 start.go:296] duration metric: took 119.43167ms for postStartSetup
	I0617 12:01:50.399558  165698 fix.go:56] duration metric: took 19.670946478s for fixHost
	I0617 12:01:50.399578  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:50.402079  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.402465  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.402500  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.402649  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:50.402835  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.402994  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.403138  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:50.403321  165698 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:50.403529  165698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 12:01:50.403541  165698 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0617 12:01:50.500267  165698 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718625710.471154465
	
	I0617 12:01:50.500294  165698 fix.go:216] guest clock: 1718625710.471154465
	I0617 12:01:50.500304  165698 fix.go:229] Guest: 2024-06-17 12:01:50.471154465 +0000 UTC Remote: 2024-06-17 12:01:50.399561534 +0000 UTC m=+212.458541959 (delta=71.592931ms)
	I0617 12:01:50.500350  165698 fix.go:200] guest clock delta is within tolerance: 71.592931ms
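The fix.go lines above read the guest clock over SSH and accept the drift when it is within tolerance (71.592931ms here). A minimal sketch of that comparison using the timestamps from this run; the 2s limit is an assumed illustration, since the excerpt does not state minikube's actual threshold.

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest/host clock delta is small enough
// to skip a resync, the question behind the "guest clock delta is within
// tolerance" line above. The 2s limit is an assumption for this sketch.
func withinTolerance(guest, host time.Time, limit time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= limit
}

func main() {
	// Timestamps taken from the log lines above.
	guest := time.Date(2024, 6, 17, 12, 1, 50, 471154465, time.UTC)
	host := time.Date(2024, 6, 17, 12, 1, 50, 399561534, time.UTC)
	delta, ok := withinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // delta≈71.592931ms
}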
	I0617 12:01:50.500355  165698 start.go:83] releasing machines lock for "old-k8s-version-003661", held for 19.771784344s
	I0617 12:01:50.500380  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:50.500648  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetIP
	I0617 12:01:50.503346  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.503749  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.503776  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.503974  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:50.504536  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:50.504676  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:50.504750  165698 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 12:01:50.504801  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:50.504861  165698 ssh_runner.go:195] Run: cat /version.json
	I0617 12:01:50.504890  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:50.507577  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.507736  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.508013  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.508041  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.508176  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.508200  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.508205  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:50.508335  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:50.508419  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.508499  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.508580  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:50.508691  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:50.508717  165698 sshutil.go:53] new ssh client: &{IP:192.168.61.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa Username:docker}
	I0617 12:01:50.508830  165698 sshutil.go:53] new ssh client: &{IP:192.168.61.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa Username:docker}
	I0617 12:01:50.585030  165698 ssh_runner.go:195] Run: systemctl --version
	I0617 12:01:50.612492  165698 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 12:01:50.765842  165698 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 12:01:50.773214  165698 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 12:01:50.773288  165698 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 12:01:50.793397  165698 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 12:01:50.793424  165698 start.go:494] detecting cgroup driver to use...
	I0617 12:01:50.793499  165698 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 12:01:50.811531  165698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 12:01:50.826223  165698 docker.go:217] disabling cri-docker service (if available) ...
	I0617 12:01:50.826289  165698 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 12:01:50.840517  165698 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 12:01:50.854788  165698 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 12:01:50.970328  165698 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 12:01:51.125815  165698 docker.go:233] disabling docker service ...
	I0617 12:01:51.125893  165698 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 12:01:51.146368  165698 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 12:01:51.161459  165698 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 12:01:51.346032  165698 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 12:01:51.503395  165698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 12:01:51.521021  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 12:01:51.543851  165698 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0617 12:01:51.543905  165698 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:51.556230  165698 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 12:01:51.556309  165698 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:51.573061  165698 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:51.588663  165698 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
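The sed commands above rewrite pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf. Below is a hedged Go sketch of the same key = "value" rewrite; it assumes the key already appears once in the drop-in file and is not the code minikube runs, which shells out to sed as shown.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCrioOption rewrites a `key = "value"` line in a CRI-O drop-in config,
// the same edit the sed commands above apply to pause_image and
// cgroup_manager. Sketch only; it assumes the key already appears in the file.
func setCrioOption(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	updated := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, updated, 0o644)
}

func main() {
	if err := setCrioOption("/etc/crio/crio.conf.d/02-crio.conf", "pause_image", "registry.k8s.io/pause:3.2"); err != nil {
		fmt.Println(err)
	}
}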
	I0617 12:01:51.601086  165698 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 12:01:51.617347  165698 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 12:01:51.634502  165698 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 12:01:51.634635  165698 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 12:01:51.652813  165698 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
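The sequence above probes the bridge-nf-call-iptables sysctl, loads br_netfilter when the probe fails, and then enables IPv4 forwarding. A sketch of that check-then-fallback order in Go, meant to run as root on the guest; it mirrors the commands in the log rather than minikube's implementation.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the order of operations in the log: check for
// the bridge-nf-call-iptables sysctl, load br_netfilter if it is missing, and
// enable IPv4 forwarding. Sketch only; requires root on a Linux guest.
func ensureBridgeNetfilter() error {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		// Same fallback as "sudo modprobe br_netfilter" above.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("netfilter setup:", err)
	}
}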
	I0617 12:01:51.665145  165698 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:01:51.826713  165698 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0617 12:01:51.981094  165698 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 12:01:51.981186  165698 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 12:01:51.986026  165698 start.go:562] Will wait 60s for crictl version
	I0617 12:01:51.986091  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:51.990253  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 12:01:52.032543  165698 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 12:01:52.032631  165698 ssh_runner.go:195] Run: crio --version
	I0617 12:01:52.063904  165698 ssh_runner.go:195] Run: crio --version
	I0617 12:01:52.097158  165698 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0617 12:01:52.098675  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetIP
	I0617 12:01:52.102187  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:52.102572  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:52.102603  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:52.102823  165698 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0617 12:01:52.107573  165698 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:01:52.121312  165698 kubeadm.go:877] updating cluster {Name:old-k8s-version-003661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-003661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.164 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0617 12:01:52.121448  165698 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0617 12:01:52.121515  165698 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:01:52.181796  165698 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0617 12:01:52.181891  165698 ssh_runner.go:195] Run: which lz4
	I0617 12:01:52.186827  165698 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0617 12:01:52.191806  165698 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0617 12:01:52.191875  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0617 12:01:54.026903  165698 crio.go:462] duration metric: took 1.840117639s to copy over tarball
	I0617 12:01:54.027003  165698 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0617 12:01:57.049870  165698 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.022814584s)
	I0617 12:01:57.049904  165698 crio.go:469] duration metric: took 3.022967677s to extract the tarball
	I0617 12:01:57.049914  165698 ssh_runner.go:146] rm: /preloaded.tar.lz4
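The preload handling above copies preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 to /preloaded.tar.lz4, extracts it into /var with tar -I lz4, then removes it. A sketch of the extraction step only, wrapping the same tar invocation shown in the log; it assumes root and an lz4 binary on the guest.

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload runs the same extraction command the log shows for the
// preloaded image tarball: tar with the lz4 decompressor into /var.
// Running it requires root and an lz4 binary on the guest.
func extractPreload(tarball, dest string) error {
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball,
	)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}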
	I0617 12:01:57.094589  165698 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:01:57.133299  165698 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0617 12:01:57.133331  165698 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0617 12:01:57.133431  165698 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:01:57.133451  165698 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0617 12:01:57.133456  165698 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0617 12:01:57.133477  165698 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0617 12:01:57.133431  165698 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0617 12:01:57.133530  165698 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0617 12:01:57.133431  165698 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 12:01:57.133626  165698 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0617 12:01:57.135979  165698 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 12:01:57.135990  165698 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0617 12:01:57.135994  165698 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0617 12:01:57.135979  165698 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0617 12:01:57.135985  165698 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:01:57.135979  165698 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0617 12:01:57.136041  165698 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0617 12:01:57.136041  165698 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0617 12:01:57.289271  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0617 12:01:57.299061  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 12:01:57.322581  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0617 12:01:57.336462  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0617 12:01:57.337619  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0617 12:01:57.350335  165698 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0617 12:01:57.350395  165698 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0617 12:01:57.350448  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.357972  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0617 12:01:57.391517  165698 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0617 12:01:57.391563  165698 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 12:01:57.391640  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.419438  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0617 12:01:57.442111  165698 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0617 12:01:57.442154  165698 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0617 12:01:57.442200  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.450145  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:01:57.485873  165698 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0617 12:01:57.485922  165698 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0617 12:01:57.485942  165698 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0617 12:01:57.485957  165698 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0617 12:01:57.485996  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.486003  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.486053  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0617 12:01:57.490584  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 12:01:57.490669  165698 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0617 12:01:57.490714  165698 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0617 12:01:57.490755  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.551564  165698 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0617 12:01:57.551597  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0617 12:01:57.551619  165698 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0617 12:01:57.551662  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.660683  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0617 12:01:57.660732  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0617 12:01:57.660799  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0617 12:01:57.660856  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0617 12:01:57.660734  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0617 12:01:57.660903  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0617 12:01:57.660930  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0617 12:01:57.753965  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0617 12:01:57.753981  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0617 12:01:57.754069  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0617 12:01:57.754069  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0617 12:01:57.754146  165698 cache_images.go:92] duration metric: took 620.797178ms to LoadCachedImages
	W0617 12:01:57.754271  165698 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0617 12:01:57.754292  165698 kubeadm.go:928] updating node { 192.168.61.164 8443 v1.20.0 crio true true} ...
	I0617 12:01:57.754415  165698 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-003661 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-003661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 12:01:57.754489  165698 ssh_runner.go:195] Run: crio config
	I0617 12:01:57.807120  165698 cni.go:84] Creating CNI manager for ""
	I0617 12:01:57.807144  165698 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:01:57.807158  165698 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 12:01:57.807182  165698 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.164 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-003661 NodeName:old-k8s-version-003661 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0617 12:01:57.807370  165698 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.164
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-003661"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.164
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.164"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0617 12:01:57.807437  165698 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0617 12:01:57.817865  165698 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 12:01:57.817940  165698 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 12:01:57.829796  165698 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0617 12:01:57.847758  165698 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 12:01:57.866182  165698 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0617 12:01:57.884500  165698 ssh_runner.go:195] Run: grep 192.168.61.164	control-plane.minikube.internal$ /etc/hosts
	I0617 12:01:57.888852  165698 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.164	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:01:57.902176  165698 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:01:58.049361  165698 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:01:58.067893  165698 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661 for IP: 192.168.61.164
	I0617 12:01:58.067924  165698 certs.go:194] generating shared ca certs ...
	I0617 12:01:58.067945  165698 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:01:58.068162  165698 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 12:01:58.068221  165698 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 12:01:58.068236  165698 certs.go:256] generating profile certs ...
	I0617 12:01:58.068352  165698 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/client.key
	I0617 12:01:58.068438  165698 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/apiserver.key.6c1f259c
	I0617 12:01:58.068493  165698 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/proxy-client.key
	I0617 12:01:58.068647  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 12:01:58.068690  165698 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 12:01:58.068704  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 12:01:58.068743  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 12:01:58.068790  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 12:01:58.068824  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 12:01:58.068877  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:01:58.069548  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 12:01:58.109048  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 12:01:58.134825  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 12:01:58.159910  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 12:01:58.191108  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0617 12:01:58.217407  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 12:01:58.242626  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 12:01:58.267261  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0617 12:01:58.291562  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 12:01:58.321848  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 12:01:58.352361  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 12:01:58.379343  165698 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 12:01:58.399146  165698 ssh_runner.go:195] Run: openssl version
	I0617 12:01:58.405081  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 12:01:58.415471  165698 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:58.420046  165698 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:58.420099  165698 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:58.425886  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 12:01:58.436575  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 12:01:58.447166  165698 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 12:01:58.451523  165698 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 12:01:58.451582  165698 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 12:01:58.457670  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
	I0617 12:01:58.468667  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 12:01:58.479095  165698 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 12:01:58.483744  165698 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 12:01:58.483796  165698 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 12:01:58.489520  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 12:01:58.500298  165698 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 12:01:58.504859  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 12:01:58.510619  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 12:01:58.516819  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 12:01:58.522837  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 12:01:58.528736  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 12:01:58.534585  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
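The openssl x509 -checkend 86400 runs above ask whether each control-plane certificate expires within the next 24 hours. An equivalent check in Go using crypto/x509; the certificate path in main is illustrative.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within the given window, the same question `openssl x509 -checkend 86400`
// answers in the log above.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}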
	I0617 12:01:58.540464  165698 kubeadm.go:391] StartCluster: {Name:old-k8s-version-003661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-003661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.164 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:01:58.540549  165698 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 12:01:58.540624  165698 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:01:58.583638  165698 cri.go:89] found id: ""
	I0617 12:01:58.583724  165698 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0617 12:01:58.594266  165698 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0617 12:01:58.594290  165698 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0617 12:01:58.594295  165698 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0617 12:01:58.594354  165698 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0617 12:01:58.604415  165698 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0617 12:01:58.605367  165698 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-003661" does not appear in /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 12:01:58.605949  165698 kubeconfig.go:62] /home/jenkins/minikube-integration/19084-112967/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-003661" cluster setting kubeconfig missing "old-k8s-version-003661" context setting]
	I0617 12:01:58.606833  165698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/kubeconfig: {Name:mkf81bd1831c0194f784e5c176b265c5061bea5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:01:58.662621  165698 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0617 12:01:58.673813  165698 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.164
	I0617 12:01:58.673848  165698 kubeadm.go:1154] stopping kube-system containers ...
	I0617 12:01:58.673863  165698 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0617 12:01:58.673907  165698 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:01:58.712607  165698 cri.go:89] found id: ""
	I0617 12:01:58.712703  165698 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0617 12:01:58.731676  165698 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:01:58.741645  165698 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:01:58.741666  165698 kubeadm.go:156] found existing configuration files:
	
	I0617 12:01:58.741709  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:01:58.750871  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:01:58.750931  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:01:58.760545  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:01:58.769701  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:01:58.769776  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:01:58.779348  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:01:58.788507  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:01:58.788566  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:01:58.799220  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:01:58.808403  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:01:58.808468  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 12:01:58.818169  165698 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:01:58.828079  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:58.962164  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:59.679319  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:59.903216  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:00.026243  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
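	The five kubeadm phases above regenerate certificates, kubeconfigs, the kubelet bootstrap config, and the control-plane/etcd static pod manifests before the apiserver wait below begins. As a minimal, illustrative Go sketch (not part of minikube or this test; the file name manifestchk.go and the check itself are assumptions), one could verify on the node that the standard kubeadm manifest files were written, since the kubelet must pick them up before the pgrep wait can ever succeed:

	// manifestchk.go - illustrative sketch: confirm the static pod manifests that
	// `kubeadm init phase control-plane all` and `... etcd local` normally write
	// under /etc/kubernetes/manifests are present on the node.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		dir := "/etc/kubernetes/manifests"
		expected := []string{
			"kube-apiserver.yaml",
			"kube-controller-manager.yaml",
			"kube-scheduler.yaml",
			"etcd.yaml",
		}
		for _, name := range expected {
			path := filepath.Join(dir, name)
			if _, err := os.Stat(path); err != nil {
				fmt.Printf("missing: %s (%v)\n", path, err)
				continue
			}
			fmt.Printf("found:   %s\n", path)
		}
	}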
	I0617 12:02:00.126201  165698 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:02:00.126314  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:00.627227  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:01.126836  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:01.626524  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:02.126619  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:02.626434  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:03.126687  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:03.626469  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:04.126347  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:04.626548  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:05.127142  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:05.626937  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:06.126479  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:06.626466  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:07.126806  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:07.626814  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:08.127233  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:08.626498  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:09.126712  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:09.627284  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:10.126446  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:10.627249  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:11.126428  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:11.626638  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:12.127091  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:12.627361  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:13.126836  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:13.626460  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:14.127261  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:14.627161  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:15.126580  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:15.627082  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:16.127163  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:16.626524  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:17.126469  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:17.626488  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:18.126897  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:18.627145  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:19.126724  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:19.626498  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:20.126389  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:20.627190  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:21.126480  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:21.627210  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:22.127273  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:22.626691  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:23.126888  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:23.627274  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:24.127019  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:24.627337  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:25.126642  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:25.627064  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:26.126606  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:26.626803  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:27.126825  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:27.626799  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:28.126854  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:28.627278  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:29.126577  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:29.626475  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:30.127193  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:30.627229  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:31.126478  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:31.626336  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:32.126398  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:32.627005  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:33.126753  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:33.627017  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:34.126558  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:34.626976  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:35.126410  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:35.627309  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:36.126958  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:36.626349  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:37.126815  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:37.627332  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:38.126868  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:38.627367  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:39.127148  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:39.626571  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:40.126379  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:40.626747  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:41.126485  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:41.626372  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:42.126904  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:42.627293  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:43.127137  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:43.626521  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:44.127017  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:44.626824  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:45.126475  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:45.626535  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:46.127423  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:46.626605  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:47.127029  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:47.627431  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:48.127215  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:48.627013  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:49.126439  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:49.626831  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:50.126521  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:50.627178  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:51.126830  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:51.627091  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:52.127343  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:52.626635  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:53.126693  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:53.627110  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:54.126653  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:54.626424  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:55.127113  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:55.627373  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:56.126415  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:56.627329  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:57.126797  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:57.627313  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:58.126744  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:58.627050  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:59.127300  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:59.626694  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
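	The half-second pgrep polling above is the apiserver wait loop: the same process check is retried until the kube-apiserver process appears or the wait window expires. A minimal, self-contained Go sketch of that kind of loop follows (an illustration only, not minikube's actual code; pollwait.go and waitForAPIServer are made-up names, and running pgrep locally stands in for the ssh_runner call that the log shows executing on the node over SSH):

	// pollwait.go - illustrative sketch of a poll-until-deadline loop that runs
	// the same pgrep check seen in the log every 500ms.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func waitForAPIServer(ctx context.Context) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			// Local stand-in for the remote command in the log.
			if err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				return nil // process found
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("timed out waiting for kube-apiserver process: %w", ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
		defer cancel()
		if err := waitForAPIServer(ctx); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("kube-apiserver process is up")
	}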
	I0617 12:03:00.127092  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:00.127182  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:00.166116  165698 cri.go:89] found id: ""
	I0617 12:03:00.166145  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.166153  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:00.166159  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:00.166208  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:00.200990  165698 cri.go:89] found id: ""
	I0617 12:03:00.201020  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.201029  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:00.201034  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:00.201086  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:00.236394  165698 cri.go:89] found id: ""
	I0617 12:03:00.236422  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.236430  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:00.236438  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:00.236496  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:00.274257  165698 cri.go:89] found id: ""
	I0617 12:03:00.274285  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.274293  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:00.274299  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:00.274350  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:00.307425  165698 cri.go:89] found id: ""
	I0617 12:03:00.307452  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.307481  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:00.307490  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:00.307557  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:00.343420  165698 cri.go:89] found id: ""
	I0617 12:03:00.343446  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.343472  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:00.343480  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:00.343541  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:00.378301  165698 cri.go:89] found id: ""
	I0617 12:03:00.378325  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.378333  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:00.378338  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:00.378383  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:00.414985  165698 cri.go:89] found id: ""
	I0617 12:03:00.415011  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.415018  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:00.415033  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:00.415090  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:00.468230  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:00.468262  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:00.481970  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:00.481998  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:00.612881  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:00.612911  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:00.612929  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:00.676110  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:00.676145  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
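	Each diagnostics round above asks the runtime for the core component containers with "crictl ps -a --quiet --name=<component>" and treats empty output as "0 containers", then falls back to gathering kubelet, dmesg, describe-nodes, CRI-O and container-status logs. A minimal Go sketch of the same per-component check (assuming it is run directly on the node with crictl on PATH; crichk.go is an illustrative name, not part of the test):

	// crichk.go - illustrative sketch: list containers per control-plane
	// component; --quiet prints one container ID per line, so no output
	// means no containers for that name.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("%s: crictl failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			fmt.Printf("%s: %d containers %v\n", name, len(ids), ids)
		}
	}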
	I0617 12:03:03.216960  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:03.231208  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:03.231277  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:03.267056  165698 cri.go:89] found id: ""
	I0617 12:03:03.267088  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.267096  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:03.267103  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:03.267152  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:03.302797  165698 cri.go:89] found id: ""
	I0617 12:03:03.302832  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.302844  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:03.302852  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:03.302905  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:03.343401  165698 cri.go:89] found id: ""
	I0617 12:03:03.343435  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.343445  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:03.343465  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:03.343530  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:03.380841  165698 cri.go:89] found id: ""
	I0617 12:03:03.380871  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.380883  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:03.380890  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:03.380951  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:03.420098  165698 cri.go:89] found id: ""
	I0617 12:03:03.420130  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.420142  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:03.420150  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:03.420213  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:03.458476  165698 cri.go:89] found id: ""
	I0617 12:03:03.458506  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.458515  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:03.458521  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:03.458586  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:03.497127  165698 cri.go:89] found id: ""
	I0617 12:03:03.497156  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.497164  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:03.497170  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:03.497217  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:03.538759  165698 cri.go:89] found id: ""
	I0617 12:03:03.538794  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.538806  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:03.538825  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:03.538841  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:03.584701  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:03.584743  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:03.636981  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:03.637030  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:03.670032  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:03.670077  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:03.757012  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:03.757038  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:03.757056  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:06.327680  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:06.341998  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:06.342068  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:06.383353  165698 cri.go:89] found id: ""
	I0617 12:03:06.383385  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.383394  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:06.383400  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:06.383448  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:06.418806  165698 cri.go:89] found id: ""
	I0617 12:03:06.418850  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.418862  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:06.418870  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:06.418945  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:06.458151  165698 cri.go:89] found id: ""
	I0617 12:03:06.458192  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.458204  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:06.458219  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:06.458289  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:06.496607  165698 cri.go:89] found id: ""
	I0617 12:03:06.496637  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.496645  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:06.496651  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:06.496703  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:06.534900  165698 cri.go:89] found id: ""
	I0617 12:03:06.534938  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.534951  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:06.534959  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:06.535017  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:06.572388  165698 cri.go:89] found id: ""
	I0617 12:03:06.572413  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.572422  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:06.572428  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:06.572496  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:06.608072  165698 cri.go:89] found id: ""
	I0617 12:03:06.608104  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.608115  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:06.608121  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:06.608175  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:06.647727  165698 cri.go:89] found id: ""
	I0617 12:03:06.647760  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.647772  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:06.647784  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:06.647800  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:06.720887  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:06.720919  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:06.761128  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:06.761153  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:06.815524  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:06.815557  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:06.830275  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:06.830304  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:06.907861  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:09.408117  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:09.420916  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:09.420978  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:09.453830  165698 cri.go:89] found id: ""
	I0617 12:03:09.453860  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.453870  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:09.453878  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:09.453937  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:09.492721  165698 cri.go:89] found id: ""
	I0617 12:03:09.492756  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.492766  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:09.492775  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:09.492849  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:09.530956  165698 cri.go:89] found id: ""
	I0617 12:03:09.530984  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.530995  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:09.531001  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:09.531067  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:09.571534  165698 cri.go:89] found id: ""
	I0617 12:03:09.571564  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.571576  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:09.571584  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:09.571646  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:09.609740  165698 cri.go:89] found id: ""
	I0617 12:03:09.609776  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.609788  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:09.609797  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:09.609864  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:09.649958  165698 cri.go:89] found id: ""
	I0617 12:03:09.649998  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.650010  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:09.650020  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:09.650087  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:09.706495  165698 cri.go:89] found id: ""
	I0617 12:03:09.706532  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.706544  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:09.706553  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:09.706638  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:09.742513  165698 cri.go:89] found id: ""
	I0617 12:03:09.742541  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.742549  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:09.742559  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:09.742571  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:09.756470  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:09.756502  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:09.840878  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:09.840897  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:09.840913  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:09.922329  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:09.922370  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:09.967536  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:09.967573  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:12.521031  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:12.534507  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:12.534595  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:12.569895  165698 cri.go:89] found id: ""
	I0617 12:03:12.569930  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.569942  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:12.569950  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:12.570005  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:12.606857  165698 cri.go:89] found id: ""
	I0617 12:03:12.606888  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.606900  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:12.606922  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:12.606998  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:12.640781  165698 cri.go:89] found id: ""
	I0617 12:03:12.640807  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.640818  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:12.640826  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:12.640910  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:12.674097  165698 cri.go:89] found id: ""
	I0617 12:03:12.674124  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.674134  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:12.674142  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:12.674201  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:12.708662  165698 cri.go:89] found id: ""
	I0617 12:03:12.708689  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.708699  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:12.708707  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:12.708791  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:12.744891  165698 cri.go:89] found id: ""
	I0617 12:03:12.744927  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.744938  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:12.744947  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:12.745010  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:12.778440  165698 cri.go:89] found id: ""
	I0617 12:03:12.778466  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.778474  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:12.778480  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:12.778528  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:12.814733  165698 cri.go:89] found id: ""
	I0617 12:03:12.814762  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.814770  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:12.814780  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:12.814820  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:12.887741  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:12.887762  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:12.887775  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:12.968439  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:12.968476  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:13.008926  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:13.008955  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:13.060432  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:13.060468  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:15.575450  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:15.589178  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:15.589244  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:15.625554  165698 cri.go:89] found id: ""
	I0617 12:03:15.625589  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.625601  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:15.625608  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:15.625668  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:15.659023  165698 cri.go:89] found id: ""
	I0617 12:03:15.659054  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.659066  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:15.659074  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:15.659138  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:15.693777  165698 cri.go:89] found id: ""
	I0617 12:03:15.693803  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.693811  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:15.693817  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:15.693875  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:15.729098  165698 cri.go:89] found id: ""
	I0617 12:03:15.729133  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.729141  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:15.729147  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:15.729194  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:15.762639  165698 cri.go:89] found id: ""
	I0617 12:03:15.762668  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.762679  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:15.762687  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:15.762744  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:15.797446  165698 cri.go:89] found id: ""
	I0617 12:03:15.797475  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.797484  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:15.797489  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:15.797537  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:15.832464  165698 cri.go:89] found id: ""
	I0617 12:03:15.832503  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.832513  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:15.832521  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:15.832579  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:15.867868  165698 cri.go:89] found id: ""
	I0617 12:03:15.867898  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.867906  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:15.867916  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:15.867928  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:15.882151  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:15.882181  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:15.946642  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:15.946666  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:15.946682  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:16.027062  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:16.027098  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:16.082704  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:16.082735  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:18.651554  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:18.665096  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:18.665166  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:18.703099  165698 cri.go:89] found id: ""
	I0617 12:03:18.703127  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.703138  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:18.703147  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:18.703210  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:18.737945  165698 cri.go:89] found id: ""
	I0617 12:03:18.737985  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.737997  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:18.738005  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:18.738079  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:18.777145  165698 cri.go:89] found id: ""
	I0617 12:03:18.777172  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.777181  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:18.777187  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:18.777255  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:18.813171  165698 cri.go:89] found id: ""
	I0617 12:03:18.813198  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.813207  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:18.813213  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:18.813270  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:18.854459  165698 cri.go:89] found id: ""
	I0617 12:03:18.854490  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.854501  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:18.854510  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:18.854607  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:18.893668  165698 cri.go:89] found id: ""
	I0617 12:03:18.893703  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.893712  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:18.893718  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:18.893796  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:18.928919  165698 cri.go:89] found id: ""
	I0617 12:03:18.928971  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.928983  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:18.928993  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:18.929068  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:18.965770  165698 cri.go:89] found id: ""
	I0617 12:03:18.965800  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.965808  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:18.965817  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:18.965829  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:19.020348  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:19.020392  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:19.034815  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:19.034845  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:19.109617  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:19.109643  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:19.109660  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:19.186843  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:19.186890  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:21.732720  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:21.747032  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:21.747113  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:21.789962  165698 cri.go:89] found id: ""
	I0617 12:03:21.789991  165698 logs.go:276] 0 containers: []
	W0617 12:03:21.789999  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:21.790011  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:21.790066  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:21.833865  165698 cri.go:89] found id: ""
	I0617 12:03:21.833903  165698 logs.go:276] 0 containers: []
	W0617 12:03:21.833913  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:21.833921  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:21.833985  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:21.903891  165698 cri.go:89] found id: ""
	I0617 12:03:21.903929  165698 logs.go:276] 0 containers: []
	W0617 12:03:21.903941  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:21.903950  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:21.904020  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:21.941369  165698 cri.go:89] found id: ""
	I0617 12:03:21.941396  165698 logs.go:276] 0 containers: []
	W0617 12:03:21.941407  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:21.941415  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:21.941473  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:21.977767  165698 cri.go:89] found id: ""
	I0617 12:03:21.977797  165698 logs.go:276] 0 containers: []
	W0617 12:03:21.977808  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:21.977817  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:21.977880  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:22.016422  165698 cri.go:89] found id: ""
	I0617 12:03:22.016450  165698 logs.go:276] 0 containers: []
	W0617 12:03:22.016463  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:22.016471  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:22.016536  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:22.056871  165698 cri.go:89] found id: ""
	I0617 12:03:22.056904  165698 logs.go:276] 0 containers: []
	W0617 12:03:22.056914  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:22.056922  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:22.056982  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:22.093244  165698 cri.go:89] found id: ""
	I0617 12:03:22.093288  165698 logs.go:276] 0 containers: []
	W0617 12:03:22.093300  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:22.093313  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:22.093331  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:22.144722  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:22.144756  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:22.159047  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:22.159084  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:22.232077  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:22.232100  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:22.232112  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:22.308241  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:22.308276  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:24.851740  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:24.866597  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:24.866659  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:24.902847  165698 cri.go:89] found id: ""
	I0617 12:03:24.902879  165698 logs.go:276] 0 containers: []
	W0617 12:03:24.902892  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:24.902900  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:24.902973  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:24.940042  165698 cri.go:89] found id: ""
	I0617 12:03:24.940079  165698 logs.go:276] 0 containers: []
	W0617 12:03:24.940088  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:24.940094  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:24.940150  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:24.975160  165698 cri.go:89] found id: ""
	I0617 12:03:24.975190  165698 logs.go:276] 0 containers: []
	W0617 12:03:24.975202  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:24.975211  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:24.975280  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:25.012618  165698 cri.go:89] found id: ""
	I0617 12:03:25.012649  165698 logs.go:276] 0 containers: []
	W0617 12:03:25.012657  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:25.012663  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:25.012712  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:25.051166  165698 cri.go:89] found id: ""
	I0617 12:03:25.051210  165698 logs.go:276] 0 containers: []
	W0617 12:03:25.051223  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:25.051230  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:25.051309  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:25.090112  165698 cri.go:89] found id: ""
	I0617 12:03:25.090144  165698 logs.go:276] 0 containers: []
	W0617 12:03:25.090156  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:25.090164  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:25.090230  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:25.133258  165698 cri.go:89] found id: ""
	I0617 12:03:25.133285  165698 logs.go:276] 0 containers: []
	W0617 12:03:25.133294  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:25.133301  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:25.133366  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:25.177445  165698 cri.go:89] found id: ""
	I0617 12:03:25.177473  165698 logs.go:276] 0 containers: []
	W0617 12:03:25.177481  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:25.177490  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:25.177505  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:25.250685  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:25.250710  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:25.250727  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:25.335554  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:25.335586  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:25.377058  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:25.377093  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:25.431425  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:25.431471  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:27.945063  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:27.959396  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:27.959469  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:27.994554  165698 cri.go:89] found id: ""
	I0617 12:03:27.994582  165698 logs.go:276] 0 containers: []
	W0617 12:03:27.994591  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:27.994598  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:27.994660  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:28.030168  165698 cri.go:89] found id: ""
	I0617 12:03:28.030200  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.030208  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:28.030215  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:28.030263  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:28.066213  165698 cri.go:89] found id: ""
	I0617 12:03:28.066244  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.066255  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:28.066261  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:28.066322  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:28.102855  165698 cri.go:89] found id: ""
	I0617 12:03:28.102880  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.102888  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:28.102894  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:28.102942  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:28.138698  165698 cri.go:89] found id: ""
	I0617 12:03:28.138734  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.138748  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:28.138755  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:28.138815  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:28.173114  165698 cri.go:89] found id: ""
	I0617 12:03:28.173140  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.173148  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:28.173154  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:28.173213  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:28.208901  165698 cri.go:89] found id: ""
	I0617 12:03:28.208936  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.208947  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:28.208955  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:28.209016  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:28.244634  165698 cri.go:89] found id: ""
	I0617 12:03:28.244667  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.244678  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:28.244687  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:28.244699  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:28.300303  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:28.300336  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:28.314227  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:28.314272  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:28.394322  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:28.394350  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:28.394367  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:28.483381  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:28.483413  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:31.026433  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:31.040820  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:31.040888  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:31.086409  165698 cri.go:89] found id: ""
	I0617 12:03:31.086440  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.086453  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:31.086461  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:31.086548  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:31.122810  165698 cri.go:89] found id: ""
	I0617 12:03:31.122836  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.122843  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:31.122849  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:31.122910  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:31.157634  165698 cri.go:89] found id: ""
	I0617 12:03:31.157669  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.157680  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:31.157687  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:31.157750  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:31.191498  165698 cri.go:89] found id: ""
	I0617 12:03:31.191529  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.191541  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:31.191549  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:31.191619  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:31.225575  165698 cri.go:89] found id: ""
	I0617 12:03:31.225599  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.225609  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:31.225616  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:31.225670  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:31.269780  165698 cri.go:89] found id: ""
	I0617 12:03:31.269810  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.269819  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:31.269825  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:31.269874  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:31.307689  165698 cri.go:89] found id: ""
	I0617 12:03:31.307717  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.307726  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:31.307733  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:31.307789  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:31.344160  165698 cri.go:89] found id: ""
	I0617 12:03:31.344190  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.344200  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:31.344210  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:31.344223  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:31.397627  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:31.397667  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:31.411316  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:31.411347  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:31.486258  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:31.486280  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:31.486297  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:31.568067  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:31.568106  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:34.111424  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:34.127178  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:34.127255  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:34.165900  165698 cri.go:89] found id: ""
	I0617 12:03:34.165936  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.165947  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:34.165955  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:34.166042  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:34.203556  165698 cri.go:89] found id: ""
	I0617 12:03:34.203588  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.203597  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:34.203606  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:34.203659  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:34.243418  165698 cri.go:89] found id: ""
	I0617 12:03:34.243478  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.243490  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:34.243499  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:34.243661  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:34.281542  165698 cri.go:89] found id: ""
	I0617 12:03:34.281569  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.281577  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:34.281582  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:34.281635  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:34.316304  165698 cri.go:89] found id: ""
	I0617 12:03:34.316333  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.316341  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:34.316347  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:34.316403  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:34.357416  165698 cri.go:89] found id: ""
	I0617 12:03:34.357455  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.357467  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:34.357476  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:34.357547  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:34.392069  165698 cri.go:89] found id: ""
	I0617 12:03:34.392101  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.392112  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:34.392120  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:34.392185  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:34.427203  165698 cri.go:89] found id: ""
	I0617 12:03:34.427235  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.427247  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:34.427258  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:34.427317  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:34.441346  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:34.441375  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:34.519306  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:34.519331  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:34.519349  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:34.598802  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:34.598843  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:34.637521  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:34.637554  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:37.191259  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:37.205882  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:37.205947  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:37.242175  165698 cri.go:89] found id: ""
	I0617 12:03:37.242202  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.242209  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:37.242215  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:37.242278  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:37.278004  165698 cri.go:89] found id: ""
	I0617 12:03:37.278029  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.278037  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:37.278043  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:37.278091  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:37.322148  165698 cri.go:89] found id: ""
	I0617 12:03:37.322179  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.322190  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:37.322198  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:37.322259  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:37.358612  165698 cri.go:89] found id: ""
	I0617 12:03:37.358638  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.358649  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:37.358657  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:37.358718  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:37.393070  165698 cri.go:89] found id: ""
	I0617 12:03:37.393104  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.393115  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:37.393123  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:37.393187  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:37.429420  165698 cri.go:89] found id: ""
	I0617 12:03:37.429452  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.429465  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:37.429475  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:37.429541  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:37.464485  165698 cri.go:89] found id: ""
	I0617 12:03:37.464509  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.464518  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:37.464523  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:37.464584  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:37.501283  165698 cri.go:89] found id: ""
	I0617 12:03:37.501308  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.501316  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:37.501326  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:37.501338  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:37.552848  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:37.552889  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:37.566715  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:37.566746  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:37.643560  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:37.643584  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:37.643601  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:37.722895  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:37.722935  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:40.268199  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:40.281832  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:40.281905  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:40.317094  165698 cri.go:89] found id: ""
	I0617 12:03:40.317137  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.317150  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:40.317159  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:40.317229  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:40.355786  165698 cri.go:89] found id: ""
	I0617 12:03:40.355819  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.355829  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:40.355836  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:40.355903  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:40.394282  165698 cri.go:89] found id: ""
	I0617 12:03:40.394312  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.394323  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:40.394332  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:40.394388  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:40.433773  165698 cri.go:89] found id: ""
	I0617 12:03:40.433806  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.433817  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:40.433825  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:40.433875  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:40.469937  165698 cri.go:89] found id: ""
	I0617 12:03:40.469973  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.469985  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:40.469998  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:40.470067  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:40.503565  165698 cri.go:89] found id: ""
	I0617 12:03:40.503590  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.503599  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:40.503605  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:40.503654  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:40.538349  165698 cri.go:89] found id: ""
	I0617 12:03:40.538383  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.538394  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:40.538402  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:40.538461  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:40.576036  165698 cri.go:89] found id: ""
	I0617 12:03:40.576066  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.576075  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:40.576085  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:40.576100  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:40.617804  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:40.617833  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:40.668126  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:40.668162  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:40.682618  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:40.682655  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:40.759597  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:40.759619  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:40.759638  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:43.343404  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:43.357886  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:43.357953  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:43.398262  165698 cri.go:89] found id: ""
	I0617 12:03:43.398290  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.398301  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:43.398310  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:43.398370  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:43.432241  165698 cri.go:89] found id: ""
	I0617 12:03:43.432272  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.432280  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:43.432289  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:43.432348  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:43.466210  165698 cri.go:89] found id: ""
	I0617 12:03:43.466234  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.466241  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:43.466247  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:43.466294  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:43.501677  165698 cri.go:89] found id: ""
	I0617 12:03:43.501711  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.501723  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:43.501731  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:43.501793  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:43.541826  165698 cri.go:89] found id: ""
	I0617 12:03:43.541860  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.541870  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:43.541876  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:43.541941  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:43.576940  165698 cri.go:89] found id: ""
	I0617 12:03:43.576962  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.576970  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:43.576975  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:43.577022  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:43.612592  165698 cri.go:89] found id: ""
	I0617 12:03:43.612627  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.612635  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:43.612643  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:43.612694  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:43.647141  165698 cri.go:89] found id: ""
	I0617 12:03:43.647176  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.647188  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:43.647202  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:43.647220  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:43.698248  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:43.698283  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:43.711686  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:43.711714  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:43.787077  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:43.787101  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:43.787115  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:43.861417  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:43.861455  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:46.402594  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:46.417108  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:46.417185  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:46.453910  165698 cri.go:89] found id: ""
	I0617 12:03:46.453941  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.453952  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:46.453960  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:46.454020  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:46.487239  165698 cri.go:89] found id: ""
	I0617 12:03:46.487268  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.487280  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:46.487288  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:46.487353  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:46.521824  165698 cri.go:89] found id: ""
	I0617 12:03:46.521850  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.521859  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:46.521866  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:46.521929  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:46.557247  165698 cri.go:89] found id: ""
	I0617 12:03:46.557274  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.557282  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:46.557289  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:46.557350  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:46.600354  165698 cri.go:89] found id: ""
	I0617 12:03:46.600383  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.600393  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:46.600402  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:46.600477  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:46.638153  165698 cri.go:89] found id: ""
	I0617 12:03:46.638180  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.638189  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:46.638197  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:46.638255  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:46.672636  165698 cri.go:89] found id: ""
	I0617 12:03:46.672661  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.672669  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:46.672675  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:46.672721  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:46.706431  165698 cri.go:89] found id: ""
	I0617 12:03:46.706468  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.706481  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:46.706493  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:46.706509  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:46.720796  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:46.720842  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:46.801343  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:46.801365  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:46.801379  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:46.883651  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:46.883696  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:46.928594  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:46.928630  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:49.480413  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:49.495558  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:49.495656  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:49.533281  165698 cri.go:89] found id: ""
	I0617 12:03:49.533313  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.533323  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:49.533330  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:49.533396  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:49.573430  165698 cri.go:89] found id: ""
	I0617 12:03:49.573457  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.573465  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:49.573472  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:49.573532  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:49.608669  165698 cri.go:89] found id: ""
	I0617 12:03:49.608697  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.608705  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:49.608711  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:49.608767  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:49.643411  165698 cri.go:89] found id: ""
	I0617 12:03:49.643449  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.643481  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:49.643490  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:49.643557  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:49.680039  165698 cri.go:89] found id: ""
	I0617 12:03:49.680071  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.680082  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:49.680090  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:49.680148  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:49.717169  165698 cri.go:89] found id: ""
	I0617 12:03:49.717195  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.717203  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:49.717209  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:49.717262  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:49.754585  165698 cri.go:89] found id: ""
	I0617 12:03:49.754615  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.754625  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:49.754633  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:49.754697  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:49.796040  165698 cri.go:89] found id: ""
	I0617 12:03:49.796074  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.796085  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:49.796097  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:49.796112  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:49.873496  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:49.873530  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:49.873547  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:49.961883  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:49.961925  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:50.002975  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:50.003004  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:50.054185  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:50.054224  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:52.568557  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:52.584264  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:52.584337  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:52.622474  165698 cri.go:89] found id: ""
	I0617 12:03:52.622501  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.622509  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:52.622516  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:52.622566  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:52.661012  165698 cri.go:89] found id: ""
	I0617 12:03:52.661045  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.661057  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:52.661066  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:52.661133  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:52.700950  165698 cri.go:89] found id: ""
	I0617 12:03:52.700986  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.700998  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:52.701006  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:52.701075  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:52.735663  165698 cri.go:89] found id: ""
	I0617 12:03:52.735689  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.735696  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:52.735702  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:52.735768  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:52.776540  165698 cri.go:89] found id: ""
	I0617 12:03:52.776568  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.776580  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:52.776589  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:52.776642  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:52.812439  165698 cri.go:89] found id: ""
	I0617 12:03:52.812474  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.812493  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:52.812503  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:52.812567  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:52.849233  165698 cri.go:89] found id: ""
	I0617 12:03:52.849263  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.849273  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:52.849281  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:52.849343  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:52.885365  165698 cri.go:89] found id: ""
	I0617 12:03:52.885395  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.885406  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:52.885419  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:52.885434  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:52.941521  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:52.941553  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:52.955958  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:52.955997  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:53.029254  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:53.029278  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:53.029291  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:53.104391  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:53.104425  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:55.648578  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:55.662143  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:55.662205  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:55.697623  165698 cri.go:89] found id: ""
	I0617 12:03:55.697662  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.697674  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:55.697682  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:55.697751  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:55.734132  165698 cri.go:89] found id: ""
	I0617 12:03:55.734171  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.734184  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:55.734192  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:55.734265  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:55.774178  165698 cri.go:89] found id: ""
	I0617 12:03:55.774212  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.774222  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:55.774231  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:55.774296  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:55.816427  165698 cri.go:89] found id: ""
	I0617 12:03:55.816460  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.816471  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:55.816480  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:55.816546  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:55.860413  165698 cri.go:89] found id: ""
	I0617 12:03:55.860446  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.860457  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:55.860465  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:55.860532  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:55.897577  165698 cri.go:89] found id: ""
	I0617 12:03:55.897612  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.897622  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:55.897629  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:55.897682  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:55.934163  165698 cri.go:89] found id: ""
	I0617 12:03:55.934200  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.934212  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:55.934220  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:55.934291  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:55.972781  165698 cri.go:89] found id: ""
	I0617 12:03:55.972827  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.972840  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:55.972852  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:55.972867  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:56.027292  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:56.027332  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:56.042304  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:56.042336  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:56.115129  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:56.115159  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:56.115176  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:56.194161  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:56.194200  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:58.734681  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:58.748467  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:58.748534  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:58.786191  165698 cri.go:89] found id: ""
	I0617 12:03:58.786221  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.786232  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:58.786239  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:58.786302  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:58.822076  165698 cri.go:89] found id: ""
	I0617 12:03:58.822103  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.822125  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:58.822134  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:58.822199  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:58.858830  165698 cri.go:89] found id: ""
	I0617 12:03:58.858859  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.858867  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:58.858873  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:58.858927  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:58.898802  165698 cri.go:89] found id: ""
	I0617 12:03:58.898830  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.898838  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:58.898844  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:58.898891  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:58.933234  165698 cri.go:89] found id: ""
	I0617 12:03:58.933269  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.933281  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:58.933289  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:58.933355  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:58.973719  165698 cri.go:89] found id: ""
	I0617 12:03:58.973753  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.973766  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:58.973773  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:58.973847  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:59.010671  165698 cri.go:89] found id: ""
	I0617 12:03:59.010722  165698 logs.go:276] 0 containers: []
	W0617 12:03:59.010734  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:59.010741  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:59.010805  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:59.047318  165698 cri.go:89] found id: ""
	I0617 12:03:59.047347  165698 logs.go:276] 0 containers: []
	W0617 12:03:59.047359  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:59.047372  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:59.047389  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:59.097778  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:59.097815  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:59.111615  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:59.111646  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:59.193172  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:59.193195  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:59.193207  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:59.268147  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:59.268182  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
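	
	For reference, the block above is one full iteration of the probe-and-gather loop that repeats for the rest of this transcript: each control-plane component is probed with `crictl ps -a --quiet --name=<component>`, no container IDs come back, and the run falls back to collecting kubelet, dmesg, describe-nodes, CRI-O, and container-status output; `kubectl describe nodes` then fails because nothing is listening on localhost:8443. A minimal Go sketch of the probe the log records follows, assuming it runs on the node with sudo and crictl available; it is an illustration of the recorded commands, not minikube's actual cri.go.
	
	// probe.go - illustrative sketch only, not minikube source.
	// Runs the same probe the log shows: `sudo crictl ps -a --quiet --name=<component>`
	// for each control-plane component and reports whether any container ID came back.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// List all containers (running or exited) whose name matches, printing only IDs.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("probe for %q failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", name)
				continue
			}
			fmt.Printf("%q: %d container(s): %v\n", name, len(ids), ids)
		}
	}
	
	With every probe returning an empty ID list, the repeated "connection to the server localhost:8443 was refused" errors below follow directly: there is no kube-apiserver container behind the kubeconfig's localhost:8443 endpoint, so each describe-nodes attempt in the loop exits with status 1.
	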
	I0617 12:04:01.807585  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:01.821634  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:01.821694  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:01.857610  165698 cri.go:89] found id: ""
	I0617 12:04:01.857637  165698 logs.go:276] 0 containers: []
	W0617 12:04:01.857647  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:01.857654  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:01.857710  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:01.893229  165698 cri.go:89] found id: ""
	I0617 12:04:01.893253  165698 logs.go:276] 0 containers: []
	W0617 12:04:01.893261  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:01.893267  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:01.893324  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:01.926916  165698 cri.go:89] found id: ""
	I0617 12:04:01.926940  165698 logs.go:276] 0 containers: []
	W0617 12:04:01.926950  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:01.926958  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:01.927017  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:01.961913  165698 cri.go:89] found id: ""
	I0617 12:04:01.961946  165698 logs.go:276] 0 containers: []
	W0617 12:04:01.961957  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:01.961967  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:01.962045  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:01.997084  165698 cri.go:89] found id: ""
	I0617 12:04:01.997111  165698 logs.go:276] 0 containers: []
	W0617 12:04:01.997119  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:01.997125  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:01.997173  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:02.034640  165698 cri.go:89] found id: ""
	I0617 12:04:02.034666  165698 logs.go:276] 0 containers: []
	W0617 12:04:02.034674  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:02.034680  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:02.034744  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:02.085868  165698 cri.go:89] found id: ""
	I0617 12:04:02.085910  165698 logs.go:276] 0 containers: []
	W0617 12:04:02.085920  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:02.085928  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:02.085983  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:02.152460  165698 cri.go:89] found id: ""
	I0617 12:04:02.152487  165698 logs.go:276] 0 containers: []
	W0617 12:04:02.152499  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:02.152513  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:02.152528  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:02.205297  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:02.205344  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:02.222312  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:02.222348  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:02.299934  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:02.299959  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:02.299977  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:02.384008  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:02.384056  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:04.926889  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:04.940643  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:04.940722  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:04.976246  165698 cri.go:89] found id: ""
	I0617 12:04:04.976275  165698 logs.go:276] 0 containers: []
	W0617 12:04:04.976283  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:04.976289  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:04.976338  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:05.015864  165698 cri.go:89] found id: ""
	I0617 12:04:05.015900  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.015913  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:05.015921  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:05.015985  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:05.054051  165698 cri.go:89] found id: ""
	I0617 12:04:05.054086  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.054099  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:05.054112  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:05.054177  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:05.090320  165698 cri.go:89] found id: ""
	I0617 12:04:05.090358  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.090371  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:05.090380  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:05.090438  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:05.126963  165698 cri.go:89] found id: ""
	I0617 12:04:05.126998  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.127008  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:05.127015  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:05.127087  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:05.162565  165698 cri.go:89] found id: ""
	I0617 12:04:05.162600  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.162611  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:05.162620  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:05.162674  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:05.195706  165698 cri.go:89] found id: ""
	I0617 12:04:05.195743  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.195752  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:05.195758  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:05.195826  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:05.236961  165698 cri.go:89] found id: ""
	I0617 12:04:05.236995  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.237006  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:05.237016  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:05.237034  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:05.252754  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:05.252783  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:05.327832  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:05.327870  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:05.327886  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:05.410220  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:05.410271  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:05.451291  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:05.451324  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:08.003058  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:08.016611  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:08.016670  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:08.052947  165698 cri.go:89] found id: ""
	I0617 12:04:08.052984  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.052996  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:08.053004  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:08.053057  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:08.086668  165698 cri.go:89] found id: ""
	I0617 12:04:08.086695  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.086704  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:08.086711  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:08.086773  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:08.127708  165698 cri.go:89] found id: ""
	I0617 12:04:08.127738  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.127746  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:08.127752  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:08.127814  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:08.162930  165698 cri.go:89] found id: ""
	I0617 12:04:08.162959  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.162966  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:08.162973  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:08.163026  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:08.196757  165698 cri.go:89] found id: ""
	I0617 12:04:08.196782  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.196791  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:08.196797  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:08.196851  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:08.229976  165698 cri.go:89] found id: ""
	I0617 12:04:08.230006  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.230016  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:08.230022  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:08.230083  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:08.265969  165698 cri.go:89] found id: ""
	I0617 12:04:08.266000  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.266007  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:08.266013  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:08.266071  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:08.299690  165698 cri.go:89] found id: ""
	I0617 12:04:08.299717  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.299728  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:08.299741  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:08.299761  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:08.353399  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:08.353429  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:08.366713  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:08.366739  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:08.442727  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:08.442768  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:08.442786  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:08.527832  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:08.527875  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:11.073616  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:11.087085  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:11.087172  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:11.121706  165698 cri.go:89] found id: ""
	I0617 12:04:11.121745  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.121756  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:11.121765  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:11.121839  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:11.157601  165698 cri.go:89] found id: ""
	I0617 12:04:11.157637  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.157648  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:11.157657  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:11.157719  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:11.191929  165698 cri.go:89] found id: ""
	I0617 12:04:11.191963  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.191975  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:11.191983  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:11.192045  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:11.228391  165698 cri.go:89] found id: ""
	I0617 12:04:11.228416  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.228429  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:11.228437  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:11.228497  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:11.261880  165698 cri.go:89] found id: ""
	I0617 12:04:11.261911  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.261924  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:11.261932  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:11.261998  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:11.294615  165698 cri.go:89] found id: ""
	I0617 12:04:11.294663  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.294676  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:11.294684  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:11.294745  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:11.332813  165698 cri.go:89] found id: ""
	I0617 12:04:11.332840  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.332847  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:11.332854  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:11.332911  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:11.369032  165698 cri.go:89] found id: ""
	I0617 12:04:11.369060  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.369068  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:11.369078  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:11.369090  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:11.422522  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:11.422555  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:11.436961  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:11.436990  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:11.508679  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:11.508700  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:11.508713  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:11.586574  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:11.586610  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:14.127034  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:14.143228  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:14.143306  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:14.178368  165698 cri.go:89] found id: ""
	I0617 12:04:14.178396  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.178405  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:14.178410  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:14.178459  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:14.209971  165698 cri.go:89] found id: ""
	I0617 12:04:14.210001  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.210010  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:14.210015  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:14.210065  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:14.244888  165698 cri.go:89] found id: ""
	I0617 12:04:14.244922  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.244933  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:14.244940  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:14.244999  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:14.277875  165698 cri.go:89] found id: ""
	I0617 12:04:14.277904  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.277914  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:14.277922  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:14.277983  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:14.312698  165698 cri.go:89] found id: ""
	I0617 12:04:14.312724  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.312733  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:14.312739  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:14.312789  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:14.350952  165698 cri.go:89] found id: ""
	I0617 12:04:14.350977  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.350987  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:14.350993  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:14.351056  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:14.389211  165698 cri.go:89] found id: ""
	I0617 12:04:14.389235  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.389243  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:14.389250  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:14.389297  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:14.426171  165698 cri.go:89] found id: ""
	I0617 12:04:14.426200  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.426211  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:14.426224  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:14.426240  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:14.500403  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:14.500430  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:14.500446  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:14.588041  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:14.588078  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:14.631948  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:14.631987  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:14.681859  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:14.681895  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:17.198754  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:17.212612  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:17.212679  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:17.251011  165698 cri.go:89] found id: ""
	I0617 12:04:17.251041  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.251056  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:17.251065  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:17.251128  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:17.282964  165698 cri.go:89] found id: ""
	I0617 12:04:17.282989  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.282998  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:17.283003  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:17.283060  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:17.315570  165698 cri.go:89] found id: ""
	I0617 12:04:17.315601  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.315622  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:17.315630  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:17.315691  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:17.351186  165698 cri.go:89] found id: ""
	I0617 12:04:17.351212  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.351221  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:17.351228  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:17.351287  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:17.385609  165698 cri.go:89] found id: ""
	I0617 12:04:17.385653  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.385665  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:17.385673  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:17.385741  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:17.423890  165698 cri.go:89] found id: ""
	I0617 12:04:17.423923  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.423935  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:17.423944  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:17.424000  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:17.459543  165698 cri.go:89] found id: ""
	I0617 12:04:17.459575  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.459584  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:17.459592  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:17.459660  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:17.495554  165698 cri.go:89] found id: ""
	I0617 12:04:17.495584  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.495594  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:17.495606  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:17.495632  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:17.547835  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:17.547881  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:17.562391  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:17.562422  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:17.635335  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:17.635368  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:17.635387  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:17.708946  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:17.708988  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:20.249833  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:20.266234  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:20.266301  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:20.307380  165698 cri.go:89] found id: ""
	I0617 12:04:20.307415  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.307424  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:20.307431  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:20.307508  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:20.347193  165698 cri.go:89] found id: ""
	I0617 12:04:20.347225  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.347235  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:20.347243  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:20.347311  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:20.382673  165698 cri.go:89] found id: ""
	I0617 12:04:20.382711  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.382724  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:20.382732  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:20.382800  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:20.419542  165698 cri.go:89] found id: ""
	I0617 12:04:20.419573  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.419582  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:20.419588  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:20.419652  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:20.454586  165698 cri.go:89] found id: ""
	I0617 12:04:20.454618  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.454629  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:20.454636  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:20.454708  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:20.501094  165698 cri.go:89] found id: ""
	I0617 12:04:20.501123  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.501131  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:20.501137  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:20.501190  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:20.537472  165698 cri.go:89] found id: ""
	I0617 12:04:20.537512  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.537524  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:20.537532  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:20.537597  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:20.571477  165698 cri.go:89] found id: ""
	I0617 12:04:20.571509  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.571519  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:20.571532  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:20.571550  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:20.611503  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:20.611540  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:20.663868  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:20.663905  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:20.677679  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:20.677704  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:20.753645  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:20.753663  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:20.753689  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:23.335535  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:23.349700  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:23.349766  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:23.384327  165698 cri.go:89] found id: ""
	I0617 12:04:23.384351  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.384358  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:23.384364  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:23.384417  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:23.427145  165698 cri.go:89] found id: ""
	I0617 12:04:23.427179  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.427190  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:23.427197  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:23.427254  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:23.461484  165698 cri.go:89] found id: ""
	I0617 12:04:23.461511  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.461522  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:23.461532  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:23.461600  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:23.501292  165698 cri.go:89] found id: ""
	I0617 12:04:23.501324  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.501334  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:23.501342  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:23.501407  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:23.537605  165698 cri.go:89] found id: ""
	I0617 12:04:23.537639  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.537649  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:23.537654  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:23.537727  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:23.576580  165698 cri.go:89] found id: ""
	I0617 12:04:23.576608  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.576616  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:23.576623  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:23.576685  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:23.613124  165698 cri.go:89] found id: ""
	I0617 12:04:23.613153  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.613161  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:23.613167  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:23.613216  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:23.648662  165698 cri.go:89] found id: ""
	I0617 12:04:23.648688  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.648695  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:23.648705  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:23.648717  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:23.661737  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:23.661762  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:23.732512  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:23.732531  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:23.732547  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:23.810165  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:23.810207  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:23.855099  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:23.855136  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:26.406038  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:26.422243  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:26.422323  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:26.460959  165698 cri.go:89] found id: ""
	I0617 12:04:26.460984  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.460994  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:26.461002  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:26.461078  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:26.498324  165698 cri.go:89] found id: ""
	I0617 12:04:26.498350  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.498362  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:26.498370  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:26.498435  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:26.535299  165698 cri.go:89] found id: ""
	I0617 12:04:26.535335  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.535346  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:26.535354  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:26.535417  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:26.574623  165698 cri.go:89] found id: ""
	I0617 12:04:26.574657  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.574668  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:26.574677  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:26.574738  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:26.611576  165698 cri.go:89] found id: ""
	I0617 12:04:26.611607  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.611615  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:26.611621  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:26.611672  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:26.645664  165698 cri.go:89] found id: ""
	I0617 12:04:26.645692  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.645700  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:26.645706  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:26.645755  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:26.679442  165698 cri.go:89] found id: ""
	I0617 12:04:26.679477  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.679488  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:26.679495  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:26.679544  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:26.713512  165698 cri.go:89] found id: ""
	I0617 12:04:26.713543  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.713551  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:26.713563  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:26.713584  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:26.770823  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:26.770853  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:26.784829  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:26.784858  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:26.868457  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:26.868480  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:26.868498  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:26.948522  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:26.948561  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:29.490891  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:29.504202  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:29.504273  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:29.544091  165698 cri.go:89] found id: ""
	I0617 12:04:29.544125  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.544137  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:29.544145  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:29.544203  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:29.581645  165698 cri.go:89] found id: ""
	I0617 12:04:29.581670  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.581679  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:29.581685  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:29.581736  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:29.621410  165698 cri.go:89] found id: ""
	I0617 12:04:29.621437  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.621447  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:29.621455  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:29.621522  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:29.659619  165698 cri.go:89] found id: ""
	I0617 12:04:29.659645  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.659654  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:29.659659  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:29.659718  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:29.698822  165698 cri.go:89] found id: ""
	I0617 12:04:29.698851  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.698859  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:29.698865  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:29.698957  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:29.741648  165698 cri.go:89] found id: ""
	I0617 12:04:29.741673  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.741680  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:29.741686  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:29.741752  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:29.777908  165698 cri.go:89] found id: ""
	I0617 12:04:29.777933  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.777941  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:29.777947  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:29.778013  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:29.812290  165698 cri.go:89] found id: ""
	I0617 12:04:29.812318  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.812328  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:29.812340  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:29.812357  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:29.857527  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:29.857552  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:29.916734  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:29.916776  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:29.930988  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:29.931013  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:30.006055  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:30.006080  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:30.006098  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:32.586549  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:32.600139  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:32.600262  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:32.641527  165698 cri.go:89] found id: ""
	I0617 12:04:32.641554  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.641570  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:32.641579  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:32.641635  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:32.687945  165698 cri.go:89] found id: ""
	I0617 12:04:32.687972  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.687981  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:32.687996  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:32.688068  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:32.725586  165698 cri.go:89] found id: ""
	I0617 12:04:32.725618  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.725629  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:32.725639  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:32.725696  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:32.764042  165698 cri.go:89] found id: ""
	I0617 12:04:32.764090  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.764107  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:32.764115  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:32.764183  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:32.800132  165698 cri.go:89] found id: ""
	I0617 12:04:32.800167  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.800180  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:32.800189  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:32.800256  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:32.840313  165698 cri.go:89] found id: ""
	I0617 12:04:32.840348  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.840359  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:32.840367  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:32.840434  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:32.878041  165698 cri.go:89] found id: ""
	I0617 12:04:32.878067  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.878076  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:32.878082  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:32.878134  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:32.913904  165698 cri.go:89] found id: ""
	I0617 12:04:32.913939  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.913950  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:32.913961  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:32.913974  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:32.987900  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:32.987929  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:32.987947  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:33.060919  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:33.060961  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:33.102602  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:33.102629  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:33.154112  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:33.154161  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:35.669336  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:35.682819  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:35.682907  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:35.717542  165698 cri.go:89] found id: ""
	I0617 12:04:35.717571  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.717579  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:35.717586  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:35.717646  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:35.754454  165698 cri.go:89] found id: ""
	I0617 12:04:35.754483  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.754495  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:35.754503  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:35.754566  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:35.791198  165698 cri.go:89] found id: ""
	I0617 12:04:35.791227  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.791237  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:35.791246  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:35.791309  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:35.826858  165698 cri.go:89] found id: ""
	I0617 12:04:35.826892  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.826903  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:35.826911  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:35.826974  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:35.866817  165698 cri.go:89] found id: ""
	I0617 12:04:35.866845  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.866853  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:35.866861  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:35.866909  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:35.918340  165698 cri.go:89] found id: ""
	I0617 12:04:35.918377  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.918388  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:35.918397  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:35.918466  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:35.960734  165698 cri.go:89] found id: ""
	I0617 12:04:35.960764  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.960774  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:35.960779  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:35.960841  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:36.002392  165698 cri.go:89] found id: ""
	I0617 12:04:36.002426  165698 logs.go:276] 0 containers: []
	W0617 12:04:36.002437  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:36.002449  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:36.002465  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:36.055130  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:36.055163  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:36.069181  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:36.069209  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:36.146078  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:36.146105  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:36.146120  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:36.223763  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:36.223797  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:38.767375  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:38.781301  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:38.781357  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:38.821364  165698 cri.go:89] found id: ""
	I0617 12:04:38.821390  165698 logs.go:276] 0 containers: []
	W0617 12:04:38.821400  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:38.821409  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:38.821472  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:38.860727  165698 cri.go:89] found id: ""
	I0617 12:04:38.860784  165698 logs.go:276] 0 containers: []
	W0617 12:04:38.860796  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:38.860803  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:38.860868  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:38.902932  165698 cri.go:89] found id: ""
	I0617 12:04:38.902968  165698 logs.go:276] 0 containers: []
	W0617 12:04:38.902992  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:38.902999  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:38.903088  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:38.940531  165698 cri.go:89] found id: ""
	I0617 12:04:38.940564  165698 logs.go:276] 0 containers: []
	W0617 12:04:38.940576  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:38.940584  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:38.940649  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:38.975751  165698 cri.go:89] found id: ""
	I0617 12:04:38.975792  165698 logs.go:276] 0 containers: []
	W0617 12:04:38.975827  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:38.975835  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:38.975908  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:39.011156  165698 cri.go:89] found id: ""
	I0617 12:04:39.011196  165698 logs.go:276] 0 containers: []
	W0617 12:04:39.011206  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:39.011213  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:39.011269  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:39.049266  165698 cri.go:89] found id: ""
	I0617 12:04:39.049301  165698 logs.go:276] 0 containers: []
	W0617 12:04:39.049312  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:39.049320  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:39.049373  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:39.089392  165698 cri.go:89] found id: ""
	I0617 12:04:39.089425  165698 logs.go:276] 0 containers: []
	W0617 12:04:39.089434  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:39.089444  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:39.089459  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:39.166585  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:39.166607  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:39.166619  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:39.241910  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:39.241950  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:39.287751  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:39.287782  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:39.342226  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:39.342259  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:41.857327  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:41.871379  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:41.871446  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:41.907435  165698 cri.go:89] found id: ""
	I0617 12:04:41.907472  165698 logs.go:276] 0 containers: []
	W0617 12:04:41.907483  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:41.907492  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:41.907542  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:41.941684  165698 cri.go:89] found id: ""
	I0617 12:04:41.941725  165698 logs.go:276] 0 containers: []
	W0617 12:04:41.941737  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:41.941745  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:41.941819  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:41.977359  165698 cri.go:89] found id: ""
	I0617 12:04:41.977395  165698 logs.go:276] 0 containers: []
	W0617 12:04:41.977407  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:41.977415  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:41.977478  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:42.015689  165698 cri.go:89] found id: ""
	I0617 12:04:42.015723  165698 logs.go:276] 0 containers: []
	W0617 12:04:42.015734  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:42.015742  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:42.015803  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:42.050600  165698 cri.go:89] found id: ""
	I0617 12:04:42.050626  165698 logs.go:276] 0 containers: []
	W0617 12:04:42.050637  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:42.050645  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:42.050707  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:42.088174  165698 cri.go:89] found id: ""
	I0617 12:04:42.088201  165698 logs.go:276] 0 containers: []
	W0617 12:04:42.088212  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:42.088221  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:42.088290  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:42.127335  165698 cri.go:89] found id: ""
	I0617 12:04:42.127364  165698 logs.go:276] 0 containers: []
	W0617 12:04:42.127375  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:42.127384  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:42.127443  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:42.163435  165698 cri.go:89] found id: ""
	I0617 12:04:42.163481  165698 logs.go:276] 0 containers: []
	W0617 12:04:42.163492  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:42.163505  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:42.163527  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:42.233233  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:42.233262  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:42.233280  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:42.311695  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:42.311741  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:42.378134  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:42.378163  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:42.439614  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:42.439647  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:44.953738  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:44.967822  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:44.967884  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:45.004583  165698 cri.go:89] found id: ""
	I0617 12:04:45.004687  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.004732  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:45.004741  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:45.004797  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:45.038912  165698 cri.go:89] found id: ""
	I0617 12:04:45.038939  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.038949  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:45.038957  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:45.039026  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:45.073594  165698 cri.go:89] found id: ""
	I0617 12:04:45.073620  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.073628  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:45.073634  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:45.073684  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:45.108225  165698 cri.go:89] found id: ""
	I0617 12:04:45.108253  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.108261  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:45.108267  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:45.108317  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:45.139522  165698 cri.go:89] found id: ""
	I0617 12:04:45.139545  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.139553  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:45.139559  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:45.139609  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:45.173705  165698 cri.go:89] found id: ""
	I0617 12:04:45.173735  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.173745  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:45.173752  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:45.173813  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:45.206448  165698 cri.go:89] found id: ""
	I0617 12:04:45.206477  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.206486  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:45.206493  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:45.206551  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:45.242925  165698 cri.go:89] found id: ""
	I0617 12:04:45.242952  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.242962  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:45.242981  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:45.242998  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:45.294669  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:45.294700  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:45.307642  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:45.307670  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:45.381764  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:45.381788  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:45.381805  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:45.469022  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:45.469056  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:48.014169  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:48.029895  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:48.029984  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:48.086421  165698 cri.go:89] found id: ""
	I0617 12:04:48.086456  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.086468  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:48.086477  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:48.086554  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:48.135673  165698 cri.go:89] found id: ""
	I0617 12:04:48.135705  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.135713  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:48.135733  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:48.135808  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:48.184330  165698 cri.go:89] found id: ""
	I0617 12:04:48.184353  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.184362  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:48.184368  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:48.184418  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:48.221064  165698 cri.go:89] found id: ""
	I0617 12:04:48.221095  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.221103  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:48.221112  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:48.221175  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:48.264464  165698 cri.go:89] found id: ""
	I0617 12:04:48.264495  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.264502  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:48.264508  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:48.264561  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:48.302144  165698 cri.go:89] found id: ""
	I0617 12:04:48.302180  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.302191  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:48.302199  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:48.302263  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:48.345431  165698 cri.go:89] found id: ""
	I0617 12:04:48.345458  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.345465  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:48.345472  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:48.345539  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:48.383390  165698 cri.go:89] found id: ""
	I0617 12:04:48.383423  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.383434  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:48.383447  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:48.383478  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:48.422328  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:48.422356  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:48.473698  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:48.473735  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:48.488399  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:48.488429  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:48.566851  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:48.566871  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:48.566884  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:51.149626  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:51.162855  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:51.162926  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:51.199056  165698 cri.go:89] found id: ""
	I0617 12:04:51.199091  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.199102  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:51.199109  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:51.199172  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:51.238773  165698 cri.go:89] found id: ""
	I0617 12:04:51.238810  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.238821  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:51.238827  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:51.238883  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:51.279049  165698 cri.go:89] found id: ""
	I0617 12:04:51.279079  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.279092  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:51.279100  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:51.279166  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:51.324923  165698 cri.go:89] found id: ""
	I0617 12:04:51.324957  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.324969  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:51.324976  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:51.325028  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:51.363019  165698 cri.go:89] found id: ""
	I0617 12:04:51.363055  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.363068  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:51.363077  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:51.363142  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:51.399620  165698 cri.go:89] found id: ""
	I0617 12:04:51.399652  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.399661  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:51.399675  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:51.399758  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:51.434789  165698 cri.go:89] found id: ""
	I0617 12:04:51.434824  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.434836  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:51.434844  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:51.434910  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:51.470113  165698 cri.go:89] found id: ""
	I0617 12:04:51.470140  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.470149  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:51.470160  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:51.470176  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:51.526138  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:51.526173  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:51.539451  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:51.539491  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:51.613418  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:51.613437  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:51.613450  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:51.691971  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:51.692010  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:54.234514  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:54.249636  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:54.249724  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:54.283252  165698 cri.go:89] found id: ""
	I0617 12:04:54.283287  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.283300  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:54.283307  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:54.283367  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:54.319153  165698 cri.go:89] found id: ""
	I0617 12:04:54.319207  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.319218  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:54.319226  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:54.319290  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:54.361450  165698 cri.go:89] found id: ""
	I0617 12:04:54.361480  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.361491  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:54.361498  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:54.361562  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:54.397806  165698 cri.go:89] found id: ""
	I0617 12:04:54.397834  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.397843  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:54.397849  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:54.397899  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:54.447119  165698 cri.go:89] found id: ""
	I0617 12:04:54.447147  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.447155  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:54.447161  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:54.447211  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:54.489717  165698 cri.go:89] found id: ""
	I0617 12:04:54.489751  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.489760  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:54.489766  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:54.489830  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:54.532840  165698 cri.go:89] found id: ""
	I0617 12:04:54.532943  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.532975  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:54.532989  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:54.533100  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:54.568227  165698 cri.go:89] found id: ""
	I0617 12:04:54.568369  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.568391  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:54.568403  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:54.568420  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:54.583140  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:54.583174  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:54.661258  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:54.661281  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:54.661296  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:54.750472  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:54.750511  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:54.797438  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:54.797467  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:57.349800  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:57.364820  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:57.364879  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:57.405065  165698 cri.go:89] found id: ""
	I0617 12:04:57.405093  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.405101  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:57.405106  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:57.405153  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:57.445707  165698 cri.go:89] found id: ""
	I0617 12:04:57.445741  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.445752  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:57.445760  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:57.445829  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:57.486911  165698 cri.go:89] found id: ""
	I0617 12:04:57.486940  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.486948  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:57.486955  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:57.487014  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:57.521218  165698 cri.go:89] found id: ""
	I0617 12:04:57.521254  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.521266  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:57.521274  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:57.521342  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:57.555762  165698 cri.go:89] found id: ""
	I0617 12:04:57.555794  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.555803  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:57.555808  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:57.555863  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:57.591914  165698 cri.go:89] found id: ""
	I0617 12:04:57.591945  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.591956  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:57.591971  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:57.592037  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:57.626435  165698 cri.go:89] found id: ""
	I0617 12:04:57.626463  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.626471  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:57.626477  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:57.626527  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:57.665088  165698 cri.go:89] found id: ""
	I0617 12:04:57.665118  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.665126  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:57.665137  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:57.665152  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:57.716284  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:57.716316  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:57.730179  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:57.730204  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:57.808904  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:57.808933  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:57.808954  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:57.894499  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:57.894530  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:00.435957  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:00.450812  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:00.450890  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:00.491404  165698 cri.go:89] found id: ""
	I0617 12:05:00.491432  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.491440  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:00.491446  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:00.491523  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:00.526711  165698 cri.go:89] found id: ""
	I0617 12:05:00.526739  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.526747  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:00.526753  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:00.526817  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:00.562202  165698 cri.go:89] found id: ""
	I0617 12:05:00.562236  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.562246  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:00.562255  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:00.562323  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:00.602754  165698 cri.go:89] found id: ""
	I0617 12:05:00.602790  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.602802  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:00.602811  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:00.602877  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:00.645666  165698 cri.go:89] found id: ""
	I0617 12:05:00.645703  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.645715  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:00.645723  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:00.645788  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:00.684649  165698 cri.go:89] found id: ""
	I0617 12:05:00.684685  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.684694  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:00.684701  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:00.684784  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:00.727139  165698 cri.go:89] found id: ""
	I0617 12:05:00.727160  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.727167  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:00.727173  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:00.727238  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:00.764401  165698 cri.go:89] found id: ""
	I0617 12:05:00.764433  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.764444  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:00.764455  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:00.764474  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:00.777301  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:00.777322  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:00.849752  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:00.849778  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:00.849795  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:00.930220  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:00.930266  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:00.970076  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:00.970116  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:03.526070  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:03.541150  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:03.541229  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:03.584416  165698 cri.go:89] found id: ""
	I0617 12:05:03.584451  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.584463  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:03.584472  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:03.584535  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:03.623509  165698 cri.go:89] found id: ""
	I0617 12:05:03.623543  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.623552  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:03.623558  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:03.623611  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:03.661729  165698 cri.go:89] found id: ""
	I0617 12:05:03.661765  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.661778  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:03.661787  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:03.661852  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:03.702952  165698 cri.go:89] found id: ""
	I0617 12:05:03.702985  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.703008  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:03.703033  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:03.703100  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:03.746534  165698 cri.go:89] found id: ""
	I0617 12:05:03.746570  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.746578  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:03.746584  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:03.746648  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:03.784472  165698 cri.go:89] found id: ""
	I0617 12:05:03.784506  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.784515  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:03.784522  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:03.784580  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:03.821033  165698 cri.go:89] found id: ""
	I0617 12:05:03.821066  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.821077  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:03.821085  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:03.821146  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:03.859438  165698 cri.go:89] found id: ""
	I0617 12:05:03.859474  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.859487  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:03.859497  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:03.859513  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:03.940723  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:03.940770  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:03.986267  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:03.986303  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:04.037999  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:04.038039  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:04.051382  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:04.051415  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:04.121593  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:06.622475  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:06.636761  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:06.636842  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:06.673954  165698 cri.go:89] found id: ""
	I0617 12:05:06.673995  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.674007  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:06.674015  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:06.674084  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:06.708006  165698 cri.go:89] found id: ""
	I0617 12:05:06.708037  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.708047  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:06.708055  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:06.708124  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:06.743819  165698 cri.go:89] found id: ""
	I0617 12:05:06.743852  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.743864  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:06.743872  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:06.743934  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:06.781429  165698 cri.go:89] found id: ""
	I0617 12:05:06.781457  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.781465  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:06.781473  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:06.781540  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:06.818404  165698 cri.go:89] found id: ""
	I0617 12:05:06.818435  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.818447  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:06.818456  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:06.818516  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:06.857880  165698 cri.go:89] found id: ""
	I0617 12:05:06.857913  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.857924  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:06.857933  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:06.857993  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:06.893010  165698 cri.go:89] found id: ""
	I0617 12:05:06.893050  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.893059  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:06.893065  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:06.893118  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:06.926302  165698 cri.go:89] found id: ""
	I0617 12:05:06.926336  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.926347  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:06.926360  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:06.926378  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:06.997173  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:06.997197  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:06.997215  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:07.082843  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:07.082885  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:07.122542  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:07.122572  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:07.177033  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:07.177070  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:09.693217  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:09.707043  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:09.707110  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:09.742892  165698 cri.go:89] found id: ""
	I0617 12:05:09.742918  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.742927  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:09.742933  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:09.742982  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:09.776938  165698 cri.go:89] found id: ""
	I0617 12:05:09.776969  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.776976  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:09.776982  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:09.777030  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:09.813613  165698 cri.go:89] found id: ""
	I0617 12:05:09.813643  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.813651  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:09.813658  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:09.813705  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:09.855483  165698 cri.go:89] found id: ""
	I0617 12:05:09.855516  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.855525  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:09.855532  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:09.855596  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:09.890808  165698 cri.go:89] found id: ""
	I0617 12:05:09.890844  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.890854  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:09.890862  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:09.890930  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:09.927656  165698 cri.go:89] found id: ""
	I0617 12:05:09.927684  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.927693  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:09.927703  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:09.927758  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:09.968130  165698 cri.go:89] found id: ""
	I0617 12:05:09.968163  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.968174  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:09.968183  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:09.968239  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:10.010197  165698 cri.go:89] found id: ""
	I0617 12:05:10.010220  165698 logs.go:276] 0 containers: []
	W0617 12:05:10.010228  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:10.010239  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:10.010252  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:10.063999  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:10.064040  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:10.078837  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:10.078873  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:10.155932  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:10.155954  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:10.155967  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:10.232859  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:10.232901  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:12.772943  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:12.787936  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:12.788024  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:12.828457  165698 cri.go:89] found id: ""
	I0617 12:05:12.828483  165698 logs.go:276] 0 containers: []
	W0617 12:05:12.828491  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:12.828498  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:12.828562  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:12.862265  165698 cri.go:89] found id: ""
	I0617 12:05:12.862296  165698 logs.go:276] 0 containers: []
	W0617 12:05:12.862306  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:12.862313  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:12.862372  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:12.899673  165698 cri.go:89] found id: ""
	I0617 12:05:12.899698  165698 logs.go:276] 0 containers: []
	W0617 12:05:12.899706  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:12.899712  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:12.899759  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:12.943132  165698 cri.go:89] found id: ""
	I0617 12:05:12.943161  165698 logs.go:276] 0 containers: []
	W0617 12:05:12.943169  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:12.943175  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:12.943227  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:12.985651  165698 cri.go:89] found id: ""
	I0617 12:05:12.985677  165698 logs.go:276] 0 containers: []
	W0617 12:05:12.985685  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:12.985691  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:12.985747  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:13.021484  165698 cri.go:89] found id: ""
	I0617 12:05:13.021508  165698 logs.go:276] 0 containers: []
	W0617 12:05:13.021516  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:13.021522  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:13.021569  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:13.060658  165698 cri.go:89] found id: ""
	I0617 12:05:13.060689  165698 logs.go:276] 0 containers: []
	W0617 12:05:13.060705  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:13.060713  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:13.060782  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:13.106008  165698 cri.go:89] found id: ""
	I0617 12:05:13.106041  165698 logs.go:276] 0 containers: []
	W0617 12:05:13.106052  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:13.106066  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:13.106083  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:13.160199  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:13.160231  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:13.173767  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:13.173804  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:13.245358  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:13.245383  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:13.245399  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:13.323046  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:13.323085  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:15.872024  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:15.885550  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:15.885624  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:15.920303  165698 cri.go:89] found id: ""
	I0617 12:05:15.920332  165698 logs.go:276] 0 containers: []
	W0617 12:05:15.920344  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:15.920358  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:15.920423  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:15.955132  165698 cri.go:89] found id: ""
	I0617 12:05:15.955158  165698 logs.go:276] 0 containers: []
	W0617 12:05:15.955166  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:15.955172  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:15.955220  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:15.992995  165698 cri.go:89] found id: ""
	I0617 12:05:15.993034  165698 logs.go:276] 0 containers: []
	W0617 12:05:15.993053  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:15.993060  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:15.993127  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:16.032603  165698 cri.go:89] found id: ""
	I0617 12:05:16.032638  165698 logs.go:276] 0 containers: []
	W0617 12:05:16.032650  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:16.032658  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:16.032716  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:16.071770  165698 cri.go:89] found id: ""
	I0617 12:05:16.071804  165698 logs.go:276] 0 containers: []
	W0617 12:05:16.071816  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:16.071824  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:16.071899  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:16.106172  165698 cri.go:89] found id: ""
	I0617 12:05:16.106206  165698 logs.go:276] 0 containers: []
	W0617 12:05:16.106218  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:16.106226  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:16.106292  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:16.139406  165698 cri.go:89] found id: ""
	I0617 12:05:16.139436  165698 logs.go:276] 0 containers: []
	W0617 12:05:16.139443  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:16.139449  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:16.139517  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:16.174513  165698 cri.go:89] found id: ""
	I0617 12:05:16.174554  165698 logs.go:276] 0 containers: []
	W0617 12:05:16.174565  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:16.174580  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:16.174597  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:16.240912  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:16.240940  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:16.240958  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:16.323853  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:16.323891  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:16.372632  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:16.372659  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:16.428367  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:16.428406  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:18.943551  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:18.957394  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:18.957490  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:18.991967  165698 cri.go:89] found id: ""
	I0617 12:05:18.992006  165698 logs.go:276] 0 containers: []
	W0617 12:05:18.992017  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:18.992027  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:18.992092  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:19.025732  165698 cri.go:89] found id: ""
	I0617 12:05:19.025763  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.025775  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:19.025783  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:19.025856  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:19.061786  165698 cri.go:89] found id: ""
	I0617 12:05:19.061820  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.061830  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:19.061838  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:19.061906  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:19.098819  165698 cri.go:89] found id: ""
	I0617 12:05:19.098856  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.098868  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:19.098876  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:19.098947  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:19.139840  165698 cri.go:89] found id: ""
	I0617 12:05:19.139877  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.139886  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:19.139894  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:19.139965  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:19.176546  165698 cri.go:89] found id: ""
	I0617 12:05:19.176578  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.176590  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:19.176598  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:19.176671  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:19.209948  165698 cri.go:89] found id: ""
	I0617 12:05:19.209985  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.209997  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:19.210005  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:19.210087  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:19.246751  165698 cri.go:89] found id: ""
	I0617 12:05:19.246788  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.246799  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:19.246812  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:19.246830  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:19.322272  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:19.322316  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:19.370147  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:19.370187  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:19.422699  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:19.422749  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:19.437255  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:19.437284  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:19.510077  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:22.010840  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:22.024791  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:22.024879  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:22.060618  165698 cri.go:89] found id: ""
	I0617 12:05:22.060658  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.060667  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:22.060674  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:22.060742  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:22.100228  165698 cri.go:89] found id: ""
	I0617 12:05:22.100259  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.100268  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:22.100274  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:22.100343  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:22.135629  165698 cri.go:89] found id: ""
	I0617 12:05:22.135657  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.135665  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:22.135671  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:22.135730  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:22.186027  165698 cri.go:89] found id: ""
	I0617 12:05:22.186064  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.186076  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:22.186085  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:22.186148  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:22.220991  165698 cri.go:89] found id: ""
	I0617 12:05:22.221019  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.221029  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:22.221037  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:22.221104  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:22.266306  165698 cri.go:89] found id: ""
	I0617 12:05:22.266337  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.266348  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:22.266357  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:22.266414  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:22.303070  165698 cri.go:89] found id: ""
	I0617 12:05:22.303104  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.303116  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:22.303124  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:22.303190  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:22.339792  165698 cri.go:89] found id: ""
	I0617 12:05:22.339819  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.339829  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:22.339840  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:22.339856  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:22.422360  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:22.422397  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:22.465744  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:22.465777  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:22.516199  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:22.516232  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:22.529961  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:22.529983  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:22.601519  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:25.102655  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:25.116893  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:25.116959  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:25.156370  165698 cri.go:89] found id: ""
	I0617 12:05:25.156396  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.156404  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:25.156410  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:25.156468  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:25.193123  165698 cri.go:89] found id: ""
	I0617 12:05:25.193199  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.193221  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:25.193234  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:25.193301  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:25.232182  165698 cri.go:89] found id: ""
	I0617 12:05:25.232209  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.232219  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:25.232227  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:25.232285  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:25.266599  165698 cri.go:89] found id: ""
	I0617 12:05:25.266630  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.266639  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:25.266645  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:25.266701  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:25.308732  165698 cri.go:89] found id: ""
	I0617 12:05:25.308762  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.308770  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:25.308776  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:25.308836  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:25.348817  165698 cri.go:89] found id: ""
	I0617 12:05:25.348858  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.348871  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:25.348879  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:25.348946  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:25.389343  165698 cri.go:89] found id: ""
	I0617 12:05:25.389375  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.389387  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:25.389393  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:25.389452  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:25.427014  165698 cri.go:89] found id: ""
	I0617 12:05:25.427043  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.427055  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:25.427067  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:25.427083  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:25.441361  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:25.441390  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:25.518967  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:25.518993  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:25.519006  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:25.601411  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:25.601450  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:25.651636  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:25.651674  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:28.202148  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:28.215710  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:28.215792  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:28.254961  165698 cri.go:89] found id: ""
	I0617 12:05:28.254986  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.255000  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:28.255007  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:28.255061  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:28.292574  165698 cri.go:89] found id: ""
	I0617 12:05:28.292606  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.292614  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:28.292620  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:28.292683  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:28.329036  165698 cri.go:89] found id: ""
	I0617 12:05:28.329067  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.329077  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:28.329085  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:28.329152  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:28.366171  165698 cri.go:89] found id: ""
	I0617 12:05:28.366197  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.366206  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:28.366212  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:28.366273  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:28.401380  165698 cri.go:89] found id: ""
	I0617 12:05:28.401407  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.401417  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:28.401424  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:28.401486  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:28.438767  165698 cri.go:89] found id: ""
	I0617 12:05:28.438798  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.438810  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:28.438817  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:28.438876  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:28.472706  165698 cri.go:89] found id: ""
	I0617 12:05:28.472761  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.472772  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:28.472779  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:28.472829  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:28.509525  165698 cri.go:89] found id: ""
	I0617 12:05:28.509548  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.509556  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:28.509565  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:28.509577  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:28.606008  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:28.606059  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:28.665846  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:28.665874  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:28.721599  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:28.721627  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:28.735040  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:28.735062  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:28.811954  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:31.312554  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:31.326825  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:31.326905  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:31.364862  165698 cri.go:89] found id: ""
	I0617 12:05:31.364891  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.364902  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:31.364910  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:31.364976  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:31.396979  165698 cri.go:89] found id: ""
	I0617 12:05:31.397013  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.397027  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:31.397035  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:31.397098  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:31.430617  165698 cri.go:89] found id: ""
	I0617 12:05:31.430647  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.430657  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:31.430665  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:31.430728  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:31.462308  165698 cri.go:89] found id: ""
	I0617 12:05:31.462338  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.462345  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:31.462350  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:31.462399  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:31.495406  165698 cri.go:89] found id: ""
	I0617 12:05:31.495435  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.495444  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:31.495452  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:31.495553  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:31.538702  165698 cri.go:89] found id: ""
	I0617 12:05:31.538729  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.538739  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:31.538750  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:31.538813  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:31.572637  165698 cri.go:89] found id: ""
	I0617 12:05:31.572666  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.572677  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:31.572685  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:31.572745  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:31.609307  165698 cri.go:89] found id: ""
	I0617 12:05:31.609341  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.609352  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:31.609364  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:31.609380  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:31.622445  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:31.622471  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:31.699170  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:31.699191  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:31.699209  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:31.775115  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:31.775156  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:31.815836  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:31.815866  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:34.372097  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:34.393542  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:34.393607  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:34.437265  165698 cri.go:89] found id: ""
	I0617 12:05:34.437294  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.437305  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:34.437314  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:34.437382  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:34.474566  165698 cri.go:89] found id: ""
	I0617 12:05:34.474596  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.474609  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:34.474617  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:34.474680  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:34.510943  165698 cri.go:89] found id: ""
	I0617 12:05:34.510975  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.510986  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:34.511000  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:34.511072  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:34.548124  165698 cri.go:89] found id: ""
	I0617 12:05:34.548160  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.548172  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:34.548179  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:34.548241  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:34.582428  165698 cri.go:89] found id: ""
	I0617 12:05:34.582453  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.582460  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:34.582467  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:34.582514  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:34.616895  165698 cri.go:89] found id: ""
	I0617 12:05:34.616937  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.616950  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:34.616957  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:34.617019  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:34.656116  165698 cri.go:89] found id: ""
	I0617 12:05:34.656144  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.656155  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:34.656162  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:34.656226  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:34.695649  165698 cri.go:89] found id: ""
	I0617 12:05:34.695680  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.695692  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:34.695705  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:34.695722  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:34.747910  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:34.747956  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:34.762177  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:34.762206  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:34.840395  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:34.840423  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:34.840440  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:34.922962  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:34.923002  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:37.464659  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:37.480351  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:37.480416  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:37.521249  165698 cri.go:89] found id: ""
	I0617 12:05:37.521279  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.521286  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:37.521293  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:37.521340  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:37.561053  165698 cri.go:89] found id: ""
	I0617 12:05:37.561079  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.561087  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:37.561094  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:37.561151  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:37.599019  165698 cri.go:89] found id: ""
	I0617 12:05:37.599057  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.599066  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:37.599074  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:37.599134  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:37.638276  165698 cri.go:89] found id: ""
	I0617 12:05:37.638304  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.638315  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:37.638323  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:37.638389  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:37.677819  165698 cri.go:89] found id: ""
	I0617 12:05:37.677845  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.677853  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:37.677859  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:37.677910  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:37.715850  165698 cri.go:89] found id: ""
	I0617 12:05:37.715877  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.715888  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:37.715897  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:37.715962  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:37.755533  165698 cri.go:89] found id: ""
	I0617 12:05:37.755563  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.755570  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:37.755576  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:37.755636  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:37.791826  165698 cri.go:89] found id: ""
	I0617 12:05:37.791850  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.791859  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:37.791872  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:37.791888  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:37.844824  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:37.844853  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:37.860933  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:37.860963  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:37.926497  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:37.926519  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:37.926535  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:38.003814  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:38.003853  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:40.546386  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:40.560818  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:40.560896  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:40.596737  165698 cri.go:89] found id: ""
	I0617 12:05:40.596777  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.596784  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:40.596791  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:40.596844  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:40.631518  165698 cri.go:89] found id: ""
	I0617 12:05:40.631556  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.631570  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:40.631611  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:40.631683  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:40.674962  165698 cri.go:89] found id: ""
	I0617 12:05:40.674997  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.675006  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:40.675012  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:40.675064  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:40.716181  165698 cri.go:89] found id: ""
	I0617 12:05:40.716210  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.716218  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:40.716226  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:40.716286  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:40.756312  165698 cri.go:89] found id: ""
	I0617 12:05:40.756339  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.756348  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:40.756353  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:40.756406  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:40.791678  165698 cri.go:89] found id: ""
	I0617 12:05:40.791733  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.791750  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:40.791759  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:40.791830  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:40.830717  165698 cri.go:89] found id: ""
	I0617 12:05:40.830754  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.830766  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:40.830774  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:40.830854  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:40.868139  165698 cri.go:89] found id: ""
	I0617 12:05:40.868169  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.868178  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:40.868198  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:40.868224  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:40.920319  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:40.920353  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:40.934948  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:40.934974  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:41.005349  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
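Every "describe nodes" attempt fails the same way: nothing is listening on localhost:8443 because no kube-apiserver container has come up yet, so kubectl gets "connection refused". A hand check for that condition, as a hedged sketch run inside the minikube VM (the ss probe is an added assumption, not something the log runs; the kubectl call and paths are taken verbatim from the log):

	# Nothing should be serving the apiserver port while the control plane is down
	sudo ss -ltnp | grep ':8443' || echo 'no listener on :8443'
	# The same describe-nodes call minikube retries, using its pinned kubectl
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig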
	I0617 12:05:41.005371  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:41.005388  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:41.086783  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:41.086842  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
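The block above is one full pass of minikube's diagnostic loop: it probes CRI-O for each expected control-plane container by name and, finding none, falls back to gathering kubelet, dmesg, describe-nodes, CRI-O and container-status output. A minimal sketch of the same pass run by hand, assuming SSH access to the minikube node and CRI-O as the runtime (all commands mirror the log above):

	# Does a kube-apiserver process exist for this profile at all?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# Ask CRI-O for each expected control-plane container by name, as the loop above does
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  sudo crictl ps -a --quiet --name="$name"
	done
	# With no containers found, fall back to host-level logs
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo journalctl -u crio -n 400
	sudo crictl ps -a || sudo docker ps -a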
	I0617 12:05:43.625515  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:43.638942  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:43.639019  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:43.673703  165698 cri.go:89] found id: ""
	I0617 12:05:43.673735  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.673747  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:43.673756  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:43.673822  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:43.709417  165698 cri.go:89] found id: ""
	I0617 12:05:43.709449  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.709460  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:43.709468  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:43.709529  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:43.742335  165698 cri.go:89] found id: ""
	I0617 12:05:43.742368  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.742379  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:43.742389  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:43.742449  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:43.779112  165698 cri.go:89] found id: ""
	I0617 12:05:43.779141  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.779150  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:43.779155  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:43.779219  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:43.813362  165698 cri.go:89] found id: ""
	I0617 12:05:43.813397  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.813406  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:43.813414  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:43.813464  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:43.850456  165698 cri.go:89] found id: ""
	I0617 12:05:43.850484  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.850493  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:43.850499  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:43.850547  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:43.884527  165698 cri.go:89] found id: ""
	I0617 12:05:43.884555  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.884564  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:43.884571  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:43.884632  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:43.921440  165698 cri.go:89] found id: ""
	I0617 12:05:43.921476  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.921488  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:43.921501  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:43.921517  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:43.973687  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:43.973727  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:43.988114  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:43.988143  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:44.055084  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:44.055119  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:44.055138  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:44.134628  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:44.134665  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:46.677852  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:46.690688  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:46.690747  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:46.724055  165698 cri.go:89] found id: ""
	I0617 12:05:46.724090  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.724101  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:46.724110  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:46.724171  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:46.759119  165698 cri.go:89] found id: ""
	I0617 12:05:46.759150  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.759161  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:46.759169  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:46.759227  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:46.796392  165698 cri.go:89] found id: ""
	I0617 12:05:46.796424  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.796435  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:46.796442  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:46.796504  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:46.831727  165698 cri.go:89] found id: ""
	I0617 12:05:46.831761  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.831770  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:46.831777  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:46.831845  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:46.866662  165698 cri.go:89] found id: ""
	I0617 12:05:46.866693  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.866702  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:46.866708  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:46.866757  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:46.905045  165698 cri.go:89] found id: ""
	I0617 12:05:46.905070  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.905078  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:46.905084  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:46.905130  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:46.940879  165698 cri.go:89] found id: ""
	I0617 12:05:46.940907  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.940915  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:46.940926  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:46.940974  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:46.977247  165698 cri.go:89] found id: ""
	I0617 12:05:46.977290  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.977301  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:46.977314  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:46.977331  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:47.046094  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:47.046116  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:47.046133  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:47.122994  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:47.123038  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:47.166273  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:47.166313  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:47.221392  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:47.221429  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:49.739113  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:49.752880  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:49.753004  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:49.791177  165698 cri.go:89] found id: ""
	I0617 12:05:49.791218  165698 logs.go:276] 0 containers: []
	W0617 12:05:49.791242  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:49.791251  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:49.791322  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:49.831602  165698 cri.go:89] found id: ""
	I0617 12:05:49.831633  165698 logs.go:276] 0 containers: []
	W0617 12:05:49.831644  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:49.831652  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:49.831719  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:49.870962  165698 cri.go:89] found id: ""
	I0617 12:05:49.870998  165698 logs.go:276] 0 containers: []
	W0617 12:05:49.871011  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:49.871019  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:49.871092  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:49.917197  165698 cri.go:89] found id: ""
	I0617 12:05:49.917232  165698 logs.go:276] 0 containers: []
	W0617 12:05:49.917243  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:49.917252  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:49.917320  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:49.952997  165698 cri.go:89] found id: ""
	I0617 12:05:49.953034  165698 logs.go:276] 0 containers: []
	W0617 12:05:49.953047  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:49.953056  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:49.953114  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:50.001925  165698 cri.go:89] found id: ""
	I0617 12:05:50.001965  165698 logs.go:276] 0 containers: []
	W0617 12:05:50.001977  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:50.001986  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:50.002059  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:50.043374  165698 cri.go:89] found id: ""
	I0617 12:05:50.043403  165698 logs.go:276] 0 containers: []
	W0617 12:05:50.043412  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:50.043419  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:50.043496  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:50.082974  165698 cri.go:89] found id: ""
	I0617 12:05:50.083009  165698 logs.go:276] 0 containers: []
	W0617 12:05:50.083020  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:50.083029  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:50.083043  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:50.134116  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:50.134159  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:50.148478  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:50.148511  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:50.227254  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:50.227276  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:50.227288  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:50.305920  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:50.305960  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:52.848811  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:52.862612  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:52.862669  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:52.896379  165698 cri.go:89] found id: ""
	I0617 12:05:52.896410  165698 logs.go:276] 0 containers: []
	W0617 12:05:52.896421  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:52.896429  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:52.896488  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:52.933387  165698 cri.go:89] found id: ""
	I0617 12:05:52.933422  165698 logs.go:276] 0 containers: []
	W0617 12:05:52.933432  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:52.933439  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:52.933501  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:52.971055  165698 cri.go:89] found id: ""
	I0617 12:05:52.971091  165698 logs.go:276] 0 containers: []
	W0617 12:05:52.971102  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:52.971110  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:52.971168  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:53.003815  165698 cri.go:89] found id: ""
	I0617 12:05:53.003846  165698 logs.go:276] 0 containers: []
	W0617 12:05:53.003857  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:53.003864  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:53.003927  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:53.039133  165698 cri.go:89] found id: ""
	I0617 12:05:53.039161  165698 logs.go:276] 0 containers: []
	W0617 12:05:53.039169  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:53.039175  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:53.039229  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:53.077703  165698 cri.go:89] found id: ""
	I0617 12:05:53.077756  165698 logs.go:276] 0 containers: []
	W0617 12:05:53.077773  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:53.077780  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:53.077831  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:53.119187  165698 cri.go:89] found id: ""
	I0617 12:05:53.119216  165698 logs.go:276] 0 containers: []
	W0617 12:05:53.119223  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:53.119230  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:53.119287  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:53.154423  165698 cri.go:89] found id: ""
	I0617 12:05:53.154457  165698 logs.go:276] 0 containers: []
	W0617 12:05:53.154467  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:53.154480  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:53.154496  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:53.202745  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:53.202778  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:53.216510  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:53.216537  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:53.295687  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:53.295712  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:53.295732  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:53.375064  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:53.375095  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:55.915113  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:55.929155  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:55.929239  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:55.964589  165698 cri.go:89] found id: ""
	I0617 12:05:55.964625  165698 logs.go:276] 0 containers: []
	W0617 12:05:55.964634  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:55.964640  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:55.964702  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:56.003659  165698 cri.go:89] found id: ""
	I0617 12:05:56.003691  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.003701  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:56.003709  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:56.003778  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:56.039674  165698 cri.go:89] found id: ""
	I0617 12:05:56.039707  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.039717  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:56.039724  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:56.039786  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:56.077695  165698 cri.go:89] found id: ""
	I0617 12:05:56.077736  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.077748  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:56.077756  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:56.077826  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:56.116397  165698 cri.go:89] found id: ""
	I0617 12:05:56.116430  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.116442  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:56.116451  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:56.116512  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:56.152395  165698 cri.go:89] found id: ""
	I0617 12:05:56.152433  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.152445  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:56.152454  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:56.152513  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:56.189740  165698 cri.go:89] found id: ""
	I0617 12:05:56.189776  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.189788  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:56.189796  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:56.189866  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:56.228017  165698 cri.go:89] found id: ""
	I0617 12:05:56.228047  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.228055  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:56.228063  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:56.228076  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:56.279032  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:56.279079  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:56.294369  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:56.294394  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:56.369507  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:56.369535  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:56.369551  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:56.454797  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:56.454833  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:58.995221  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:59.008481  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:59.008555  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:59.043854  165698 cri.go:89] found id: ""
	I0617 12:05:59.043887  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.043914  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:59.043935  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:59.044003  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:59.081488  165698 cri.go:89] found id: ""
	I0617 12:05:59.081522  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.081530  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:59.081537  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:59.081596  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:59.118193  165698 cri.go:89] found id: ""
	I0617 12:05:59.118222  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.118232  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:59.118240  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:59.118306  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:59.150286  165698 cri.go:89] found id: ""
	I0617 12:05:59.150315  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.150327  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:59.150335  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:59.150381  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:59.191426  165698 cri.go:89] found id: ""
	I0617 12:05:59.191450  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.191485  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:59.191493  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:59.191547  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:59.224933  165698 cri.go:89] found id: ""
	I0617 12:05:59.224965  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.224974  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:59.224998  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:59.225061  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:59.255929  165698 cri.go:89] found id: ""
	I0617 12:05:59.255956  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.255965  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:59.255971  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:59.256025  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:59.293072  165698 cri.go:89] found id: ""
	I0617 12:05:59.293097  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.293104  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:59.293114  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:59.293126  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:59.354240  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:59.354267  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:59.367715  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:59.367744  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:59.446352  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:59.446381  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:59.446396  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:59.528701  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:59.528738  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:02.071616  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:06:02.088050  165698 kubeadm.go:591] duration metric: took 4m3.493743262s to restartPrimaryControlPlane
	W0617 12:06:02.088159  165698 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0617 12:06:02.088194  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0617 12:06:02.552133  165698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:06:02.570136  165698 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:06:02.582299  165698 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:06:02.594775  165698 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:06:02.594809  165698 kubeadm.go:156] found existing configuration files:
	
	I0617 12:06:02.594867  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:06:02.605875  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:06:02.605954  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:06:02.617780  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:06:02.628284  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:06:02.628359  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:06:02.639128  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:06:02.650079  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:06:02.650144  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:06:02.660879  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:06:02.671170  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:06:02.671249  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
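At this point minikube has given up on restarting the existing control plane after roughly four minutes: it resets the node with kubeadm, finds none of the expected kubeconfig files under /etc/kubernetes, and removes any stale ones before re-initializing. The equivalent manual cleanup, as a sketch using the same commands and paths the log shows (assumes it is run inside the minikube VM, where /var/lib/minikube/binaries/v1.20.0 holds the pinned kubeadm):

	# Tear down the half-started control plane (same flags as in the log)
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	  kubeadm reset --cri-socket /var/run/crio/crio.sock --force
	# Remove kubeconfigs that do not point at the expected control-plane endpoint
	for f in admin kubelet controller-manager scheduler; do
	  if ! sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf 2>/dev/null; then
	    sudo rm -f /etc/kubernetes/$f.conf
	  fi
	done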
	I0617 12:06:02.682071  165698 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0617 12:06:02.753750  165698 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0617 12:06:02.753913  165698 kubeadm.go:309] [preflight] Running pre-flight checks
	I0617 12:06:02.897384  165698 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0617 12:06:02.897530  165698 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0617 12:06:02.897685  165698 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0617 12:06:03.079116  165698 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0617 12:06:03.080903  165698 out.go:204]   - Generating certificates and keys ...
	I0617 12:06:03.081006  165698 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0617 12:06:03.081080  165698 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0617 12:06:03.081168  165698 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0617 12:06:03.081250  165698 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0617 12:06:03.081377  165698 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0617 12:06:03.081457  165698 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0617 12:06:03.082418  165698 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0617 12:06:03.083003  165698 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0617 12:06:03.083917  165698 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0617 12:06:03.084820  165698 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0617 12:06:03.085224  165698 kubeadm.go:309] [certs] Using the existing "sa" key
	I0617 12:06:03.085307  165698 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0617 12:06:03.203342  165698 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0617 12:06:03.430428  165698 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0617 12:06:03.570422  165698 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0617 12:06:03.772092  165698 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0617 12:06:03.793105  165698 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0617 12:06:03.793206  165698 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0617 12:06:03.793261  165698 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0617 12:06:03.919738  165698 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0617 12:06:03.921593  165698 out.go:204]   - Booting up control plane ...
	I0617 12:06:03.921708  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0617 12:06:03.928168  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0617 12:06:03.928279  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0617 12:06:03.937197  165698 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0617 12:06:03.939967  165698 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0617 12:06:43.941225  165698 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0617 12:06:43.941341  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:06:43.941612  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:06:48.942159  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:06:48.942434  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:06:58.942977  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:06:58.943290  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:07:18.944149  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:07:18.944368  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:07:58.946943  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:07:58.947220  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:07:58.947233  165698 kubeadm.go:309] 
	I0617 12:07:58.947316  165698 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0617 12:07:58.947393  165698 kubeadm.go:309] 		timed out waiting for the condition
	I0617 12:07:58.947406  165698 kubeadm.go:309] 
	I0617 12:07:58.947449  165698 kubeadm.go:309] 	This error is likely caused by:
	I0617 12:07:58.947528  165698 kubeadm.go:309] 		- The kubelet is not running
	I0617 12:07:58.947690  165698 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0617 12:07:58.947699  165698 kubeadm.go:309] 
	I0617 12:07:58.947860  165698 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0617 12:07:58.947924  165698 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0617 12:07:58.947976  165698 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0617 12:07:58.947991  165698 kubeadm.go:309] 
	I0617 12:07:58.948132  165698 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0617 12:07:58.948247  165698 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0617 12:07:58.948260  165698 kubeadm.go:309] 
	I0617 12:07:58.948406  165698 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0617 12:07:58.948539  165698 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0617 12:07:58.948639  165698 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0617 12:07:58.948740  165698 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0617 12:07:58.948750  165698 kubeadm.go:309] 
	I0617 12:07:58.949270  165698 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0617 12:07:58.949403  165698 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0617 12:07:58.949508  165698 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0617 12:07:58.949630  165698 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
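kubeadm's own failure message points at the kubelet: the health endpoint on 127.0.0.1:10248 never answered within the 4m0s wait, so no static pods were ever admitted. The troubleshooting steps it suggests, collected into one sketch to run on the node (the healthz probe and CRI-O socket path are taken from the log above; CONTAINERID is a placeholder, as in kubeadm's output):

	# Is the kubelet running at all, and what is it logging?
	systemctl status kubelet
	journalctl -xeu kubelet
	# Probe the health endpoint kubeadm was polling
	curl -sSL http://localhost:10248/healthz
	# Did any control-plane container start and then crash?
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Inspect a failing container's logs (CONTAINERID from the previous command)
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID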
	
	I0617 12:07:58.949694  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0617 12:07:59.418622  165698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:07:59.435367  165698 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:07:59.449365  165698 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:07:59.449384  165698 kubeadm.go:156] found existing configuration files:
	
	I0617 12:07:59.449430  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:07:59.461411  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:07:59.461478  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:07:59.471262  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:07:59.480591  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:07:59.480640  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:07:59.490152  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:07:59.499248  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:07:59.499300  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:07:59.508891  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:07:59.518114  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:07:59.518152  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 12:07:59.528190  165698 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0617 12:07:59.592831  165698 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0617 12:07:59.592949  165698 kubeadm.go:309] [preflight] Running pre-flight checks
	I0617 12:07:59.752802  165698 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0617 12:07:59.752947  165698 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0617 12:07:59.753079  165698 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0617 12:07:59.984221  165698 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0617 12:07:59.986165  165698 out.go:204]   - Generating certificates and keys ...
	I0617 12:07:59.986270  165698 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0617 12:07:59.986391  165698 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0617 12:07:59.986522  165698 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0617 12:07:59.986606  165698 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0617 12:07:59.986717  165698 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0617 12:07:59.986795  165698 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0617 12:07:59.986887  165698 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0617 12:07:59.986972  165698 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0617 12:07:59.987081  165698 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0617 12:07:59.987191  165698 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0617 12:07:59.987250  165698 kubeadm.go:309] [certs] Using the existing "sa" key
	I0617 12:07:59.987331  165698 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0617 12:08:00.155668  165698 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0617 12:08:00.303780  165698 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0617 12:08:00.369907  165698 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0617 12:08:00.506550  165698 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0617 12:08:00.529943  165698 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0617 12:08:00.531684  165698 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0617 12:08:00.531756  165698 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0617 12:08:00.667972  165698 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0617 12:08:00.671036  165698 out.go:204]   - Booting up control plane ...
	I0617 12:08:00.671171  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0617 12:08:00.677241  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0617 12:08:00.678999  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0617 12:08:00.681119  165698 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0617 12:08:00.684535  165698 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0617 12:08:40.686610  165698 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0617 12:08:40.686950  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:08:40.687194  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:08:45.687594  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:08:45.687820  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:08:55.688285  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:08:55.688516  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:09:15.689306  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:09:15.689556  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:09:55.688872  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:09:55.689162  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:09:55.689206  165698 kubeadm.go:309] 
	I0617 12:09:55.689284  165698 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0617 12:09:55.689342  165698 kubeadm.go:309] 		timed out waiting for the condition
	I0617 12:09:55.689354  165698 kubeadm.go:309] 
	I0617 12:09:55.689418  165698 kubeadm.go:309] 	This error is likely caused by:
	I0617 12:09:55.689480  165698 kubeadm.go:309] 		- The kubelet is not running
	I0617 12:09:55.689632  165698 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0617 12:09:55.689657  165698 kubeadm.go:309] 
	I0617 12:09:55.689791  165698 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0617 12:09:55.689844  165698 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0617 12:09:55.689916  165698 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0617 12:09:55.689926  165698 kubeadm.go:309] 
	I0617 12:09:55.690059  165698 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0617 12:09:55.690140  165698 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0617 12:09:55.690159  165698 kubeadm.go:309] 
	I0617 12:09:55.690258  165698 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0617 12:09:55.690343  165698 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0617 12:09:55.690434  165698 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0617 12:09:55.690530  165698 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0617 12:09:55.690546  165698 kubeadm.go:309] 
	I0617 12:09:55.691495  165698 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0617 12:09:55.691595  165698 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0617 12:09:55.691708  165698 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0617 12:09:55.691787  165698 kubeadm.go:393] duration metric: took 7m57.151326537s to StartCluster
	I0617 12:09:55.691844  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:09:55.691904  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:09:55.746514  165698 cri.go:89] found id: ""
	I0617 12:09:55.746550  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.746563  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:09:55.746572  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:09:55.746636  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:09:55.789045  165698 cri.go:89] found id: ""
	I0617 12:09:55.789083  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.789095  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:09:55.789103  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:09:55.789169  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:09:55.829492  165698 cri.go:89] found id: ""
	I0617 12:09:55.829533  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.829542  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:09:55.829547  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:09:55.829614  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:09:55.865213  165698 cri.go:89] found id: ""
	I0617 12:09:55.865246  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.865262  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:09:55.865267  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:09:55.865318  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:09:55.904067  165698 cri.go:89] found id: ""
	I0617 12:09:55.904102  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.904113  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:09:55.904122  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:09:55.904187  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:09:55.938441  165698 cri.go:89] found id: ""
	I0617 12:09:55.938471  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.938478  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:09:55.938487  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:09:55.938538  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:09:55.975669  165698 cri.go:89] found id: ""
	I0617 12:09:55.975710  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.975723  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:09:55.975731  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:09:55.975804  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:09:56.015794  165698 cri.go:89] found id: ""
	I0617 12:09:56.015826  165698 logs.go:276] 0 containers: []
	W0617 12:09:56.015837  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:09:56.015851  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:09:56.015868  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:09:56.095533  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:09:56.095557  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:09:56.095573  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:09:56.220817  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:09:56.220857  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:09:56.261470  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:09:56.261507  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:09:56.325626  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:09:56.325673  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0617 12:09:56.345438  165698 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0617 12:09:56.345491  165698 out.go:239] * 
	W0617 12:09:56.345606  165698 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0617 12:09:56.345635  165698 out.go:239] * 
	W0617 12:09:56.346583  165698 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 12:09:56.349928  165698 out.go:177] 
	W0617 12:09:56.351067  165698 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0617 12:09:56.351127  165698 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0617 12:09:56.351157  165698 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0617 12:09:56.352487  165698 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-003661 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
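The stderr above already spells out the diagnosis path: check kubelet health, then inspect the CRI-O containers, and minikube's own suggestion is to retry with the systemd cgroup driver. A minimal manual sketch of those steps follows; the `minikube ssh -p` wrapper and the abbreviated re-run flags are assumptions for illustration, while the node-side commands are the ones quoted in the log.

	# on the node, e.g. via: out/minikube-linux-amd64 ssh -p old-k8s-version-003661
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID    # substitute a failing container id

	# then retry the start with the cgroup driver minikube suggests (other flags as in the failing invocation above)
	out/minikube-linux-amd64 start -p old-k8s-version-003661 --memory=2200 --driver=kvm2 \
		--container-runtime=crio --kubernetes-version=v1.20.0 \
		--extra-config=kubelet.cgroup-driver=systemd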
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-003661 -n old-k8s-version-003661
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-003661 -n old-k8s-version-003661: exit status 2 (244.656308ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
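The boxed advice in the stderr asks for a full log bundle when reporting this upstream, while the post-mortem below only keeps the last 25 lines (`logs -n 25`). A fuller capture for an issue would look like the following; the profile flag is added here for this run, otherwise it is the command the box suggests.

	out/minikube-linux-amd64 -p old-k8s-version-003661 logs --file=logs.txt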
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-003661 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-003661 logs -n 25: (1.552210168s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-514753                              | cert-expiration-514753       | jenkins | v1.33.1 | 17 Jun 24 11:52 UTC | 17 Jun 24 11:52 UTC |
	| start   | -p embed-certs-136195                                  | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:52 UTC | 17 Jun 24 11:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-152830             | no-preload-152830            | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC | 17 Jun 24 11:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-152830                                   | no-preload-152830            | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-136195            | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC | 17 Jun 24 11:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-136195                                  | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC | 17 Jun 24 11:55 UTC |
	| start   | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:55 UTC |
	| delete  | -p                                                     | disable-driver-mounts-960277 | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:55 UTC |
	|         | disable-driver-mounts-960277                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:56 UTC |
	|         | default-k8s-diff-port-991309                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-152830                  | no-preload-152830            | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-152830                                   | no-preload-152830            | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC | 17 Jun 24 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-136195                 | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-003661        | old-k8s-version-003661       | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-136195                                  | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC | 17 Jun 24 12:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-991309  | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:57 UTC | 17 Jun 24 11:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:57 UTC |                     |
	|         | default-k8s-diff-port-991309                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-003661                              | old-k8s-version-003661       | jenkins | v1.33.1 | 17 Jun 24 11:58 UTC | 17 Jun 24 11:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-003661             | old-k8s-version-003661       | jenkins | v1.33.1 | 17 Jun 24 11:58 UTC | 17 Jun 24 11:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-003661                              | old-k8s-version-003661       | jenkins | v1.33.1 | 17 Jun 24 11:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-991309       | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:59 UTC | 17 Jun 24 12:06 UTC |
	|         | default-k8s-diff-port-991309                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/17 11:59:37
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0617 11:59:37.428028  166103 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:59:37.428266  166103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:59:37.428274  166103 out.go:304] Setting ErrFile to fd 2...
	I0617 11:59:37.428279  166103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:59:37.428472  166103 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:59:37.429026  166103 out.go:298] Setting JSON to false
	I0617 11:59:37.429968  166103 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6124,"bootTime":1718619453,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0617 11:59:37.430026  166103 start.go:139] virtualization: kvm guest
	I0617 11:59:37.432171  166103 out.go:177] * [default-k8s-diff-port-991309] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0617 11:59:37.433521  166103 out.go:177]   - MINIKUBE_LOCATION=19084
	I0617 11:59:37.433548  166103 notify.go:220] Checking for updates...
	I0617 11:59:37.434850  166103 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 11:59:37.436099  166103 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 11:59:37.437362  166103 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 11:59:37.438535  166103 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0617 11:59:37.439644  166103 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 11:59:37.441113  166103 config.go:182] Loaded profile config "default-k8s-diff-port-991309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:59:37.441563  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:59:37.441645  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:59:37.456875  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45565
	I0617 11:59:37.457306  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:59:37.457839  166103 main.go:141] libmachine: Using API Version  1
	I0617 11:59:37.457861  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:59:37.458188  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:59:37.458381  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 11:59:37.458626  166103 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 11:59:37.458927  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:59:37.458971  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:59:37.474024  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45165
	I0617 11:59:37.474411  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:59:37.474873  166103 main.go:141] libmachine: Using API Version  1
	I0617 11:59:37.474899  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:59:37.475199  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:59:37.475383  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 11:59:37.507955  166103 out.go:177] * Using the kvm2 driver based on existing profile
	I0617 11:59:37.509134  166103 start.go:297] selected driver: kvm2
	I0617 11:59:37.509148  166103 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-991309 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-991309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:59:37.509249  166103 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 11:59:37.509927  166103 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:59:37.510004  166103 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19084-112967/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0617 11:59:37.525340  166103 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0617 11:59:37.525701  166103 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 11:59:37.525761  166103 cni.go:84] Creating CNI manager for ""
	I0617 11:59:37.525779  166103 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 11:59:37.525812  166103 start.go:340] cluster config:
	{Name:default-k8s-diff-port-991309 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-991309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:59:37.525910  166103 iso.go:125] acquiring lock: {Name:mk4a199ad46ed9ee04de7b54caf7cc64218fe80c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:59:37.527756  166103 out.go:177] * Starting "default-k8s-diff-port-991309" primary control-plane node in "default-k8s-diff-port-991309" cluster
	I0617 11:59:36.391800  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 11:59:37.529104  166103 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 11:59:37.529159  166103 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0617 11:59:37.529171  166103 cache.go:56] Caching tarball of preloaded images
	I0617 11:59:37.529246  166103 preload.go:173] Found /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0617 11:59:37.529256  166103 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0617 11:59:37.529368  166103 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/config.json ...
	I0617 11:59:37.529565  166103 start.go:360] acquireMachinesLock for default-k8s-diff-port-991309: {Name:mk519b8956d160a9d2b042f25b899a5ee0efa72e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 11:59:42.471684  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 11:59:45.543735  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 11:59:51.623725  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 11:59:54.695811  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:00.775775  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:03.847736  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:09.927768  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:12.999728  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:19.079809  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:22.151737  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:28.231763  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:31.303775  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:37.383783  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:40.455809  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:46.535757  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:49.607769  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:55.687772  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:58.759722  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:01:04.839736  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:01:07.911780  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:01:10.916735  165060 start.go:364] duration metric: took 4m27.471308215s to acquireMachinesLock for "embed-certs-136195"
	I0617 12:01:10.916814  165060 start.go:96] Skipping create...Using existing machine configuration
	I0617 12:01:10.916827  165060 fix.go:54] fixHost starting: 
	I0617 12:01:10.917166  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:10.917203  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:10.932217  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43235
	I0617 12:01:10.932742  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:10.933241  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:10.933261  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:10.933561  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:10.933766  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:10.933939  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetState
	I0617 12:01:10.935452  165060 fix.go:112] recreateIfNeeded on embed-certs-136195: state=Stopped err=<nil>
	I0617 12:01:10.935660  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	W0617 12:01:10.935831  165060 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 12:01:10.937510  165060 out.go:177] * Restarting existing kvm2 VM for "embed-certs-136195" ...
	I0617 12:01:10.938708  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Start
	I0617 12:01:10.938873  165060 main.go:141] libmachine: (embed-certs-136195) Ensuring networks are active...
	I0617 12:01:10.939602  165060 main.go:141] libmachine: (embed-certs-136195) Ensuring network default is active
	I0617 12:01:10.939896  165060 main.go:141] libmachine: (embed-certs-136195) Ensuring network mk-embed-certs-136195 is active
	I0617 12:01:10.940260  165060 main.go:141] libmachine: (embed-certs-136195) Getting domain xml...
	I0617 12:01:10.940881  165060 main.go:141] libmachine: (embed-certs-136195) Creating domain...
	I0617 12:01:12.136267  165060 main.go:141] libmachine: (embed-certs-136195) Waiting to get IP...
	I0617 12:01:12.137303  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:12.137692  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:12.137777  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:12.137684  166451 retry.go:31] will retry after 261.567272ms: waiting for machine to come up
	I0617 12:01:12.401390  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:12.401845  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:12.401873  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:12.401816  166451 retry.go:31] will retry after 332.256849ms: waiting for machine to come up
	I0617 12:01:12.735421  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:12.735842  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:12.735872  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:12.735783  166451 retry.go:31] will retry after 457.313241ms: waiting for machine to come up
	I0617 12:01:13.194621  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:13.195073  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:13.195091  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:13.195036  166451 retry.go:31] will retry after 539.191177ms: waiting for machine to come up
	I0617 12:01:10.914315  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 12:01:10.914353  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetMachineName
	I0617 12:01:10.914690  164809 buildroot.go:166] provisioning hostname "no-preload-152830"
	I0617 12:01:10.914716  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetMachineName
	I0617 12:01:10.914905  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:01:10.916557  164809 machine.go:97] duration metric: took 4m37.418351206s to provisionDockerMachine
	I0617 12:01:10.916625  164809 fix.go:56] duration metric: took 4m37.438694299s for fixHost
	I0617 12:01:10.916634  164809 start.go:83] releasing machines lock for "no-preload-152830", held for 4m37.438726092s
	W0617 12:01:10.916653  164809 start.go:713] error starting host: provision: host is not running
	W0617 12:01:10.916750  164809 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0617 12:01:10.916763  164809 start.go:728] Will try again in 5 seconds ...
	I0617 12:01:13.735708  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:13.736155  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:13.736184  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:13.736096  166451 retry.go:31] will retry after 754.965394ms: waiting for machine to come up
	I0617 12:01:14.493211  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:14.493598  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:14.493628  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:14.493544  166451 retry.go:31] will retry after 786.125188ms: waiting for machine to come up
	I0617 12:01:15.281505  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:15.281975  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:15.282008  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:15.281939  166451 retry.go:31] will retry after 1.091514617s: waiting for machine to come up
	I0617 12:01:16.375391  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:16.375904  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:16.375935  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:16.375820  166451 retry.go:31] will retry after 1.34601641s: waiting for machine to come up
	I0617 12:01:17.724108  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:17.724453  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:17.724477  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:17.724418  166451 retry.go:31] will retry after 1.337616605s: waiting for machine to come up
	I0617 12:01:15.918256  164809 start.go:360] acquireMachinesLock for no-preload-152830: {Name:mk519b8956d160a9d2b042f25b899a5ee0efa72e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 12:01:19.063677  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:19.064210  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:19.064243  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:19.064144  166451 retry.go:31] will retry after 1.914267639s: waiting for machine to come up
	I0617 12:01:20.979644  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:20.980124  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:20.980150  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:20.980072  166451 retry.go:31] will retry after 2.343856865s: waiting for machine to come up
	I0617 12:01:23.326506  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:23.326878  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:23.326922  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:23.326861  166451 retry.go:31] will retry after 2.450231017s: waiting for machine to come up
	I0617 12:01:25.780501  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:25.780886  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:25.780913  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:25.780825  166451 retry.go:31] will retry after 3.591107926s: waiting for machine to come up
	I0617 12:01:30.728529  165698 start.go:364] duration metric: took 3m12.647041864s to acquireMachinesLock for "old-k8s-version-003661"
	I0617 12:01:30.728602  165698 start.go:96] Skipping create...Using existing machine configuration
	I0617 12:01:30.728613  165698 fix.go:54] fixHost starting: 
	I0617 12:01:30.729036  165698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:30.729090  165698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:30.746528  165698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35355
	I0617 12:01:30.746982  165698 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:30.747493  165698 main.go:141] libmachine: Using API Version  1
	I0617 12:01:30.747516  165698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:30.747847  165698 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:30.748060  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:30.748186  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetState
	I0617 12:01:30.750035  165698 fix.go:112] recreateIfNeeded on old-k8s-version-003661: state=Stopped err=<nil>
	I0617 12:01:30.750072  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	W0617 12:01:30.750206  165698 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 12:01:30.752196  165698 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-003661" ...
	I0617 12:01:29.375875  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.376372  165060 main.go:141] libmachine: (embed-certs-136195) Found IP for machine: 192.168.72.199
	I0617 12:01:29.376407  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has current primary IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.376430  165060 main.go:141] libmachine: (embed-certs-136195) Reserving static IP address...
	I0617 12:01:29.376754  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "embed-certs-136195", mac: "52:54:00:f2:27:84", ip: "192.168.72.199"} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.376788  165060 main.go:141] libmachine: (embed-certs-136195) Reserved static IP address: 192.168.72.199
	I0617 12:01:29.376800  165060 main.go:141] libmachine: (embed-certs-136195) DBG | skip adding static IP to network mk-embed-certs-136195 - found existing host DHCP lease matching {name: "embed-certs-136195", mac: "52:54:00:f2:27:84", ip: "192.168.72.199"}
	I0617 12:01:29.376811  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Getting to WaitForSSH function...
	I0617 12:01:29.376820  165060 main.go:141] libmachine: (embed-certs-136195) Waiting for SSH to be available...
	I0617 12:01:29.378811  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.379121  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.379151  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.379289  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Using SSH client type: external
	I0617 12:01:29.379321  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa (-rw-------)
	I0617 12:01:29.379354  165060 main.go:141] libmachine: (embed-certs-136195) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.199 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 12:01:29.379368  165060 main.go:141] libmachine: (embed-certs-136195) DBG | About to run SSH command:
	I0617 12:01:29.379381  165060 main.go:141] libmachine: (embed-certs-136195) DBG | exit 0
	I0617 12:01:29.503819  165060 main.go:141] libmachine: (embed-certs-136195) DBG | SSH cmd err, output: <nil>: 
	I0617 12:01:29.504207  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetConfigRaw
	I0617 12:01:29.504827  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetIP
	I0617 12:01:29.507277  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.507601  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.507635  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.507878  165060 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/config.json ...
	I0617 12:01:29.508102  165060 machine.go:94] provisionDockerMachine start ...
	I0617 12:01:29.508125  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:29.508333  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:29.510390  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.510636  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.510656  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.510761  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:29.510924  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.511082  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.511242  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:29.511404  165060 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:29.511665  165060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0617 12:01:29.511680  165060 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 12:01:29.611728  165060 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0617 12:01:29.611759  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetMachineName
	I0617 12:01:29.611996  165060 buildroot.go:166] provisioning hostname "embed-certs-136195"
	I0617 12:01:29.612025  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetMachineName
	I0617 12:01:29.612194  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:29.614719  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.615085  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.615110  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.615251  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:29.615425  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.615565  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.615685  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:29.615881  165060 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:29.616066  165060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0617 12:01:29.616084  165060 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-136195 && echo "embed-certs-136195" | sudo tee /etc/hostname
	I0617 12:01:29.729321  165060 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-136195
	
	I0617 12:01:29.729347  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:29.731968  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.732314  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.732352  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.732582  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:29.732820  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.733001  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.733157  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:29.733312  165060 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:29.733471  165060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0617 12:01:29.733487  165060 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-136195' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-136195/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-136195' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 12:01:29.840083  165060 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 12:01:29.840110  165060 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 12:01:29.840145  165060 buildroot.go:174] setting up certificates
	I0617 12:01:29.840180  165060 provision.go:84] configureAuth start
	I0617 12:01:29.840199  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetMachineName
	I0617 12:01:29.840488  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetIP
	I0617 12:01:29.843096  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.843446  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.843487  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.843687  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:29.845627  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.845914  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.845940  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.846021  165060 provision.go:143] copyHostCerts
	I0617 12:01:29.846096  165060 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 12:01:29.846106  165060 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 12:01:29.846171  165060 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 12:01:29.846267  165060 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 12:01:29.846275  165060 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 12:01:29.846298  165060 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 12:01:29.846359  165060 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 12:01:29.846366  165060 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 12:01:29.846387  165060 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 12:01:29.846456  165060 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.embed-certs-136195 san=[127.0.0.1 192.168.72.199 embed-certs-136195 localhost minikube]
	I0617 12:01:30.076596  165060 provision.go:177] copyRemoteCerts
	I0617 12:01:30.076657  165060 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 12:01:30.076686  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.079269  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.079565  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.079588  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.079785  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.080016  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.080189  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.080316  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:30.161615  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 12:01:30.188790  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0617 12:01:30.215171  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0617 12:01:30.241310  165060 provision.go:87] duration metric: took 401.115469ms to configureAuth
	I0617 12:01:30.241332  165060 buildroot.go:189] setting minikube options for container-runtime
	I0617 12:01:30.241529  165060 config.go:182] Loaded profile config "embed-certs-136195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:01:30.241602  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.244123  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.244427  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.244459  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.244584  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.244793  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.244999  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.245174  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.245340  165060 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:30.245497  165060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0617 12:01:30.245512  165060 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 12:01:30.498156  165060 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 12:01:30.498189  165060 machine.go:97] duration metric: took 990.071076ms to provisionDockerMachine
	I0617 12:01:30.498201  165060 start.go:293] postStartSetup for "embed-certs-136195" (driver="kvm2")
	I0617 12:01:30.498214  165060 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 12:01:30.498238  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:30.498580  165060 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 12:01:30.498605  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.501527  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.501912  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.501941  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.502054  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.502257  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.502423  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.502578  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:30.583151  165060 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 12:01:30.587698  165060 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 12:01:30.587722  165060 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 12:01:30.587819  165060 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 12:01:30.587940  165060 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 12:01:30.588078  165060 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 12:01:30.598234  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:01:30.622580  165060 start.go:296] duration metric: took 124.363651ms for postStartSetup
	I0617 12:01:30.622621  165060 fix.go:56] duration metric: took 19.705796191s for fixHost
	I0617 12:01:30.622645  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.625226  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.625637  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.625684  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.625821  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.626040  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.626229  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.626418  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.626613  165060 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:30.626839  165060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0617 12:01:30.626862  165060 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0617 12:01:30.728365  165060 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718625690.704643527
	
	I0617 12:01:30.728389  165060 fix.go:216] guest clock: 1718625690.704643527
	I0617 12:01:30.728396  165060 fix.go:229] Guest: 2024-06-17 12:01:30.704643527 +0000 UTC Remote: 2024-06-17 12:01:30.622625631 +0000 UTC m=+287.310804086 (delta=82.017896ms)
	I0617 12:01:30.728416  165060 fix.go:200] guest clock delta is within tolerance: 82.017896ms
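The fix.go lines above read the guest clock with `date +%s.%N`, compare it against the host's time, and accept the drift when it stays under a tolerance (about 82ms here). A small sketch of that comparison; the tolerance value and function layout are illustrative, not minikube's actual API:

// Hypothetical guest-clock tolerance check: compare a clock value read from the
// guest with the host's local time and report whether a resync would be needed.
package main

import (
	"fmt"
	"time"
)

func main() {
	const tolerance = time.Second // assumed threshold; the log shows ~82ms passing

	// Parsed from `date +%s.%N` run on the guest (values taken from the log).
	guest := time.Unix(1718625690, 704643527)
	host := time.Now()

	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		return
	}
	fmt.Printf("guest clock delta %v exceeds tolerance; a resync would be triggered here\n", delta)
}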
	I0617 12:01:30.728421  165060 start.go:83] releasing machines lock for "embed-certs-136195", held for 19.811634749s
	I0617 12:01:30.728445  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:30.728763  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetIP
	I0617 12:01:30.731414  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.731784  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.731816  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.731937  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:30.732504  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:30.732704  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:30.732761  165060 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 12:01:30.732826  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.732964  165060 ssh_runner.go:195] Run: cat /version.json
	I0617 12:01:30.732991  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.735854  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.736049  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.736278  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.736310  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.736334  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.736397  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.736579  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.736653  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.736777  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.736959  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.736972  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.737131  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:30.737188  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.737356  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:30.844295  165060 ssh_runner.go:195] Run: systemctl --version
	I0617 12:01:30.851958  165060 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 12:01:31.000226  165060 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 12:01:31.008322  165060 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 12:01:31.008397  165060 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 12:01:31.029520  165060 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 12:01:31.029547  165060 start.go:494] detecting cgroup driver to use...
	I0617 12:01:31.029617  165060 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 12:01:31.045505  165060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 12:01:31.059851  165060 docker.go:217] disabling cri-docker service (if available) ...
	I0617 12:01:31.059920  165060 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 12:01:31.075011  165060 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 12:01:31.089705  165060 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 12:01:31.204300  165060 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 12:01:31.342204  165060 docker.go:233] disabling docker service ...
	I0617 12:01:31.342290  165060 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 12:01:31.356945  165060 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 12:01:31.369786  165060 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 12:01:31.505817  165060 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 12:01:31.631347  165060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
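Disabling docker in the log is a best-effort sequence of systemctl stop/disable/mask calls against the cri-docker and docker units, so that cri-o ends up owning the CRI socket. A sketch of the same sequence, run locally instead of through minikube's ssh_runner:

// Best-effort sketch: stop and mask docker/cri-docker units. Errors for units
// that do not exist on the node are deliberately ignored.
package main

import "os/exec"

func main() {
	cmds := [][]string{
		{"systemctl", "stop", "-f", "cri-docker.socket"},
		{"systemctl", "stop", "-f", "cri-docker.service"},
		{"systemctl", "disable", "cri-docker.socket"},
		{"systemctl", "mask", "cri-docker.service"},
		{"systemctl", "stop", "-f", "docker.socket"},
		{"systemctl", "stop", "-f", "docker.service"},
		{"systemctl", "disable", "docker.socket"},
		{"systemctl", "mask", "docker.service"},
	}
	for _, c := range cmds {
		_ = exec.Command("sudo", c...).Run() // best effort; missing units are fine
	}
}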
	I0617 12:01:31.646048  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 12:01:31.664854  165060 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0617 12:01:31.664923  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.677595  165060 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 12:01:31.677678  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.690164  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.701482  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.712488  165060 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 12:01:31.723994  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.736805  165060 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.755001  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
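The cri-o reconfiguration above is done with in-place sed edits of /etc/crio/crio.conf.d/02-crio.conf: pause image, cgroup manager, and conmon cgroup. A sketch that replays the core edits locally; in the real run they are executed over SSH via ssh_runner:

// Sketch: apply the same idempotent sed edits to the cri-o drop-in config.
package main

import (
	"log"
	"os/exec"
)

func main() {
	cmds := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
	}
	for _, c := range cmds {
		// Each edit rewrites the same key to a fixed value, so re-running is harmless.
		if out, err := exec.Command("/bin/sh", "-c", c).CombinedOutput(); err != nil {
			log.Fatalf("%s: %v\n%s", c, err, out)
		}
	}
}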
	I0617 12:01:31.767226  165060 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 12:01:31.777894  165060 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 12:01:31.777954  165060 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 12:01:31.792644  165060 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
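The failed sysctl above is treated as a hint that br_netfilter is simply not loaded yet; the follow-up is to modprobe it and enable IPv4 forwarding. A sketch of that recovery path (requires root):

// Sketch: if the bridge netfilter sysctl is missing, load br_netfilter, then
// enable IPv4 forwarding the same way the log does with `echo 1 > ...`.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// Missing /proc/sys/net/bridge/* usually just means the module is absent.
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v", err)
		}
	}
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		log.Fatalf("enable ip_forward: %v", err)
	}
}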
	I0617 12:01:31.803267  165060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:01:31.920107  165060 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0617 12:01:32.067833  165060 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 12:01:32.067904  165060 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 12:01:32.072818  165060 start.go:562] Will wait 60s for crictl version
	I0617 12:01:32.072881  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:01:32.076782  165060 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 12:01:32.116635  165060 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 12:01:32.116709  165060 ssh_runner.go:195] Run: crio --version
	I0617 12:01:32.148094  165060 ssh_runner.go:195] Run: crio --version
	I0617 12:01:32.176924  165060 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0617 12:01:30.753437  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .Start
	I0617 12:01:30.753608  165698 main.go:141] libmachine: (old-k8s-version-003661) Ensuring networks are active...
	I0617 12:01:30.754272  165698 main.go:141] libmachine: (old-k8s-version-003661) Ensuring network default is active
	I0617 12:01:30.754600  165698 main.go:141] libmachine: (old-k8s-version-003661) Ensuring network mk-old-k8s-version-003661 is active
	I0617 12:01:30.754967  165698 main.go:141] libmachine: (old-k8s-version-003661) Getting domain xml...
	I0617 12:01:30.755739  165698 main.go:141] libmachine: (old-k8s-version-003661) Creating domain...
	I0617 12:01:32.029080  165698 main.go:141] libmachine: (old-k8s-version-003661) Waiting to get IP...
	I0617 12:01:32.029902  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:32.030401  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:32.030477  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:32.030384  166594 retry.go:31] will retry after 191.846663ms: waiting for machine to come up
	I0617 12:01:32.223912  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:32.224300  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:32.224328  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:32.224276  166594 retry.go:31] will retry after 341.806498ms: waiting for machine to come up
	I0617 12:01:32.568066  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:32.568648  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:32.568682  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:32.568575  166594 retry.go:31] will retry after 359.779948ms: waiting for machine to come up
	I0617 12:01:32.930210  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:32.930652  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:32.930675  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:32.930604  166594 retry.go:31] will retry after 548.549499ms: waiting for machine to come up
	I0617 12:01:32.178076  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetIP
	I0617 12:01:32.181127  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:32.181524  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:32.181553  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:32.181778  165060 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0617 12:01:32.186998  165060 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
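The host.minikube.internal mapping is refreshed with a small shell pipeline: filter out any existing line for the name, append a fresh tab-separated entry, and copy the temp file back over /etc/hosts. A sketch building the same command:

// Sketch: rebuild /etc/hosts with the host.minikube.internal entry from the log.
package main

import (
	"log"
	"os/exec"
)

func main() {
	ip, name := "192.168.72.1", "host.minikube.internal" // values from the log
	// Drop any stale line ending in <tab>name, append "ip<tab>name", then copy back.
	cmd := "{ grep -v $'\\t" + name + "$' /etc/hosts; echo \"" + ip + "\t" + name + "\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
		log.Fatalf("update /etc/hosts: %v\n%s", err, out)
	}
}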
	I0617 12:01:32.203033  165060 kubeadm.go:877] updating cluster {Name:embed-certs-136195 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-136195 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.199 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 12:01:32.203142  165060 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 12:01:32.203183  165060 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:01:32.245712  165060 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0617 12:01:32.245796  165060 ssh_runner.go:195] Run: which lz4
	I0617 12:01:32.250113  165060 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0617 12:01:32.254486  165060 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0617 12:01:32.254511  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0617 12:01:33.480493  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:33.480965  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:33.481004  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:33.480931  166594 retry.go:31] will retry after 636.044066ms: waiting for machine to come up
	I0617 12:01:34.118880  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:34.119361  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:34.119394  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:34.119299  166594 retry.go:31] will retry after 637.085777ms: waiting for machine to come up
	I0617 12:01:34.757614  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:34.758097  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:34.758126  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:34.758051  166594 retry.go:31] will retry after 921.652093ms: waiting for machine to come up
	I0617 12:01:35.681846  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:35.682324  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:35.682351  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:35.682269  166594 retry.go:31] will retry after 1.1106801s: waiting for machine to come up
	I0617 12:01:36.794411  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:36.794845  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:36.794869  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:36.794793  166594 retry.go:31] will retry after 1.323395845s: waiting for machine to come up
	I0617 12:01:33.776867  165060 crio.go:462] duration metric: took 1.526763522s to copy over tarball
	I0617 12:01:33.776955  165060 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0617 12:01:35.994216  165060 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.217222149s)
	I0617 12:01:35.994246  165060 crio.go:469] duration metric: took 2.217348025s to extract the tarball
	I0617 12:01:35.994255  165060 ssh_runner.go:146] rm: /preloaded.tar.lz4
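The preload path above copies preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 to the node as /preloaded.tar.lz4, unpacks it into /var with lz4 (preserving security.capability xattrs), and removes the tarball. A sketch of the unpack step, assuming the tarball is already in place and the lz4 binary is installed:

// Sketch: extract the preloaded image tarball into /var, then delete it.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		log.Fatalf("%s missing; in the real flow it would be scp'd over first: %v", tarball, err)
	}
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract preload: %v\n%s", err, out)
	}
	// The tarball is removed afterwards so it does not waste disk on the node.
	_ = os.Remove(tarball)
}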
	I0617 12:01:36.034978  165060 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:01:36.087255  165060 crio.go:514] all images are preloaded for cri-o runtime.
	I0617 12:01:36.087281  165060 cache_images.go:84] Images are preloaded, skipping loading
	I0617 12:01:36.087291  165060 kubeadm.go:928] updating node { 192.168.72.199 8443 v1.30.1 crio true true} ...
	I0617 12:01:36.087447  165060 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-136195 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.199
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:embed-certs-136195 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 12:01:36.087551  165060 ssh_runner.go:195] Run: crio config
	I0617 12:01:36.130409  165060 cni.go:84] Creating CNI manager for ""
	I0617 12:01:36.130433  165060 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:01:36.130449  165060 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 12:01:36.130479  165060 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.199 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-136195 NodeName:embed-certs-136195 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.199"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.199 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0617 12:01:36.130633  165060 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.199
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-136195"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.199
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.199"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0617 12:01:36.130724  165060 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0617 12:01:36.141027  165060 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 12:01:36.141110  165060 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 12:01:36.150748  165060 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0617 12:01:36.167282  165060 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 12:01:36.183594  165060 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0617 12:01:36.202494  165060 ssh_runner.go:195] Run: grep 192.168.72.199	control-plane.minikube.internal$ /etc/hosts
	I0617 12:01:36.206515  165060 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.199	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:01:36.218598  165060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:01:36.344280  165060 ssh_runner.go:195] Run: sudo systemctl start kubelet
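The 10-kubeadm.conf scp'd above is a systemd drop-in that overrides kubelet's ExecStart with the minikube-managed binary and flags, after which systemd is reloaded and kubelet is started. A sketch with an abbreviated ExecStart line (the real drop-in carries the full flag set shown earlier in the log):

// Sketch: write the kubelet drop-in, reload systemd, start kubelet. Run as root.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	dropIn := `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-136195 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.199
`
	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0755); err != nil {
		log.Fatalf("mkdir drop-in dir: %v", err)
	}
	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0644); err != nil {
		log.Fatalf("write drop-in: %v", err)
	}
	for _, args := range [][]string{{"daemon-reload"}, {"start", "kubelet"}} {
		if err := exec.Command("systemctl", args...).Run(); err != nil {
			log.Fatalf("systemctl %v: %v", args, err)
		}
	}
}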
	I0617 12:01:36.361127  165060 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195 for IP: 192.168.72.199
	I0617 12:01:36.361152  165060 certs.go:194] generating shared ca certs ...
	I0617 12:01:36.361172  165060 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:01:36.361370  165060 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 12:01:36.361425  165060 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 12:01:36.361438  165060 certs.go:256] generating profile certs ...
	I0617 12:01:36.361557  165060 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/client.key
	I0617 12:01:36.361648  165060 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/apiserver.key.f7068429
	I0617 12:01:36.361696  165060 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/proxy-client.key
	I0617 12:01:36.361863  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 12:01:36.361913  165060 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 12:01:36.361925  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 12:01:36.361951  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 12:01:36.361984  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 12:01:36.362005  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 12:01:36.362041  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:01:36.362770  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 12:01:36.397257  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 12:01:36.422523  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 12:01:36.451342  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 12:01:36.485234  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0617 12:01:36.514351  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 12:01:36.544125  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 12:01:36.567574  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0617 12:01:36.590417  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 12:01:36.613174  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 12:01:36.636187  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 12:01:36.659365  165060 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 12:01:36.675981  165060 ssh_runner.go:195] Run: openssl version
	I0617 12:01:36.681694  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 12:01:36.692324  165060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 12:01:36.696871  165060 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 12:01:36.696938  165060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 12:01:36.702794  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
	I0617 12:01:36.713372  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 12:01:36.724054  165060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 12:01:36.728505  165060 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 12:01:36.728566  165060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 12:01:36.734082  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 12:01:36.744542  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 12:01:36.755445  165060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:36.759880  165060 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:36.759922  165060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:36.765367  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
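Each CA certificate above is made trusted by computing its OpenSSL subject hash and linking the PEM as /etc/ssl/certs/<hash>.0 (b5213941.0 for minikubeCA in this run). A sketch of that hash-and-link step (run as root):

// Sketch: compute the subject hash with openssl and create the hash symlink.
package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatalf("hash %s: %v", cert, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in the log

	link := "/etc/ssl/certs/" + hash + ".0"
	if _, err := os.Lstat(link); err == nil {
		return // already linked
	}
	if err := os.Symlink(cert, link); err != nil {
		log.Fatalf("link %s -> %s: %v", link, cert, err)
	}
}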
	I0617 12:01:36.776234  165060 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 12:01:36.780822  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 12:01:36.786895  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 12:01:36.793358  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 12:01:36.800187  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 12:01:36.806591  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 12:01:36.812681  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
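The -checkend 86400 calls above verify that each control-plane certificate remains valid for at least another 24 hours before it is reused. The same check expressed in pure Go, as a sketch over a subset of the paths from the log:

// Sketch: parse each PEM certificate and flag any that expire within 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	deadline := time.Now().Add(24 * time.Hour)
	for _, path := range certs {
		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatalf("read %s: %v", path, err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatalf("%s: no PEM block", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatalf("parse %s: %v", path, err)
		}
		if cert.NotAfter.Before(deadline) {
			fmt.Printf("%s expires within 24h (NotAfter=%s); it would be regenerated\n", path, cert.NotAfter)
		}
	}
}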
	I0617 12:01:36.818814  165060 kubeadm.go:391] StartCluster: {Name:embed-certs-136195 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-136195 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.199 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:01:36.818903  165060 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 12:01:36.818945  165060 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:01:36.861839  165060 cri.go:89] found id: ""
	I0617 12:01:36.861920  165060 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0617 12:01:36.873500  165060 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0617 12:01:36.873529  165060 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0617 12:01:36.873551  165060 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0617 12:01:36.873602  165060 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0617 12:01:36.884767  165060 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0617 12:01:36.886013  165060 kubeconfig.go:125] found "embed-certs-136195" server: "https://192.168.72.199:8443"
	I0617 12:01:36.888144  165060 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0617 12:01:36.899204  165060 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.199
	I0617 12:01:36.899248  165060 kubeadm.go:1154] stopping kube-system containers ...
	I0617 12:01:36.899263  165060 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0617 12:01:36.899325  165060 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:01:36.941699  165060 cri.go:89] found id: ""
	I0617 12:01:36.941782  165060 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0617 12:01:36.960397  165060 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:01:36.971254  165060 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:01:36.971276  165060 kubeadm.go:156] found existing configuration files:
	
	I0617 12:01:36.971333  165060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:01:36.981367  165060 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:01:36.981448  165060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:01:36.991878  165060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:01:37.001741  165060 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:01:37.001816  165060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:01:37.012170  165060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:01:37.021914  165060 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:01:37.021979  165060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:01:37.031866  165060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:01:37.041657  165060 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:01:37.041706  165060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 12:01:37.051440  165060 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:01:37.062543  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:37.175190  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:37.872053  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:38.085732  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:38.146895  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:38.208633  165060 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:01:38.208898  165060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:01:38.119805  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:38.297858  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:38.297905  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:38.120293  166594 retry.go:31] will retry after 1.769592858s: waiting for machine to come up
	I0617 12:01:39.892495  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:39.893035  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:39.893065  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:39.892948  166594 retry.go:31] will retry after 1.954570801s: waiting for machine to come up
	I0617 12:01:41.849587  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:41.850111  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:41.850140  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:41.850067  166594 retry.go:31] will retry after 3.44879626s: waiting for machine to come up
	I0617 12:01:38.708936  165060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:01:39.209014  165060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:01:39.709765  165060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:01:39.728309  165060 api_server.go:72] duration metric: took 1.519672652s to wait for apiserver process to appear ...
	I0617 12:01:39.728342  165060 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:01:39.728369  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:01:42.756054  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:01:42.756089  165060 api_server.go:103] status: https://192.168.72.199:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:01:42.756105  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:01:42.797646  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:01:42.797689  165060 api_server.go:103] status: https://192.168.72.199:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:01:43.229201  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:01:43.233440  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:01:43.233467  165060 api_server.go:103] status: https://192.168.72.199:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:01:43.728490  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:01:43.741000  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:01:43.741037  165060 api_server.go:103] status: https://192.168.72.199:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:01:44.228634  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:01:44.232839  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 200:
	ok
	I0617 12:01:44.238582  165060 api_server.go:141] control plane version: v1.30.1
	I0617 12:01:44.238606  165060 api_server.go:131] duration metric: took 4.510256755s to wait for apiserver health ...
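The healthz wait above tolerates the early 403 (anonymous user) and 500 (rbac/bootstrap-roles still failing) responses and keeps polling until the endpoint returns 200. A sketch of such a polling loop; TLS verification is skipped here only to keep the example short, whereas the real client presents the cluster certificates:

// Sketch: poll the apiserver /healthz endpoint until it reports healthy.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.72.199:8443/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}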
	I0617 12:01:44.238615  165060 cni.go:84] Creating CNI manager for ""
	I0617 12:01:44.238622  165060 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:01:44.240569  165060 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0617 12:01:44.241963  165060 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0617 12:01:44.253143  165060 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
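The 496-byte 1-k8s.conflist written above is not shown in the log. As a rough illustration only, a bridge CNI configuration for this cluster's 10.244.0.0/16 pod CIDR generally has the shape written by the sketch below; the exact fields minikube uses may differ:

// Sketch: write an illustrative bridge + host-local CNI conflist (run as root).
package main

import (
	"log"
	"os"
)

func main() {
	conflist := `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`
	if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
		log.Fatalf("mkdir: %v", err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		log.Fatalf("write conflist: %v", err)
	}
}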
	I0617 12:01:44.286772  165060 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:01:44.295697  165060 system_pods.go:59] 8 kube-system pods found
	I0617 12:01:44.295736  165060 system_pods.go:61] "coredns-7db6d8ff4d-9bbjg" [1ba0eee5-436e-4c83-b5ce-3c907d66b641] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0617 12:01:44.295744  165060 system_pods.go:61] "etcd-embed-certs-136195" [6dc81a80-c56b-4517-af82-c450cf9578f5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0617 12:01:44.295757  165060 system_pods.go:61] "kube-apiserver-embed-certs-136195" [bd61a715-2471-4dca-aa48-a157531ebd6b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0617 12:01:44.295763  165060 system_pods.go:61] "kube-controller-manager-embed-certs-136195" [194db4b0-75c2-4905-8e4d-813185497b51] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0617 12:01:44.295768  165060 system_pods.go:61] "kube-proxy-25d5n" [52b6d09a-899f-40c4-b1f3-7842ae755165] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0617 12:01:44.295774  165060 system_pods.go:61] "kube-scheduler-embed-certs-136195" [b04d3798-f465-4f82-9ec7-777ea62d5b94] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0617 12:01:44.295782  165060 system_pods.go:61] "metrics-server-569cc877fc-dmhfs" [31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:01:44.295788  165060 system_pods.go:61] "storage-provisioner" [4b04a38a-5006-4496-a24d-0940029193de] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0617 12:01:44.295797  165060 system_pods.go:74] duration metric: took 9.004741ms to wait for pod list to return data ...
	I0617 12:01:44.295811  165060 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:01:44.298934  165060 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:01:44.298968  165060 node_conditions.go:123] node cpu capacity is 2
	I0617 12:01:44.298989  165060 node_conditions.go:105] duration metric: took 3.172465ms to run NodePressure ...
	I0617 12:01:44.299027  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:44.565943  165060 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0617 12:01:44.570796  165060 kubeadm.go:733] kubelet initialised
	I0617 12:01:44.570825  165060 kubeadm.go:734] duration metric: took 4.851024ms waiting for restarted kubelet to initialise ...
	I0617 12:01:44.570836  165060 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:01:44.575565  165060 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:44.582180  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.582209  165060 pod_ready.go:81] duration metric: took 6.620747ms for pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:44.582221  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.582231  165060 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:44.586828  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "etcd-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.586850  165060 pod_ready.go:81] duration metric: took 4.61059ms for pod "etcd-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:44.586859  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "etcd-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.586866  165060 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:44.591162  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.591189  165060 pod_ready.go:81] duration metric: took 4.316651ms for pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:44.591197  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.591204  165060 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:44.690269  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.690301  165060 pod_ready.go:81] duration metric: took 99.088803ms for pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:44.690310  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.690317  165060 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-25d5n" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:45.089616  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "kube-proxy-25d5n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.089640  165060 pod_ready.go:81] duration metric: took 399.31511ms for pod "kube-proxy-25d5n" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:45.089649  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "kube-proxy-25d5n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.089656  165060 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:45.491031  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.491058  165060 pod_ready.go:81] duration metric: took 401.395966ms for pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:45.491068  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.491074  165060 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:45.890606  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.890633  165060 pod_ready.go:81] duration metric: took 399.550946ms for pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:45.890644  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.890650  165060 pod_ready.go:38] duration metric: took 1.319802914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:01:45.890669  165060 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0617 12:01:45.903900  165060 ops.go:34] apiserver oom_adj: -16
	I0617 12:01:45.903936  165060 kubeadm.go:591] duration metric: took 9.03037731s to restartPrimaryControlPlane
	I0617 12:01:45.903950  165060 kubeadm.go:393] duration metric: took 9.085142288s to StartCluster
	I0617 12:01:45.903974  165060 settings.go:142] acquiring lock: {Name:mkf6da6d5dcdf32cef469c2b75da17d11fa1e39e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:01:45.904063  165060 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 12:01:45.905636  165060 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/kubeconfig: {Name:mkf81bd1831c0194f784e5c176b265c5061bea5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:01:45.905908  165060 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.199 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 12:01:45.907817  165060 out.go:177] * Verifying Kubernetes components...
	I0617 12:01:45.905981  165060 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0617 12:01:45.907852  165060 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-136195"
	I0617 12:01:45.907880  165060 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-136195"
	W0617 12:01:45.907890  165060 addons.go:243] addon storage-provisioner should already be in state true
	I0617 12:01:45.907903  165060 addons.go:69] Setting default-storageclass=true in profile "embed-certs-136195"
	I0617 12:01:45.906085  165060 config.go:182] Loaded profile config "embed-certs-136195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:01:45.909296  165060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:01:45.907923  165060 host.go:66] Checking if "embed-certs-136195" exists ...
	I0617 12:01:45.907924  165060 addons.go:69] Setting metrics-server=true in profile "embed-certs-136195"
	I0617 12:01:45.909472  165060 addons.go:234] Setting addon metrics-server=true in "embed-certs-136195"
	W0617 12:01:45.909481  165060 addons.go:243] addon metrics-server should already be in state true
	I0617 12:01:45.909506  165060 host.go:66] Checking if "embed-certs-136195" exists ...
	I0617 12:01:45.907954  165060 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-136195"
	I0617 12:01:45.909776  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.909822  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.909836  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.909861  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.909841  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.909928  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.925250  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36545
	I0617 12:01:45.925500  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38767
	I0617 12:01:45.925708  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.925929  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.926262  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.926282  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.926420  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.926445  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.926637  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.926728  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.927142  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.927171  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.927206  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.927236  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.929198  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33863
	I0617 12:01:45.929658  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.930137  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.930159  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.930465  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.930661  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetState
	I0617 12:01:45.934085  165060 addons.go:234] Setting addon default-storageclass=true in "embed-certs-136195"
	W0617 12:01:45.934107  165060 addons.go:243] addon default-storageclass should already be in state true
	I0617 12:01:45.934139  165060 host.go:66] Checking if "embed-certs-136195" exists ...
	I0617 12:01:45.934534  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.934579  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.944472  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44051
	I0617 12:01:45.945034  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.945712  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.945741  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.946105  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.946343  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetState
	I0617 12:01:45.946673  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43225
	I0617 12:01:45.947007  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.947706  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.947725  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.948027  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.948228  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetState
	I0617 12:01:45.948359  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:45.950451  165060 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0617 12:01:45.951705  165060 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0617 12:01:45.951719  165060 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0617 12:01:45.951735  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:45.949626  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:45.951588  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43695
	I0617 12:01:45.953222  165060 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:01:45.954471  165060 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:01:45.952290  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.954494  165060 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0617 12:01:45.954514  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:45.955079  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.955098  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.955123  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.955478  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.955718  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:45.955757  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.955924  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:45.956099  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.956106  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:45.956147  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.956374  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:45.956507  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:45.957756  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.958184  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:45.958206  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.958335  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:45.958505  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:45.958680  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:45.958825  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:45.977247  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39751
	I0617 12:01:45.977663  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.978179  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.978203  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.978524  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.978711  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetState
	I0617 12:01:45.980425  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:45.980601  165060 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0617 12:01:45.980616  165060 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0617 12:01:45.980630  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:45.983633  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.984088  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:45.984105  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.984258  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:45.984377  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:45.984505  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:45.984661  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:46.093292  165060 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:01:46.112779  165060 node_ready.go:35] waiting up to 6m0s for node "embed-certs-136195" to be "Ready" ...
	I0617 12:01:46.182239  165060 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0617 12:01:46.248534  165060 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:01:46.286637  165060 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0617 12:01:46.286662  165060 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0617 12:01:46.313951  165060 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0617 12:01:46.313981  165060 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0617 12:01:46.337155  165060 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:01:46.337186  165060 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0617 12:01:46.389025  165060 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:01:46.548086  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:46.548106  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:46.548442  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:46.548461  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:46.548471  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:46.548481  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:46.548485  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:46.548727  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:46.548744  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:46.548764  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:46.554199  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:46.554218  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:46.554454  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:46.554469  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:46.554480  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:47.142290  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:47.142321  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:47.142629  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:47.142658  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:47.142671  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:47.142676  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:47.142692  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:47.142943  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:47.142971  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:47.142985  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:47.216339  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:47.216366  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:47.216658  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:47.216679  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:47.216690  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:47.216700  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:47.216709  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:47.216931  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:47.216967  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:47.216982  165060 addons.go:475] Verifying addon metrics-server=true in "embed-certs-136195"
	I0617 12:01:47.219627  165060 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
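The addons enabled here include metrics-server, whose registration the later addon checks depend on. That registration can be verified directly against the restarted cluster; these are illustrative commands using the standard resource names the metrics-server manifests create, not output captured in this log:

	# deployment created by metrics-server-deployment.yaml
	kubectl --context embed-certs-136195 -n kube-system get deploy metrics-server
	# aggregated API registration created by metrics-apiservice.yaml
	kubectl --context embed-certs-136195 get apiservice v1beta1.metrics.k8s.io
	# only succeeds once the APIService reports Available=True
	kubectl --context embed-certs-136195 top nodes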
	I0617 12:01:45.300413  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:45.300848  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:45.300878  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:45.300794  166594 retry.go:31] will retry after 3.892148485s: waiting for machine to come up
	I0617 12:01:47.220905  165060 addons.go:510] duration metric: took 1.314925386s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0617 12:01:48.116197  165060 node_ready.go:53] node "embed-certs-136195" has status "Ready":"False"
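The node_ready poll keeps returning here because the kubelet on embed-certs-136195 still reports the Ready condition as False. The condition the poller inspects can be read with kubectl (an illustrative sketch; the jsonpath simply extracts the Ready condition status):

	kubectl --context embed-certs-136195 get node embed-certs-136195 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
	# or the human-readable view
	kubectl --context embed-certs-136195 describe node embed-certs-136195 | grep -A8 'Conditions:'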
	I0617 12:01:50.500448  166103 start.go:364] duration metric: took 2m12.970832528s to acquireMachinesLock for "default-k8s-diff-port-991309"
	I0617 12:01:50.500511  166103 start.go:96] Skipping create...Using existing machine configuration
	I0617 12:01:50.500534  166103 fix.go:54] fixHost starting: 
	I0617 12:01:50.500980  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:50.501018  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:50.517593  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43641
	I0617 12:01:50.518035  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:50.518600  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:01:50.518635  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:50.519051  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:50.519296  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:01:50.519502  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetState
	I0617 12:01:50.521095  166103 fix.go:112] recreateIfNeeded on default-k8s-diff-port-991309: state=Stopped err=<nil>
	I0617 12:01:50.521123  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	W0617 12:01:50.521307  166103 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 12:01:50.522795  166103 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-991309" ...
	I0617 12:01:49.197189  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.197671  165698 main.go:141] libmachine: (old-k8s-version-003661) Found IP for machine: 192.168.61.164
	I0617 12:01:49.197697  165698 main.go:141] libmachine: (old-k8s-version-003661) Reserving static IP address...
	I0617 12:01:49.197714  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has current primary IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.198147  165698 main.go:141] libmachine: (old-k8s-version-003661) Reserved static IP address: 192.168.61.164
	I0617 12:01:49.198175  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "old-k8s-version-003661", mac: "52:54:00:76:66:a0", ip: "192.168.61.164"} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.198185  165698 main.go:141] libmachine: (old-k8s-version-003661) Waiting for SSH to be available...
	I0617 12:01:49.198217  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | skip adding static IP to network mk-old-k8s-version-003661 - found existing host DHCP lease matching {name: "old-k8s-version-003661", mac: "52:54:00:76:66:a0", ip: "192.168.61.164"}
	I0617 12:01:49.198227  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | Getting to WaitForSSH function...
	I0617 12:01:49.200478  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.200907  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.200935  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.201088  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | Using SSH client type: external
	I0617 12:01:49.201116  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa (-rw-------)
	I0617 12:01:49.201154  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 12:01:49.201169  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | About to run SSH command:
	I0617 12:01:49.201183  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | exit 0
	I0617 12:01:49.323763  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | SSH cmd err, output: <nil>: 
	I0617 12:01:49.324127  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetConfigRaw
	I0617 12:01:49.324835  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetIP
	I0617 12:01:49.327217  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.327628  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.327660  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.327891  165698 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/config.json ...
	I0617 12:01:49.328097  165698 machine.go:94] provisionDockerMachine start ...
	I0617 12:01:49.328120  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:49.328365  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:49.330587  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.330992  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.331033  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.331160  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:49.331324  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.331490  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.331637  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:49.331824  165698 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:49.332037  165698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 12:01:49.332049  165698 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 12:01:49.432170  165698 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0617 12:01:49.432201  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetMachineName
	I0617 12:01:49.432498  165698 buildroot.go:166] provisioning hostname "old-k8s-version-003661"
	I0617 12:01:49.432524  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetMachineName
	I0617 12:01:49.432730  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:49.435845  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.436276  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.436317  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.436507  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:49.436708  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.436909  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.437074  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:49.437289  165698 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:49.437496  165698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 12:01:49.437510  165698 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-003661 && echo "old-k8s-version-003661" | sudo tee /etc/hostname
	I0617 12:01:49.550158  165698 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-003661
	
	I0617 12:01:49.550187  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:49.553141  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.553509  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.553539  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.553737  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:49.553943  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.554141  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.554298  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:49.554520  165698 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:49.554759  165698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 12:01:49.554787  165698 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-003661' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-003661/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-003661' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 12:01:49.661049  165698 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 12:01:49.661079  165698 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 12:01:49.661106  165698 buildroot.go:174] setting up certificates
	I0617 12:01:49.661115  165698 provision.go:84] configureAuth start
	I0617 12:01:49.661124  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetMachineName
	I0617 12:01:49.661452  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetIP
	I0617 12:01:49.664166  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.664561  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.664591  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.664723  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:49.666845  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.667114  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.667158  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.667287  165698 provision.go:143] copyHostCerts
	I0617 12:01:49.667377  165698 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 12:01:49.667387  165698 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 12:01:49.667440  165698 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 12:01:49.667561  165698 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 12:01:49.667571  165698 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 12:01:49.667594  165698 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 12:01:49.667649  165698 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 12:01:49.667656  165698 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 12:01:49.667674  165698 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 12:01:49.667722  165698 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-003661 san=[127.0.0.1 192.168.61.164 localhost minikube old-k8s-version-003661]
	I0617 12:01:49.853671  165698 provision.go:177] copyRemoteCerts
	I0617 12:01:49.853736  165698 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 12:01:49.853767  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:49.856171  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.856540  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.856577  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.856737  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:49.857071  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.857220  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:49.857360  165698 sshutil.go:53] new ssh client: &{IP:192.168.61.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa Username:docker}
	I0617 12:01:49.938626  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 12:01:49.964401  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0617 12:01:49.988397  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0617 12:01:50.013356  165698 provision.go:87] duration metric: took 352.227211ms to configureAuth
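configureAuth regenerated the machine server certificate with the SAN list logged above (127.0.0.1, 192.168.61.164, localhost, minikube, old-k8s-version-003661) and pushed it to /etc/docker on the guest. Whether those SANs made it into the served certificate can be confirmed from a shell on the VM, e.g. via minikube ssh; an illustrative check, not a command from this log:

	# on the old-k8s-version-003661 guest
	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'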
	I0617 12:01:50.013382  165698 buildroot.go:189] setting minikube options for container-runtime
	I0617 12:01:50.013581  165698 config.go:182] Loaded profile config "old-k8s-version-003661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0617 12:01:50.013689  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:50.016168  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.016514  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.016548  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.016657  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:50.016847  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.017025  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.017152  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:50.017300  165698 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:50.017483  165698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 12:01:50.017505  165698 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 12:01:50.280037  165698 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 12:01:50.280065  165698 machine.go:97] duration metric: took 951.954687ms to provisionDockerMachine
	I0617 12:01:50.280076  165698 start.go:293] postStartSetup for "old-k8s-version-003661" (driver="kvm2")
	I0617 12:01:50.280086  165698 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 12:01:50.280102  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:50.280467  165698 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 12:01:50.280506  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:50.283318  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.283657  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.283684  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.283874  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:50.284106  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.284279  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:50.284402  165698 sshutil.go:53] new ssh client: &{IP:192.168.61.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa Username:docker}
	I0617 12:01:50.362452  165698 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 12:01:50.366699  165698 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 12:01:50.366726  165698 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 12:01:50.366788  165698 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 12:01:50.366878  165698 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 12:01:50.367004  165698 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 12:01:50.376706  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:01:50.399521  165698 start.go:296] duration metric: took 119.43167ms for postStartSetup
	I0617 12:01:50.399558  165698 fix.go:56] duration metric: took 19.670946478s for fixHost
	I0617 12:01:50.399578  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:50.402079  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.402465  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.402500  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.402649  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:50.402835  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.402994  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.403138  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:50.403321  165698 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:50.403529  165698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 12:01:50.403541  165698 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0617 12:01:50.500267  165698 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718625710.471154465
	
	I0617 12:01:50.500294  165698 fix.go:216] guest clock: 1718625710.471154465
	I0617 12:01:50.500304  165698 fix.go:229] Guest: 2024-06-17 12:01:50.471154465 +0000 UTC Remote: 2024-06-17 12:01:50.399561534 +0000 UTC m=+212.458541959 (delta=71.592931ms)
	I0617 12:01:50.500350  165698 fix.go:200] guest clock delta is within tolerance: 71.592931ms
	I0617 12:01:50.500355  165698 start.go:83] releasing machines lock for "old-k8s-version-003661", held for 19.771784344s
	I0617 12:01:50.500380  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:50.500648  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetIP
	I0617 12:01:50.503346  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.503749  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.503776  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.503974  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:50.504536  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:50.504676  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:50.504750  165698 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 12:01:50.504801  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:50.504861  165698 ssh_runner.go:195] Run: cat /version.json
	I0617 12:01:50.504890  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:50.507577  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.507736  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.508013  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.508041  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.508176  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.508200  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.508205  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:50.508335  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:50.508419  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.508499  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.508580  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:50.508691  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:50.508717  165698 sshutil.go:53] new ssh client: &{IP:192.168.61.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa Username:docker}
	I0617 12:01:50.508830  165698 sshutil.go:53] new ssh client: &{IP:192.168.61.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa Username:docker}
	I0617 12:01:50.585030  165698 ssh_runner.go:195] Run: systemctl --version
	I0617 12:01:50.612492  165698 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 12:01:50.765842  165698 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 12:01:50.773214  165698 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 12:01:50.773288  165698 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 12:01:50.793397  165698 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
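
The find command above locates bridge and podman CNI configs under /etc/cni/net.d and renames them with a .mk_disabled suffix so the runtime ignores them. A rough local sketch of the same idea, walking the directory instead of shelling out to find (directory and suffix copied from the logged command):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		// Match the same patterns as the logged find command (*bridge* or *podman*),
		// skipping files that were already disabled.
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Println("disabled", src)
	}
}
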
	I0617 12:01:50.793424  165698 start.go:494] detecting cgroup driver to use...
	I0617 12:01:50.793499  165698 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 12:01:50.811531  165698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 12:01:50.826223  165698 docker.go:217] disabling cri-docker service (if available) ...
	I0617 12:01:50.826289  165698 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 12:01:50.840517  165698 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 12:01:50.854788  165698 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 12:01:50.970328  165698 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 12:01:51.125815  165698 docker.go:233] disabling docker service ...
	I0617 12:01:51.125893  165698 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 12:01:51.146368  165698 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 12:01:51.161459  165698 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 12:01:51.346032  165698 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 12:01:51.503395  165698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 12:01:51.521021  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 12:01:51.543851  165698 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0617 12:01:51.543905  165698 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:51.556230  165698 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 12:01:51.556309  165698 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:51.573061  165698 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:51.588663  165698 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:51.601086  165698 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 12:01:51.617347  165698 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 12:01:51.634502  165698 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 12:01:51.634635  165698 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 12:01:51.652813  165698 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
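
The sysctl probe above fails with status 255 because /proc/sys/net/bridge does not exist yet, so the module is loaded first and IP forwarding is enabled afterwards. A hedged sketch of that fallback using plain exec calls (error handling simplified; not minikube's actual code path):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// Probe the netfilter sysctl; a failure usually means br_netfilter is not loaded.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Println("sysctl probe failed, loading br_netfilter:", err)
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			fmt.Fprintln(os.Stderr, "modprobe failed:", err)
		}
	}
	// Enable IPv4 forwarding, mirroring the logged `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		fmt.Fprintln(os.Stderr, "enabling ip_forward failed:", err)
	}
}
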
	I0617 12:01:51.665145  165698 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:01:51.826713  165698 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0617 12:01:51.981094  165698 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 12:01:51.981186  165698 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 12:01:51.986026  165698 start.go:562] Will wait 60s for crictl version
	I0617 12:01:51.986091  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:51.990253  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 12:01:52.032543  165698 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 12:01:52.032631  165698 ssh_runner.go:195] Run: crio --version
	I0617 12:01:52.063904  165698 ssh_runner.go:195] Run: crio --version
	I0617 12:01:52.097158  165698 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0617 12:01:50.524130  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Start
	I0617 12:01:50.524321  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Ensuring networks are active...
	I0617 12:01:50.524939  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Ensuring network default is active
	I0617 12:01:50.525300  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Ensuring network mk-default-k8s-diff-port-991309 is active
	I0617 12:01:50.527342  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Getting domain xml...
	I0617 12:01:50.528126  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Creating domain...
	I0617 12:01:51.864887  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting to get IP...
	I0617 12:01:51.865835  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:51.866246  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:51.866328  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:51.866228  166802 retry.go:31] will retry after 200.163407ms: waiting for machine to come up
	I0617 12:01:52.067708  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.068164  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.068193  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:52.068119  166802 retry.go:31] will retry after 364.503903ms: waiting for machine to come up
	I0617 12:01:52.098675  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetIP
	I0617 12:01:52.102187  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:52.102572  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:52.102603  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:52.102823  165698 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0617 12:01:52.107573  165698 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
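
The grep/echo pipeline above is an idempotent hosts update: drop any existing host.minikube.internal line, append the fresh mapping, and copy the result back over /etc/hosts. A small local sketch of the same transformation (it writes to a scratch file instead of /etc/hosts so it is safe to run; not the project's own helper):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost returns hosts content with exactly one entry for the given name.
func upsertHost(content, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(content, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale mapping, mirroring `grep -v`
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	in, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	out := upsertHost(string(in), "192.168.61.1", "host.minikube.internal")
	if err := os.WriteFile("/tmp/hosts.updated", []byte(out), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("wrote /tmp/hosts.updated")
}
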
	I0617 12:01:52.121312  165698 kubeadm.go:877] updating cluster {Name:old-k8s-version-003661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-003661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.164 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 12:01:52.121448  165698 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0617 12:01:52.121515  165698 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:01:52.181796  165698 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0617 12:01:52.181891  165698 ssh_runner.go:195] Run: which lz4
	I0617 12:01:52.186827  165698 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0617 12:01:52.191806  165698 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0617 12:01:52.191875  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
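
Before copying the ~470 MB preload tarball, the step above stats /preloaded.tar.lz4 and only transfers the cached archive when that stat fails. A simplified sketch of that decision, checked locally rather than over the SSH runner (paths copied from the log):

package main

import (
	"fmt"
	"os"
)

func main() {
	cached := "/home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"
	target := "/preloaded.tar.lz4"

	if _, err := os.Stat(target); err == nil {
		fmt.Println(target, "already present, skipping transfer")
		return
	}
	info, err := os.Stat(cached)
	if err != nil {
		fmt.Fprintln(os.Stderr, "no cached preload tarball:", err)
		return
	}
	// In the real flow this is an scp over the SSH runner; here we just report
	// what would be copied.
	fmt.Printf("would copy %s --> %s (%d bytes)\n", cached, target, info.Size())
}
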
	I0617 12:01:50.116573  165060 node_ready.go:53] node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:52.122162  165060 node_ready.go:53] node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:53.117556  165060 node_ready.go:49] node "embed-certs-136195" has status "Ready":"True"
	I0617 12:01:53.117589  165060 node_ready.go:38] duration metric: took 7.004769746s for node "embed-certs-136195" to be "Ready" ...
	I0617 12:01:53.117598  165060 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:01:53.125606  165060 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:53.131618  165060 pod_ready.go:92] pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:53.131643  165060 pod_ready.go:81] duration metric: took 6.000929ms for pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:53.131654  165060 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:52.434791  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.435584  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.435740  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:52.435665  166802 retry.go:31] will retry after 486.514518ms: waiting for machine to come up
	I0617 12:01:52.924190  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.924819  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.924845  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:52.924681  166802 retry.go:31] will retry after 520.971301ms: waiting for machine to come up
	I0617 12:01:53.447437  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:53.447965  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:53.447995  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:53.447919  166802 retry.go:31] will retry after 622.761044ms: waiting for machine to come up
	I0617 12:01:54.072700  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:54.073170  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:54.073202  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:54.073112  166802 retry.go:31] will retry after 671.940079ms: waiting for machine to come up
	I0617 12:01:54.746830  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:54.747342  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:54.747372  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:54.747310  166802 retry.go:31] will retry after 734.856022ms: waiting for machine to come up
	I0617 12:01:55.484571  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:55.485127  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:55.485157  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:55.485066  166802 retry.go:31] will retry after 1.198669701s: waiting for machine to come up
	I0617 12:01:56.685201  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:56.685468  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:56.685493  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:56.685440  166802 retry.go:31] will retry after 1.562509853s: waiting for machine to come up
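
The retry.go lines above poll the libvirt DHCP lease for the domain with a growing, jittered delay until an IP shows up. A minimal sketch of that pattern (the lookup function is a stand-in and the backoff parameters are assumptions):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address of domain")

// lookupIP is a stand-in for querying the libvirt DHCP lease for the domain.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errNoIP
	}
	return "192.168.39.42", nil // hypothetical address
}

func main() {
	delay := 200 * time.Millisecond
	for attempt := 0; attempt < 15; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("machine came up with IP", ip)
			return
		}
		// Grow the wait and add jitter, similar to the "will retry after ..." lines.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
	fmt.Println("gave up waiting for machine to come up")
}
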
	I0617 12:01:54.026903  165698 crio.go:462] duration metric: took 1.840117639s to copy over tarball
	I0617 12:01:54.027003  165698 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0617 12:01:57.049870  165698 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.022814584s)
	I0617 12:01:57.049904  165698 crio.go:469] duration metric: took 3.022967677s to extract the tarball
	I0617 12:01:57.049914  165698 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0617 12:01:57.094589  165698 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:01:57.133299  165698 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0617 12:01:57.133331  165698 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0617 12:01:57.133431  165698 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:01:57.133451  165698 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0617 12:01:57.133456  165698 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0617 12:01:57.133477  165698 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0617 12:01:57.133431  165698 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0617 12:01:57.133530  165698 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0617 12:01:57.133431  165698 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 12:01:57.133626  165698 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0617 12:01:57.135979  165698 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 12:01:57.135990  165698 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0617 12:01:57.135994  165698 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0617 12:01:57.135979  165698 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0617 12:01:57.135985  165698 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:01:57.135979  165698 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0617 12:01:57.136041  165698 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0617 12:01:57.136041  165698 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0617 12:01:57.289271  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0617 12:01:57.299061  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 12:01:57.322581  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0617 12:01:57.336462  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0617 12:01:57.337619  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0617 12:01:57.350335  165698 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0617 12:01:57.350395  165698 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0617 12:01:57.350448  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.357972  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0617 12:01:57.391517  165698 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0617 12:01:57.391563  165698 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 12:01:57.391640  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.419438  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0617 12:01:57.442111  165698 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0617 12:01:57.442154  165698 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0617 12:01:57.442200  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.450145  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:01:57.485873  165698 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0617 12:01:57.485922  165698 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0617 12:01:57.485942  165698 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0617 12:01:57.485957  165698 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0617 12:01:57.485996  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.486003  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.486053  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0617 12:01:57.490584  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 12:01:57.490669  165698 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0617 12:01:57.490714  165698 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0617 12:01:57.490755  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.551564  165698 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0617 12:01:57.551597  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0617 12:01:57.551619  165698 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0617 12:01:57.551662  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.660683  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0617 12:01:57.660732  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0617 12:01:57.660799  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0617 12:01:57.660856  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0617 12:01:57.660734  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0617 12:01:57.660903  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0617 12:01:57.660930  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0617 12:01:57.753965  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0617 12:01:57.753981  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0617 12:01:57.754069  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0617 12:01:57.754069  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0617 12:01:57.754146  165698 cache_images.go:92] duration metric: took 620.797178ms to LoadCachedImages
	W0617 12:01:57.754271  165698 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
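
Each image above is checked by asking the runtime for its ID and comparing it with the expected digest; on a miss the image is scheduled for removal and reload from the local cache, and the warning shows what happens when that cache file is absent. A hedged sketch of the per-image check, shelling out the same way the logged commands do (expected hash and cache path copied from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	image := "registry.k8s.io/kube-proxy:v1.20.0"
	wantID := "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc"
	cache := "/home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0"

	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	gotID := strings.TrimSpace(string(out))
	if err == nil && gotID == wantID {
		fmt.Println(image, "already present in the runtime")
		return
	}
	fmt.Printf("%q needs transfer: not present at hash %q\n", image, wantID)
	if _, statErr := os.Stat(cache); statErr != nil {
		// This is the situation behind the "Unable to load cached images" warning.
		fmt.Fprintln(os.Stderr, "X Unable to load cached images:", statErr)
		return
	}
	fmt.Println("would load", cache, "into the runtime")
}
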
	I0617 12:01:57.754292  165698 kubeadm.go:928] updating node { 192.168.61.164 8443 v1.20.0 crio true true} ...
	I0617 12:01:57.754415  165698 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-003661 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-003661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 12:01:57.754489  165698 ssh_runner.go:195] Run: crio config
	I0617 12:01:57.807120  165698 cni.go:84] Creating CNI manager for ""
	I0617 12:01:57.807144  165698 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:01:57.807158  165698 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 12:01:57.807182  165698 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.164 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-003661 NodeName:old-k8s-version-003661 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0617 12:01:57.807370  165698 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.164
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-003661"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.164
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.164"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0617 12:01:57.807437  165698 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0617 12:01:57.817865  165698 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 12:01:57.817940  165698 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 12:01:57.829796  165698 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0617 12:01:57.847758  165698 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 12:01:57.866182  165698 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0617 12:01:57.884500  165698 ssh_runner.go:195] Run: grep 192.168.61.164	control-plane.minikube.internal$ /etc/hosts
	I0617 12:01:57.888852  165698 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.164	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:01:57.902176  165698 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:01:55.138418  165060 pod_ready.go:102] pod "etcd-embed-certs-136195" in "kube-system" namespace has status "Ready":"False"
	I0617 12:01:55.641014  165060 pod_ready.go:92] pod "etcd-embed-certs-136195" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:55.641047  165060 pod_ready.go:81] duration metric: took 2.509383461s for pod "etcd-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:55.641061  165060 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.151759  165060 pod_ready.go:92] pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:56.151788  165060 pod_ready.go:81] duration metric: took 510.718192ms for pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.152027  165060 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.157234  165060 pod_ready.go:92] pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:56.157260  165060 pod_ready.go:81] duration metric: took 5.220069ms for pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.157273  165060 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-25d5n" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.161767  165060 pod_ready.go:92] pod "kube-proxy-25d5n" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:56.161787  165060 pod_ready.go:81] duration metric: took 4.50732ms for pod "kube-proxy-25d5n" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.161796  165060 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.717763  165060 pod_ready.go:92] pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:56.717865  165060 pod_ready.go:81] duration metric: took 556.058292ms for pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.717892  165060 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace to be "Ready" ...
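
The pod_ready.go lines above poll each system-critical pod until its Ready condition turns True, capped at 6m0s. A rough client-go sketch of that wait (kubeconfig path, namespace, and pod name taken from the log; the 2s poll interval is an assumption):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19084-112967/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "etcd-embed-certs-136195", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
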
	I0617 12:01:58.249594  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:58.250033  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:58.250069  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:58.250019  166802 retry.go:31] will retry after 2.154567648s: waiting for machine to come up
	I0617 12:02:00.406269  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:00.406668  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:02:00.406702  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:02:00.406615  166802 retry.go:31] will retry after 2.065044206s: waiting for machine to come up
	I0617 12:01:58.049361  165698 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:01:58.067893  165698 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661 for IP: 192.168.61.164
	I0617 12:01:58.067924  165698 certs.go:194] generating shared ca certs ...
	I0617 12:01:58.067945  165698 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:01:58.068162  165698 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 12:01:58.068221  165698 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 12:01:58.068236  165698 certs.go:256] generating profile certs ...
	I0617 12:01:58.068352  165698 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/client.key
	I0617 12:01:58.068438  165698 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/apiserver.key.6c1f259c
	I0617 12:01:58.068493  165698 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/proxy-client.key
	I0617 12:01:58.068647  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 12:01:58.068690  165698 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 12:01:58.068704  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 12:01:58.068743  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 12:01:58.068790  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 12:01:58.068824  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 12:01:58.068877  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:01:58.069548  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 12:01:58.109048  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 12:01:58.134825  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 12:01:58.159910  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 12:01:58.191108  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0617 12:01:58.217407  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 12:01:58.242626  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 12:01:58.267261  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0617 12:01:58.291562  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 12:01:58.321848  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 12:01:58.352361  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 12:01:58.379343  165698 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 12:01:58.399146  165698 ssh_runner.go:195] Run: openssl version
	I0617 12:01:58.405081  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 12:01:58.415471  165698 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:58.420046  165698 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:58.420099  165698 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:58.425886  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 12:01:58.436575  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 12:01:58.447166  165698 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 12:01:58.451523  165698 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 12:01:58.451582  165698 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 12:01:58.457670  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
	I0617 12:01:58.468667  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 12:01:58.479095  165698 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 12:01:58.483744  165698 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 12:01:58.483796  165698 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 12:01:58.489520  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
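
Each CA above is linked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0 and friends). A small sketch of how that hash-and-link step can be reproduced, shelling out to openssl the same way the logged commands do; it only prints the ln command rather than running it:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"

	// Ask openssl for the subject hash, exactly as the logged command does.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Println("hashing failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"

	// The real flow creates the symlink with `sudo ln -fs` only when it is missing;
	// here we just print the command that would run.
	fmt.Printf("sudo /bin/bash -c %q\n",
		fmt.Sprintf("test -L %s || ln -fs /etc/ssl/certs/minikubeCA.pem %s", link, link))
}
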
	I0617 12:01:58.500298  165698 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 12:01:58.504859  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 12:01:58.510619  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 12:01:58.516819  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 12:01:58.522837  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 12:01:58.528736  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 12:01:58.534585  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
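
The `openssl x509 -checkend 86400` runs above ask whether each control-plane certificate will still be valid 24 hours from now. The same check can be done natively with crypto/x509; a minimal sketch (paths copied from the log, loop list shortened):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin mirrors `openssl x509 -checkend <seconds>`: it reports whether
// the certificate will no longer be valid after the given window.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresWithin(p, 86400*time.Second)
		if err != nil {
			fmt.Fprintln(os.Stderr, p, err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}
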
	I0617 12:01:58.540464  165698 kubeadm.go:391] StartCluster: {Name:old-k8s-version-003661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-003661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.164 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:01:58.540549  165698 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 12:01:58.540624  165698 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:01:58.583638  165698 cri.go:89] found id: ""
	I0617 12:01:58.583724  165698 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0617 12:01:58.594266  165698 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0617 12:01:58.594290  165698 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0617 12:01:58.594295  165698 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0617 12:01:58.594354  165698 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0617 12:01:58.604415  165698 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0617 12:01:58.605367  165698 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-003661" does not appear in /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 12:01:58.605949  165698 kubeconfig.go:62] /home/jenkins/minikube-integration/19084-112967/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-003661" cluster setting kubeconfig missing "old-k8s-version-003661" context setting]
	I0617 12:01:58.606833  165698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/kubeconfig: {Name:mkf81bd1831c0194f784e5c176b265c5061bea5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:01:58.662621  165698 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0617 12:01:58.673813  165698 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.164
	I0617 12:01:58.673848  165698 kubeadm.go:1154] stopping kube-system containers ...
	I0617 12:01:58.673863  165698 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0617 12:01:58.673907  165698 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:01:58.712607  165698 cri.go:89] found id: ""
	I0617 12:01:58.712703  165698 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0617 12:01:58.731676  165698 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:01:58.741645  165698 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:01:58.741666  165698 kubeadm.go:156] found existing configuration files:
	
	I0617 12:01:58.741709  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:01:58.750871  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:01:58.750931  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:01:58.760545  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:01:58.769701  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:01:58.769776  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:01:58.779348  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:01:58.788507  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:01:58.788566  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:01:58.799220  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:01:58.808403  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:01:58.808468  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
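
The grep/rm pairs above walk the four kubeconfig-style files and delete any that do not reference the expected control-plane endpoint before kubeadm regenerates them. A condensed sketch of that stale-config sweep (file list and endpoint copied from the log; this is an illustration, not minikube's cleanup code):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Mirrors the log: if the expected endpoint is missing (or the file
			// does not exist), the stale config is removed before kubeadm runs.
			fmt.Println("removing stale config", f)
			_ = os.Remove(f)
			continue
		}
		fmt.Println("keeping", f)
	}
}
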
	I0617 12:01:58.818169  165698 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:01:58.828079  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:58.962164  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:59.679319  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:59.903216  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:00.026243  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:00.126201  165698 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:02:00.126314  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:00.627227  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:01.126836  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:01.626524  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:02.126619  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:02.626434  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:01:58.727229  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:01.226021  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:02.473035  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:02.473477  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:02:02.473505  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:02:02.473458  166802 retry.go:31] will retry after 3.132988331s: waiting for machine to come up
	I0617 12:02:05.607981  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:05.608354  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:02:05.608391  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:02:05.608310  166802 retry.go:31] will retry after 3.312972752s: waiting for machine to come up
	I0617 12:02:03.126687  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:03.626469  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:04.126347  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:04.626548  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:05.127142  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:05.626937  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:06.126479  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:06.626466  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:07.126806  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:07.626814  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:03.724216  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:06.224335  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:08.224842  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:10.217135  164809 start.go:364] duration metric: took 54.298812889s to acquireMachinesLock for "no-preload-152830"
	I0617 12:02:10.217192  164809 start.go:96] Skipping create...Using existing machine configuration
	I0617 12:02:10.217204  164809 fix.go:54] fixHost starting: 
	I0617 12:02:10.217633  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:10.217673  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:10.238636  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44149
	I0617 12:02:10.239091  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:10.239596  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:02:10.239622  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:10.239997  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:10.240214  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:10.240397  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetState
	I0617 12:02:10.242141  164809 fix.go:112] recreateIfNeeded on no-preload-152830: state=Stopped err=<nil>
	I0617 12:02:10.242162  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	W0617 12:02:10.242324  164809 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 12:02:10.244888  164809 out.go:177] * Restarting existing kvm2 VM for "no-preload-152830" ...
	I0617 12:02:08.922547  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:08.922966  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Found IP for machine: 192.168.50.125
	I0617 12:02:08.922996  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Reserving static IP address...
	I0617 12:02:08.923013  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has current primary IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:08.923437  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-991309", mac: "52:54:00:4e:6e:f5", ip: "192.168.50.125"} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:08.923484  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Reserved static IP address: 192.168.50.125
	I0617 12:02:08.923514  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | skip adding static IP to network mk-default-k8s-diff-port-991309 - found existing host DHCP lease matching {name: "default-k8s-diff-port-991309", mac: "52:54:00:4e:6e:f5", ip: "192.168.50.125"}
	I0617 12:02:08.923533  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Getting to WaitForSSH function...
	I0617 12:02:08.923550  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for SSH to be available...
	I0617 12:02:08.925667  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:08.926017  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:08.926050  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:08.926203  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Using SSH client type: external
	I0617 12:02:08.926228  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa (-rw-------)
	I0617 12:02:08.926269  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 12:02:08.926290  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | About to run SSH command:
	I0617 12:02:08.926316  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | exit 0
	I0617 12:02:09.051973  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | SSH cmd err, output: <nil>: 
	I0617 12:02:09.052329  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetConfigRaw
	I0617 12:02:09.052946  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetIP
	I0617 12:02:09.055156  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.055509  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.055541  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.055748  166103 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/config.json ...
	I0617 12:02:09.055940  166103 machine.go:94] provisionDockerMachine start ...
	I0617 12:02:09.055960  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:09.056162  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.058451  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.058826  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.058860  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.058961  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.059155  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.059289  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.059440  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.059583  166103 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:09.059796  166103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0617 12:02:09.059813  166103 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 12:02:09.163974  166103 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0617 12:02:09.164020  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetMachineName
	I0617 12:02:09.164281  166103 buildroot.go:166] provisioning hostname "default-k8s-diff-port-991309"
	I0617 12:02:09.164312  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetMachineName
	I0617 12:02:09.164499  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.167194  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.167606  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.167632  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.167856  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.168097  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.168285  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.168414  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.168571  166103 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:09.168795  166103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0617 12:02:09.168811  166103 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-991309 && echo "default-k8s-diff-port-991309" | sudo tee /etc/hostname
	I0617 12:02:09.290435  166103 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-991309
	
	I0617 12:02:09.290470  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.293538  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.293879  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.293902  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.294132  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.294361  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.294574  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.294753  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.294943  166103 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:09.295188  166103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0617 12:02:09.295209  166103 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-991309' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-991309/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-991309' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 12:02:09.408702  166103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 12:02:09.408736  166103 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 12:02:09.408777  166103 buildroot.go:174] setting up certificates
	I0617 12:02:09.408789  166103 provision.go:84] configureAuth start
	I0617 12:02:09.408798  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetMachineName
	I0617 12:02:09.409122  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetIP
	I0617 12:02:09.411936  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.412304  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.412335  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.412522  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.414598  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.414914  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.414942  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.415054  166103 provision.go:143] copyHostCerts
	I0617 12:02:09.415121  166103 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 12:02:09.415132  166103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 12:02:09.415182  166103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 12:02:09.415264  166103 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 12:02:09.415271  166103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 12:02:09.415290  166103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 12:02:09.415344  166103 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 12:02:09.415353  166103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 12:02:09.415378  166103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 12:02:09.415439  166103 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-991309 san=[127.0.0.1 192.168.50.125 default-k8s-diff-port-991309 localhost minikube]
	I0617 12:02:09.534010  166103 provision.go:177] copyRemoteCerts
	I0617 12:02:09.534082  166103 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 12:02:09.534121  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.536707  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.537143  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.537176  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.537352  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.537516  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.537687  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.537840  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:09.622292  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0617 12:02:09.652653  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0617 12:02:09.676801  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 12:02:09.700701  166103 provision.go:87] duration metric: took 291.898478ms to configureAuth
	I0617 12:02:09.700734  166103 buildroot.go:189] setting minikube options for container-runtime
	I0617 12:02:09.700931  166103 config.go:182] Loaded profile config "default-k8s-diff-port-991309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:02:09.701023  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.703710  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.704138  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.704171  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.704330  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.704537  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.704730  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.704895  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.705058  166103 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:09.705243  166103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0617 12:02:09.705262  166103 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 12:02:09.974077  166103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 12:02:09.974109  166103 machine.go:97] duration metric: took 918.156221ms to provisionDockerMachine
	I0617 12:02:09.974120  166103 start.go:293] postStartSetup for "default-k8s-diff-port-991309" (driver="kvm2")
	I0617 12:02:09.974131  166103 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 12:02:09.974155  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:09.974502  166103 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 12:02:09.974544  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.977677  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.978073  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.978097  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.978225  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.978407  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.978583  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.978734  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:10.067068  166103 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 12:02:10.071843  166103 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 12:02:10.071870  166103 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 12:02:10.071934  166103 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 12:02:10.072024  166103 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 12:02:10.072128  166103 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 12:02:10.082041  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:02:10.107855  166103 start.go:296] duration metric: took 133.717924ms for postStartSetup
	I0617 12:02:10.107903  166103 fix.go:56] duration metric: took 19.607369349s for fixHost
	I0617 12:02:10.107932  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:10.110742  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.111135  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:10.111169  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.111294  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:10.111527  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:10.111674  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:10.111861  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:10.111980  166103 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:10.112205  166103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0617 12:02:10.112220  166103 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0617 12:02:10.216945  166103 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718625730.186446687
	
	I0617 12:02:10.216973  166103 fix.go:216] guest clock: 1718625730.186446687
	I0617 12:02:10.216983  166103 fix.go:229] Guest: 2024-06-17 12:02:10.186446687 +0000 UTC Remote: 2024-06-17 12:02:10.107909348 +0000 UTC m=+152.716337101 (delta=78.537339ms)
	I0617 12:02:10.217033  166103 fix.go:200] guest clock delta is within tolerance: 78.537339ms
	I0617 12:02:10.217039  166103 start.go:83] releasing machines lock for "default-k8s-diff-port-991309", held for 19.716554323s
	I0617 12:02:10.217073  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:10.217363  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetIP
	I0617 12:02:10.220429  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.220897  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:10.220927  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.221083  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:10.221655  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:10.221870  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:10.221965  166103 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 12:02:10.222026  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:10.222094  166103 ssh_runner.go:195] Run: cat /version.json
	I0617 12:02:10.222122  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:10.225337  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.225673  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.225710  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:10.225730  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.226015  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:10.226172  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:10.226202  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:10.226242  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.226363  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:10.226447  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:10.226508  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:10.226591  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:10.226687  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:10.226840  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:10.334316  166103 ssh_runner.go:195] Run: systemctl --version
	I0617 12:02:10.340584  166103 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 12:02:10.489359  166103 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 12:02:10.497198  166103 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 12:02:10.497267  166103 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 12:02:10.517001  166103 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 12:02:10.517032  166103 start.go:494] detecting cgroup driver to use...
	I0617 12:02:10.517110  166103 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 12:02:10.536520  166103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 12:02:10.550478  166103 docker.go:217] disabling cri-docker service (if available) ...
	I0617 12:02:10.550542  166103 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 12:02:10.564437  166103 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 12:02:10.578554  166103 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 12:02:10.710346  166103 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 12:02:10.891637  166103 docker.go:233] disabling docker service ...
	I0617 12:02:10.891694  166103 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 12:02:10.908300  166103 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 12:02:10.921663  166103 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 12:02:11.062715  166103 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 12:02:11.201061  166103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 12:02:11.216120  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 12:02:11.237213  166103 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0617 12:02:11.237286  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.248171  166103 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 12:02:11.248238  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.259159  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.270217  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.280841  166103 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 12:02:11.291717  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.302084  166103 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.319559  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.331992  166103 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 12:02:11.342435  166103 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 12:02:11.342494  166103 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 12:02:11.357436  166103 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 12:02:11.367406  166103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:02:11.493416  166103 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0617 12:02:11.629980  166103 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 12:02:11.630055  166103 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 12:02:11.636456  166103 start.go:562] Will wait 60s for crictl version
	I0617 12:02:11.636540  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:02:11.642817  166103 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 12:02:11.681563  166103 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 12:02:11.681655  166103 ssh_runner.go:195] Run: crio --version
	I0617 12:02:11.712576  166103 ssh_runner.go:195] Run: crio --version
	I0617 12:02:11.753826  166103 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0617 12:02:11.755256  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetIP
	I0617 12:02:11.758628  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:11.759006  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:11.759041  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:11.759252  166103 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0617 12:02:11.763743  166103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:02:11.780286  166103 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-991309 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-991309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 12:02:11.780455  166103 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 12:02:11.780528  166103 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:02:11.819396  166103 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0617 12:02:11.819481  166103 ssh_runner.go:195] Run: which lz4
	I0617 12:02:11.824047  166103 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0617 12:02:11.828770  166103 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0617 12:02:11.828807  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0617 12:02:08.127233  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:08.626498  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:09.126712  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:09.627284  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:10.126446  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:10.627249  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:11.126428  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:11.626638  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:12.127091  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:12.627361  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:10.226209  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:12.227824  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:10.246388  164809 main.go:141] libmachine: (no-preload-152830) Calling .Start
	I0617 12:02:10.246608  164809 main.go:141] libmachine: (no-preload-152830) Ensuring networks are active...
	I0617 12:02:10.247397  164809 main.go:141] libmachine: (no-preload-152830) Ensuring network default is active
	I0617 12:02:10.247789  164809 main.go:141] libmachine: (no-preload-152830) Ensuring network mk-no-preload-152830 is active
	I0617 12:02:10.248192  164809 main.go:141] libmachine: (no-preload-152830) Getting domain xml...
	I0617 12:02:10.248869  164809 main.go:141] libmachine: (no-preload-152830) Creating domain...
	I0617 12:02:11.500721  164809 main.go:141] libmachine: (no-preload-152830) Waiting to get IP...
	I0617 12:02:11.501614  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:11.502169  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:11.502254  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:11.502131  166976 retry.go:31] will retry after 281.343691ms: waiting for machine to come up
	I0617 12:02:11.785597  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:11.786047  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:11.786082  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:11.785983  166976 retry.go:31] will retry after 303.221815ms: waiting for machine to come up
	I0617 12:02:12.090367  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:12.090919  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:12.090945  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:12.090826  166976 retry.go:31] will retry after 422.250116ms: waiting for machine to come up
	I0617 12:02:12.514456  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:12.515026  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:12.515055  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:12.515001  166976 retry.go:31] will retry after 513.394077ms: waiting for machine to come up
	I0617 12:02:13.029811  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:13.030495  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:13.030522  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:13.030449  166976 retry.go:31] will retry after 596.775921ms: waiting for machine to come up
	I0617 12:02:13.387031  166103 crio.go:462] duration metric: took 1.563017054s to copy over tarball
	I0617 12:02:13.387108  166103 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0617 12:02:15.664139  166103 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.276994761s)
	I0617 12:02:15.664177  166103 crio.go:469] duration metric: took 2.277117031s to extract the tarball
	I0617 12:02:15.664188  166103 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0617 12:02:15.703690  166103 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:02:15.757605  166103 crio.go:514] all images are preloaded for cri-o runtime.
	I0617 12:02:15.757634  166103 cache_images.go:84] Images are preloaded, skipping loading
	I0617 12:02:15.757644  166103 kubeadm.go:928] updating node { 192.168.50.125 8444 v1.30.1 crio true true} ...
	I0617 12:02:15.757784  166103 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-991309 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-991309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 12:02:15.757874  166103 ssh_runner.go:195] Run: crio config
	I0617 12:02:15.808350  166103 cni.go:84] Creating CNI manager for ""
	I0617 12:02:15.808380  166103 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:02:15.808397  166103 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 12:02:15.808434  166103 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.125 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-991309 NodeName:default-k8s-diff-port-991309 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0617 12:02:15.808633  166103 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.125
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-991309"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
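
The three YAML documents above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a rough sanity check of a rendered config like this on the node, a sketch along these lines should work; it assumes the same binaries directory used in this run, and `kubeadm config validate` is only available in recent kubeadm releases:

# sketch: inspect and validate the rendered kubeadm config on the node
sudo cat /var/tmp/minikube/kubeadm.yaml.new
sudo /var/lib/minikube/binaries/v1.30.1/kubeadm config validate \
    --config /var/tmp/minikube/kubeadm.yaml.new
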
	I0617 12:02:15.808709  166103 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0617 12:02:15.818891  166103 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 12:02:15.818964  166103 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 12:02:15.828584  166103 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0617 12:02:15.846044  166103 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 12:02:15.862572  166103 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0617 12:02:15.880042  166103 ssh_runner.go:195] Run: grep 192.168.50.125	control-plane.minikube.internal$ /etc/hosts
	I0617 12:02:15.884470  166103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:02:15.897031  166103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:02:16.013826  166103 ssh_runner.go:195] Run: sudo systemctl start kubelet
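
At this point the kubelet.service unit and its 10-kubeadm.conf drop-in have been written, the daemon reloaded, and the kubelet started. A minimal manual check of that state on the node could look like this sketch:

# sketch: confirm the installed unit plus drop-in, and that kubelet is up
sudo systemctl cat kubelet          # prints kubelet.service and 10-kubeadm.conf
sudo systemctl is-active kubelet
sudo journalctl -u kubelet --no-pager -n 20   # last few kubelet log lines
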
	I0617 12:02:16.030366  166103 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309 for IP: 192.168.50.125
	I0617 12:02:16.030391  166103 certs.go:194] generating shared ca certs ...
	I0617 12:02:16.030408  166103 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:02:16.030590  166103 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 12:02:16.030650  166103 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 12:02:16.030668  166103 certs.go:256] generating profile certs ...
	I0617 12:02:16.030793  166103 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/client.key
	I0617 12:02:16.030876  166103 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/apiserver.key.02769a34
	I0617 12:02:16.030919  166103 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/proxy-client.key
	I0617 12:02:16.031024  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 12:02:16.031051  166103 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 12:02:16.031060  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 12:02:16.031080  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 12:02:16.031103  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 12:02:16.031122  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 12:02:16.031179  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:02:16.031991  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 12:02:16.066789  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 12:02:16.094522  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 12:02:16.119693  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 12:02:16.155810  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0617 12:02:16.186788  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 12:02:16.221221  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 12:02:16.248948  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0617 12:02:16.273404  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 12:02:16.296958  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 12:02:16.320047  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 12:02:16.349598  166103 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 12:02:16.367499  166103 ssh_runner.go:195] Run: openssl version
	I0617 12:02:16.373596  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 12:02:16.384778  166103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 12:02:16.389521  166103 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 12:02:16.389574  166103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 12:02:16.395523  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
	I0617 12:02:16.406357  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 12:02:16.417139  166103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 12:02:16.421629  166103 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 12:02:16.421679  166103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 12:02:16.427323  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 12:02:16.438649  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 12:02:16.450042  166103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:16.454587  166103 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:16.454636  166103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:16.460677  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
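
The hash-named symlinks above (51391683.0, 3ec20f2e.0, b5213941.0) come straight from `openssl x509 -hash`: OpenSSL resolves CA certificates in /etc/ssl/certs by subject-hash filename. The same pattern, written out as a generic sketch:

# sketch: install a CA cert under the subject-hash name OpenSSL looks up
cert=/usr/share/ca-certificates/minikubeCA.pem
hash=$(openssl x509 -hash -noout -in "$cert")
sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
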
	I0617 12:02:16.472886  166103 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 12:02:16.477630  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 12:02:16.483844  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 12:02:16.490123  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 12:02:16.497606  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 12:02:16.504066  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 12:02:16.510597  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
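
The run of `-checkend 86400` calls above asserts that each control-plane certificate stays valid for at least another 24 hours (86400 seconds); openssl exits non-zero if a cert would expire within that window. Roughly the same check as a loop over the files listed above:

# sketch: flag any control-plane cert expiring within 24h
for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
         etcd/healthcheck-client etcd/peer front-proxy-client; do
  openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
    || echo "expiring soon: ${c}.crt"
done
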
	I0617 12:02:16.518270  166103 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-991309 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-991309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:02:16.518371  166103 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 12:02:16.518439  166103 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:02:16.569103  166103 cri.go:89] found id: ""
	I0617 12:02:16.569179  166103 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0617 12:02:16.580328  166103 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0617 12:02:16.580353  166103 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0617 12:02:16.580360  166103 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0617 12:02:16.580409  166103 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0617 12:02:16.591277  166103 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0617 12:02:16.592450  166103 kubeconfig.go:125] found "default-k8s-diff-port-991309" server: "https://192.168.50.125:8444"
	I0617 12:02:16.594770  166103 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0617 12:02:16.605669  166103 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.125
	I0617 12:02:16.605728  166103 kubeadm.go:1154] stopping kube-system containers ...
	I0617 12:02:16.605745  166103 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0617 12:02:16.605810  166103 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:02:16.654529  166103 cri.go:89] found id: ""
	I0617 12:02:16.654620  166103 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0617 12:02:16.672923  166103 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:02:16.683485  166103 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:02:16.683514  166103 kubeadm.go:156] found existing configuration files:
	
	I0617 12:02:16.683576  166103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0617 12:02:16.693533  166103 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:02:16.693614  166103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:02:16.703670  166103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0617 12:02:16.716352  166103 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:02:16.716413  166103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:02:16.729336  166103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0617 12:02:16.739183  166103 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:02:16.739249  166103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:02:16.748978  166103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0617 12:02:16.758195  166103 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:02:16.758262  166103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 12:02:16.767945  166103 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:02:16.777773  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:16.919605  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:13.126836  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:13.626460  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:14.127261  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:14.627161  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:15.126580  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:15.627082  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:16.127163  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:16.626524  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:17.126469  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:17.626488  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:14.728717  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:17.225452  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:13.629097  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:13.629723  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:13.629826  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:13.629705  166976 retry.go:31] will retry after 588.18471ms: waiting for machine to come up
	I0617 12:02:14.219111  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:14.219672  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:14.219704  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:14.219611  166976 retry.go:31] will retry after 889.359727ms: waiting for machine to come up
	I0617 12:02:15.110916  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:15.111528  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:15.111559  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:15.111473  166976 retry.go:31] will retry after 1.139454059s: waiting for machine to come up
	I0617 12:02:16.252051  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:16.252601  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:16.252636  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:16.252534  166976 retry.go:31] will retry after 1.189357648s: waiting for machine to come up
	I0617 12:02:17.443845  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:17.444370  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:17.444403  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:17.444310  166976 retry.go:31] will retry after 1.614769478s: waiting for machine to come up
	I0617 12:02:18.068811  166103 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.149162388s)
	I0617 12:02:18.068870  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:18.301209  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:18.362153  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:18.454577  166103 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:02:18.454674  166103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:18.954929  166103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:19.454795  166103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:19.505453  166103 api_server.go:72] duration metric: took 1.050874914s to wait for apiserver process to appear ...
	I0617 12:02:19.505490  166103 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:02:19.505518  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:19.506056  166103 api_server.go:269] stopped: https://192.168.50.125:8444/healthz: Get "https://192.168.50.125:8444/healthz": dial tcp 192.168.50.125:8444: connect: connection refused
	I0617 12:02:20.005681  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:22.216162  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:02:22.216214  166103 api_server.go:103] status: https://192.168.50.125:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:02:22.216234  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:22.239561  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:02:22.239635  166103 api_server.go:103] status: https://192.168.50.125:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:02:18.126897  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:18.627145  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:19.126724  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:19.626498  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:20.126389  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:20.627190  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:21.126480  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:21.627210  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:22.127273  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:22.626691  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:19.227344  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:21.725689  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:19.061035  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:19.061555  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:19.061588  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:19.061520  166976 retry.go:31] will retry after 2.385838312s: waiting for machine to come up
	I0617 12:02:21.448745  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:21.449239  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:21.449266  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:21.449208  166976 retry.go:31] will retry after 3.308788046s: waiting for machine to come up
	I0617 12:02:22.505636  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:22.509888  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:02:22.509916  166103 api_server.go:103] status: https://192.168.50.125:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:02:23.006285  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:23.011948  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:02:23.011983  166103 api_server.go:103] status: https://192.168.50.125:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:02:23.505640  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:23.510358  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 200:
	ok
	I0617 12:02:23.516663  166103 api_server.go:141] control plane version: v1.30.1
	I0617 12:02:23.516686  166103 api_server.go:131] duration metric: took 4.011188976s to wait for apiserver health ...
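
The healthz polling above (403 while anonymous access is still restricted, then 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, then 200) can be reproduced by hand with curl; -k is needed because the apiserver's serving certificate is not in the local trust store. A rough equivalent of the wait loop, against the same endpoint used in this run:

# sketch: poll the apiserver healthz endpoint until it reports 200
until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.50.125:8444/healthz)" = "200" ]; do
  sleep 0.5
done
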
	I0617 12:02:23.516694  166103 cni.go:84] Creating CNI manager for ""
	I0617 12:02:23.516700  166103 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:02:23.518498  166103 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0617 12:02:23.519722  166103 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0617 12:02:23.530145  166103 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
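
The contents of the 496-byte conflist are not shown in this log; the following is only a hypothetical sketch of a minimal bridge CNI configuration of that shape, using the pod CIDR 10.244.0.0/16 from the kubeadm options above (file layout and field values are illustrative, not the exact bytes minikube writes):

# sketch only: an illustrative bridge CNI conflist, not the exact file from this run
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
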
	I0617 12:02:23.552805  166103 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:02:23.564825  166103 system_pods.go:59] 8 kube-system pods found
	I0617 12:02:23.564853  166103 system_pods.go:61] "coredns-7db6d8ff4d-mnw24" [1e6c4ff3-f0dc-43da-abd8-baaed7dca40c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0617 12:02:23.564863  166103 system_pods.go:61] "etcd-default-k8s-diff-port-991309" [820a4f27-cf83-4edb-a2ea-edba6673d851] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0617 12:02:23.564871  166103 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-991309" [26e6c19d-6f70-4924-83f5-563c8508c9e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0617 12:02:23.564877  166103 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-991309" [01e7c468-98a6-48f3-a158-59e97fa8279c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0617 12:02:23.564885  166103 system_pods.go:61] "kube-proxy-jn5kp" [d6935148-7ee8-4655-8327-9f1ee4c933de] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0617 12:02:23.564894  166103 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-991309" [53ecd22c-05cf-48a5-b7e5-925392085f7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0617 12:02:23.564899  166103 system_pods.go:61] "metrics-server-569cc877fc-n2svp" [5b637d97-3183-4324-98cf-dd69a2968578] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:02:23.564908  166103 system_pods.go:61] "storage-provisioner" [92b20aec-29c2-4256-86be-7f58f66585dd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0617 12:02:23.564913  166103 system_pods.go:74] duration metric: took 12.089276ms to wait for pod list to return data ...
	I0617 12:02:23.564919  166103 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:02:23.573455  166103 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:02:23.573480  166103 node_conditions.go:123] node cpu capacity is 2
	I0617 12:02:23.573492  166103 node_conditions.go:105] duration metric: took 8.568721ms to run NodePressure ...
	I0617 12:02:23.573509  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:23.918292  166103 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0617 12:02:23.922992  166103 kubeadm.go:733] kubelet initialised
	I0617 12:02:23.923019  166103 kubeadm.go:734] duration metric: took 4.69627ms waiting for restarted kubelet to initialise ...
	I0617 12:02:23.923027  166103 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:02:23.927615  166103 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:23.932203  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.932225  166103 pod_ready.go:81] duration metric: took 4.590359ms for pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:23.932233  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.932239  166103 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:23.936802  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.936825  166103 pod_ready.go:81] duration metric: took 4.579036ms for pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:23.936835  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.936840  166103 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:23.942877  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.942903  166103 pod_ready.go:81] duration metric: took 6.055748ms for pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:23.942927  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.942935  166103 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:23.955830  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.955851  166103 pod_ready.go:81] duration metric: took 12.903911ms for pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:23.955861  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.955869  166103 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jn5kp" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:24.356654  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "kube-proxy-jn5kp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:24.356682  166103 pod_ready.go:81] duration metric: took 400.805294ms for pod "kube-proxy-jn5kp" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:24.356692  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "kube-proxy-jn5kp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:24.356699  166103 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:24.765108  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:24.765133  166103 pod_ready.go:81] duration metric: took 408.42568ms for pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:24.765145  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:24.765152  166103 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:25.156898  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:25.156927  166103 pod_ready.go:81] duration metric: took 391.769275ms for pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:25.156939  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:25.156946  166103 pod_ready.go:38] duration metric: took 1.233911476s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:02:25.156968  166103 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0617 12:02:25.170925  166103 ops.go:34] apiserver oom_adj: -16
	I0617 12:02:25.170963  166103 kubeadm.go:591] duration metric: took 8.590593327s to restartPrimaryControlPlane
	I0617 12:02:25.170976  166103 kubeadm.go:393] duration metric: took 8.652716269s to StartCluster
	I0617 12:02:25.170998  166103 settings.go:142] acquiring lock: {Name:mkf6da6d5dcdf32cef469c2b75da17d11fa1e39e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:02:25.171111  166103 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 12:02:25.173919  166103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/kubeconfig: {Name:mkf81bd1831c0194f784e5c176b265c5061bea5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:02:25.174286  166103 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.125 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 12:02:25.176186  166103 out.go:177] * Verifying Kubernetes components...
	I0617 12:02:25.174347  166103 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0617 12:02:25.174528  166103 config.go:182] Loaded profile config "default-k8s-diff-port-991309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:02:25.177622  166103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:02:25.177632  166103 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-991309"
	I0617 12:02:25.177670  166103 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-991309"
	W0617 12:02:25.177684  166103 addons.go:243] addon metrics-server should already be in state true
	I0617 12:02:25.177721  166103 host.go:66] Checking if "default-k8s-diff-port-991309" exists ...
	I0617 12:02:25.177622  166103 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-991309"
	I0617 12:02:25.177789  166103 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-991309"
	W0617 12:02:25.177806  166103 addons.go:243] addon storage-provisioner should already be in state true
	I0617 12:02:25.177837  166103 host.go:66] Checking if "default-k8s-diff-port-991309" exists ...
	I0617 12:02:25.177628  166103 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-991309"
	I0617 12:02:25.177875  166103 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-991309"
	I0617 12:02:25.178173  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.178202  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.178251  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.178282  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.178299  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.178318  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.198817  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32781
	I0617 12:02:25.199064  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36763
	I0617 12:02:25.199513  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39825
	I0617 12:02:25.199902  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.199919  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.200633  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.201080  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.201110  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.201270  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.201286  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.201415  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.201427  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.201482  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.201786  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.201845  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.202268  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.202637  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.202663  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetState
	I0617 12:02:25.202989  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.203038  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.206439  166103 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-991309"
	W0617 12:02:25.206462  166103 addons.go:243] addon default-storageclass should already be in state true
	I0617 12:02:25.206492  166103 host.go:66] Checking if "default-k8s-diff-port-991309" exists ...
	I0617 12:02:25.206875  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.206921  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.218501  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37189
	I0617 12:02:25.218532  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34089
	I0617 12:02:25.218912  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.218986  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.219410  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.219429  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.219545  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.219561  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.219917  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.219920  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.220110  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetState
	I0617 12:02:25.220111  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetState
	I0617 12:02:25.221839  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:25.223920  166103 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0617 12:02:25.225213  166103 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0617 12:02:25.225232  166103 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0617 12:02:25.225260  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:25.224029  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:25.228780  166103 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:25.227545  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46073
	I0617 12:02:25.230084  166103 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:02:25.230100  166103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0617 12:02:25.230113  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:25.228465  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.229054  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:25.230179  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:25.229303  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.230215  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.230371  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:25.230542  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:25.230674  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:25.230723  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.230737  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.231150  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.231772  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.231802  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.234036  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.234476  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:25.234494  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.234755  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:25.234919  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:25.235079  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:25.235235  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:25.248352  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46349
	I0617 12:02:25.248851  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.249306  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.249330  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.249681  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.249873  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetState
	I0617 12:02:25.251282  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:25.251512  166103 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0617 12:02:25.251529  166103 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0617 12:02:25.251551  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:25.253963  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.254458  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:25.254484  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.254628  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:25.254941  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:25.255229  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:25.255385  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:25.391207  166103 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:02:25.411906  166103 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-991309" to be "Ready" ...
	I0617 12:02:25.476025  166103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0617 12:02:25.566470  166103 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0617 12:02:25.566500  166103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0617 12:02:25.593744  166103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:02:25.620336  166103 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0617 12:02:25.620371  166103 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0617 12:02:25.700009  166103 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:02:25.700048  166103 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0617 12:02:25.769841  166103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:02:25.782207  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:25.782240  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:25.782576  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Closing plugin on server side
	I0617 12:02:25.782597  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:25.782610  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:25.782623  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:25.782632  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:25.782888  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:25.782916  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:25.789639  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:25.789662  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:25.789921  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:25.789941  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:26.600819  166103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.007014283s)
	I0617 12:02:26.600883  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:26.600898  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:26.600902  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:26.600917  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:26.601253  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:26.601295  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:26.601305  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:26.601325  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:26.601342  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:26.601353  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:26.601366  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:26.601370  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:26.601571  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Closing plugin on server side
	I0617 12:02:26.601590  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:26.601600  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:26.601615  166103 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-991309"
	I0617 12:02:26.601626  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:26.601635  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Closing plugin on server side
	I0617 12:02:26.601638  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:26.604200  166103 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0617 12:02:26.605477  166103 addons.go:510] duration metric: took 1.431148263s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0617 12:02:27.415122  166103 node_ready.go:53] node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.126888  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:23.627274  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:24.127019  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:24.627337  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:25.126642  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:25.627064  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:26.126606  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:26.626803  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:27.126825  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:27.626799  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:24.223344  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:26.225129  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:24.760577  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:24.761063  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:24.761095  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:24.760999  166976 retry.go:31] will retry after 3.793168135s: waiting for machine to come up
	I0617 12:02:28.558153  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.558708  164809 main.go:141] libmachine: (no-preload-152830) Found IP for machine: 192.168.39.173
	I0617 12:02:28.558735  164809 main.go:141] libmachine: (no-preload-152830) Reserving static IP address...
	I0617 12:02:28.558751  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has current primary IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.559214  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "no-preload-152830", mac: "52:54:00:c0:1a:fb", ip: "192.168.39.173"} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.559248  164809 main.go:141] libmachine: (no-preload-152830) DBG | skip adding static IP to network mk-no-preload-152830 - found existing host DHCP lease matching {name: "no-preload-152830", mac: "52:54:00:c0:1a:fb", ip: "192.168.39.173"}
	I0617 12:02:28.559263  164809 main.go:141] libmachine: (no-preload-152830) Reserved static IP address: 192.168.39.173
	I0617 12:02:28.559278  164809 main.go:141] libmachine: (no-preload-152830) Waiting for SSH to be available...
	I0617 12:02:28.559295  164809 main.go:141] libmachine: (no-preload-152830) DBG | Getting to WaitForSSH function...
	I0617 12:02:28.562122  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.562453  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.562482  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.562678  164809 main.go:141] libmachine: (no-preload-152830) DBG | Using SSH client type: external
	I0617 12:02:28.562706  164809 main.go:141] libmachine: (no-preload-152830) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa (-rw-------)
	I0617 12:02:28.562739  164809 main.go:141] libmachine: (no-preload-152830) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.173 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 12:02:28.562753  164809 main.go:141] libmachine: (no-preload-152830) DBG | About to run SSH command:
	I0617 12:02:28.562770  164809 main.go:141] libmachine: (no-preload-152830) DBG | exit 0
	I0617 12:02:28.687683  164809 main.go:141] libmachine: (no-preload-152830) DBG | SSH cmd err, output: <nil>: 
	I0617 12:02:28.688021  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetConfigRaw
	I0617 12:02:28.688649  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetIP
	I0617 12:02:28.691248  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.691585  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.691609  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.691895  164809 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/config.json ...
	I0617 12:02:28.692109  164809 machine.go:94] provisionDockerMachine start ...
	I0617 12:02:28.692132  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:28.692371  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:28.694371  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.694738  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.694766  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.694942  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:28.695130  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.695309  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.695490  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:28.695695  164809 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:28.695858  164809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0617 12:02:28.695869  164809 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 12:02:28.803687  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0617 12:02:28.803726  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetMachineName
	I0617 12:02:28.803996  164809 buildroot.go:166] provisioning hostname "no-preload-152830"
	I0617 12:02:28.804031  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetMachineName
	I0617 12:02:28.804333  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:28.806959  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.807395  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.807424  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.807547  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:28.807725  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.807895  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.808057  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:28.808216  164809 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:28.808420  164809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0617 12:02:28.808436  164809 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-152830 && echo "no-preload-152830" | sudo tee /etc/hostname
	I0617 12:02:28.931222  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-152830
	
	I0617 12:02:28.931259  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:28.934188  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.934536  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.934564  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.934822  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:28.935048  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.935218  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.935353  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:28.935593  164809 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:28.935814  164809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0617 12:02:28.935837  164809 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-152830' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-152830/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-152830' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 12:02:29.054126  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 12:02:29.054156  164809 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 12:02:29.054173  164809 buildroot.go:174] setting up certificates
	I0617 12:02:29.054184  164809 provision.go:84] configureAuth start
	I0617 12:02:29.054195  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetMachineName
	I0617 12:02:29.054490  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetIP
	I0617 12:02:29.057394  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.057797  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.057830  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.057963  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:29.060191  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.060485  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.060514  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.060633  164809 provision.go:143] copyHostCerts
	I0617 12:02:29.060708  164809 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 12:02:29.060722  164809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 12:02:29.060796  164809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 12:02:29.060963  164809 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 12:02:29.060978  164809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 12:02:29.061003  164809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 12:02:29.061065  164809 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 12:02:29.061072  164809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 12:02:29.061090  164809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 12:02:29.061139  164809 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.no-preload-152830 san=[127.0.0.1 192.168.39.173 localhost minikube no-preload-152830]
	I0617 12:02:29.321179  164809 provision.go:177] copyRemoteCerts
	I0617 12:02:29.321232  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 12:02:29.321256  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:29.324217  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.324612  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.324642  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.324836  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:29.325043  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.325227  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:29.325386  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:02:29.410247  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 12:02:29.435763  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0617 12:02:29.462900  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0617 12:02:29.491078  164809 provision.go:87] duration metric: took 436.876068ms to configureAuth
	I0617 12:02:29.491120  164809 buildroot.go:189] setting minikube options for container-runtime
	I0617 12:02:29.491377  164809 config.go:182] Loaded profile config "no-preload-152830": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:02:29.491522  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:29.494581  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.495019  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.495052  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.495245  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:29.495555  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.495766  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.495897  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:29.496068  164809 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:29.496275  164809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0617 12:02:29.496296  164809 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 12:02:29.774692  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 12:02:29.774730  164809 machine.go:97] duration metric: took 1.082604724s to provisionDockerMachine
	I0617 12:02:29.774748  164809 start.go:293] postStartSetup for "no-preload-152830" (driver="kvm2")
	I0617 12:02:29.774765  164809 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 12:02:29.774785  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:29.775181  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 12:02:29.775220  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:29.778574  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.778959  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.778988  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.779154  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:29.779351  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.779575  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:29.779750  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:02:29.866959  164809 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 12:02:29.871319  164809 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 12:02:29.871348  164809 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 12:02:29.871425  164809 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 12:02:29.871535  164809 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 12:02:29.871648  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 12:02:29.881995  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:02:29.907614  164809 start.go:296] duration metric: took 132.84708ms for postStartSetup
	I0617 12:02:29.907669  164809 fix.go:56] duration metric: took 19.690465972s for fixHost
	I0617 12:02:29.907695  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:29.910226  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.910617  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.910644  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.910811  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:29.911162  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.911377  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.911571  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:29.911772  164809 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:29.911961  164809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0617 12:02:29.911972  164809 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0617 12:02:30.021051  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718625749.993041026
	
	I0617 12:02:30.021079  164809 fix.go:216] guest clock: 1718625749.993041026
	I0617 12:02:30.021088  164809 fix.go:229] Guest: 2024-06-17 12:02:29.993041026 +0000 UTC Remote: 2024-06-17 12:02:29.907674102 +0000 UTC m=+356.579226401 (delta=85.366924ms)
	I0617 12:02:30.021113  164809 fix.go:200] guest clock delta is within tolerance: 85.366924ms
	I0617 12:02:30.021120  164809 start.go:83] releasing machines lock for "no-preload-152830", held for 19.803953246s
	I0617 12:02:30.021148  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:30.021403  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetIP
	I0617 12:02:30.024093  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.024600  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:30.024633  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.024830  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:30.025380  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:30.025552  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:30.025623  164809 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 12:02:30.025668  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:30.025767  164809 ssh_runner.go:195] Run: cat /version.json
	I0617 12:02:30.025798  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:30.028656  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.028826  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.029037  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:30.029068  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.029294  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:30.029336  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:30.029366  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.029528  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:30.029536  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:30.029764  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:30.029776  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:30.029957  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:02:30.029984  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:30.030161  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:02:30.135901  164809 ssh_runner.go:195] Run: systemctl --version
	I0617 12:02:30.142668  164809 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 12:02:30.296485  164809 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 12:02:30.302789  164809 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 12:02:30.302856  164809 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 12:02:30.319775  164809 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 12:02:30.319793  164809 start.go:494] detecting cgroup driver to use...
	I0617 12:02:30.319894  164809 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 12:02:30.335498  164809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 12:02:30.349389  164809 docker.go:217] disabling cri-docker service (if available) ...
	I0617 12:02:30.349427  164809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 12:02:30.363086  164809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 12:02:30.377383  164809 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 12:02:30.499956  164809 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 12:02:30.644098  164809 docker.go:233] disabling docker service ...
	I0617 12:02:30.644178  164809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 12:02:30.661490  164809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 12:02:30.675856  164809 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 12:02:30.819937  164809 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 12:02:30.932926  164809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 12:02:30.947638  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 12:02:30.966574  164809 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0617 12:02:30.966648  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:30.978339  164809 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 12:02:30.978416  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:30.989950  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:31.000644  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:31.011280  164809 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 12:02:31.022197  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:31.032780  164809 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:31.050053  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:31.062065  164809 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 12:02:31.073296  164809 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 12:02:31.073368  164809 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 12:02:31.087733  164809 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 12:02:31.098019  164809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:02:31.232495  164809 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0617 12:02:31.371236  164809 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 12:02:31.371312  164809 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 12:02:31.376442  164809 start.go:562] Will wait 60s for crictl version
	I0617 12:02:31.376522  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.380416  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 12:02:31.426664  164809 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 12:02:31.426763  164809 ssh_runner.go:195] Run: crio --version
	I0617 12:02:31.456696  164809 ssh_runner.go:195] Run: crio --version
	I0617 12:02:31.487696  164809 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0617 12:02:29.416369  166103 node_ready.go:53] node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:31.417357  166103 node_ready.go:53] node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:28.126854  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:28.627278  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:29.126577  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:29.626475  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:30.127193  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:30.627229  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:31.126478  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:31.626336  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:32.126398  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:32.627005  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:28.724801  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:30.726589  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:33.225707  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:31.488972  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetIP
	I0617 12:02:31.491812  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:31.492191  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:31.492220  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:31.492411  164809 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0617 12:02:31.497100  164809 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:02:31.510949  164809 kubeadm.go:877] updating cluster {Name:no-preload-152830 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-152830 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 12:02:31.511079  164809 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 12:02:31.511114  164809 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:02:31.546350  164809 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0617 12:02:31.546377  164809 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.1 registry.k8s.io/kube-controller-manager:v1.30.1 registry.k8s.io/kube-scheduler:v1.30.1 registry.k8s.io/kube-proxy:v1.30.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0617 12:02:31.546440  164809 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:31.546452  164809 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0617 12:02:31.546478  164809 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0617 12:02:31.546485  164809 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0617 12:02:31.546513  164809 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0617 12:02:31.546513  164809 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0617 12:02:31.546458  164809 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0617 12:02:31.546569  164809 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0617 12:02:31.548101  164809 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0617 12:02:31.548123  164809 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0617 12:02:31.548123  164809 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0617 12:02:31.548137  164809 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:31.548101  164809 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0617 12:02:31.548104  164809 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0617 12:02:31.548103  164809 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0617 12:02:31.548427  164809 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0617 12:02:31.714107  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0617 12:02:31.714819  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0617 12:02:31.715764  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0617 12:02:31.721844  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.1
	I0617 12:02:31.722172  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1
	I0617 12:02:31.739873  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1
	I0617 12:02:31.746705  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1
	I0617 12:02:31.814194  164809 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0617 12:02:31.814235  164809 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0617 12:02:31.814273  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.849549  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:31.950803  164809 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0617 12:02:31.950858  164809 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0617 12:02:31.950907  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.950934  164809 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.1" does not exist at hash "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c" in container runtime
	I0617 12:02:31.950959  164809 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0617 12:02:31.950992  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.951005  164809 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.1" does not exist at hash "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035" in container runtime
	I0617 12:02:31.951030  164809 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.1" does not exist at hash "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a" in container runtime
	I0617 12:02:31.951053  164809 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.1
	I0617 12:02:31.951090  164809 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.1" does not exist at hash "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd" in container runtime
	I0617 12:02:31.951103  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.951113  164809 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.1
	I0617 12:02:31.951146  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.951053  164809 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.1
	I0617 12:02:31.951179  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.951217  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0617 12:02:31.951266  164809 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0617 12:02:31.951289  164809 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:31.951319  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.967596  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.1
	I0617 12:02:31.967802  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0617 12:02:32.018505  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:32.018542  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1
	I0617 12:02:32.018623  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.1
	I0617 12:02:32.018664  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0617 12:02:32.018738  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.1
	I0617 12:02:32.018755  164809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0617 12:02:32.026154  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1
	I0617 12:02:32.026270  164809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.1
	I0617 12:02:32.046161  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0617 12:02:32.046288  164809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0617 12:02:32.126665  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0617 12:02:32.126755  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0617 12:02:32.126765  164809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0617 12:02:32.126814  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1
	I0617 12:02:32.126829  164809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0617 12:02:32.126867  164809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0617 12:02:32.126898  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0617 12:02:32.126911  164809 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0617 12:02:32.126935  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0617 12:02:32.126965  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1
	I0617 12:02:32.127008  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.1 (exists)
	I0617 12:02:32.127058  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0617 12:02:32.127060  164809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0617 12:02:32.142790  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.1 (exists)
	I0617 12:02:32.142816  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.1 (exists)
	I0617 12:02:32.143132  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
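	The cache_images/ssh_runner lines above follow a check-then-transfer pattern: stat the image tarball on the VM, skip the copy when it already exists ("copy: skipping ... (exists)"), then podman-load it into CRI-O. Below is a minimal Go sketch of that pattern, shelling out over ssh; the host string and paths are illustrative assumptions, not minikube's actual helper code.

package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImage mirrors the pattern in the log: copy the image tarball only
// when it is missing on the VM, then load it into the runtime with podman.
// The host and paths are hypothetical examples.
func loadCachedImage(host, localTar, remoteTar string) error {
	// "stat" succeeds only if the tarball is already present on the VM.
	if err := exec.Command("ssh", host, "stat", remoteTar).Run(); err != nil {
		// Not there yet: transfer it first.
		if err := exec.Command("scp", localTar, host+":"+remoteTar).Run(); err != nil {
			return fmt.Errorf("copy %s: %w", localTar, err)
		}
	}
	// Load the tarball into CRI-O's image store via podman.
	out, err := exec.Command("ssh", host, "sudo", "podman", "load", "-i", remoteTar).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load: %v: %s", err, out)
	}
	return nil
}

func main() {
	_ = loadCachedImage("docker@192.168.39.173",
		"/home/jenkins/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0",
		"/var/lib/minikube/images/etcd_3.5.12-0")
}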
	I0617 12:02:32.915885  166103 node_ready.go:49] node "default-k8s-diff-port-991309" has status "Ready":"True"
	I0617 12:02:32.915912  166103 node_ready.go:38] duration metric: took 7.503979113s for node "default-k8s-diff-port-991309" to be "Ready" ...
	I0617 12:02:32.915924  166103 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:02:32.921198  166103 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:34.927290  166103 pod_ready.go:102] pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:33.126753  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:33.627017  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:34.126558  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:34.626976  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:35.126410  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:35.627309  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:36.126958  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:36.626349  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:37.126815  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:37.627332  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:35.724326  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:37.725145  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:36.125679  164809 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.1: (3.998551072s)
	I0617 12:02:36.125727  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.1 (exists)
	I0617 12:02:36.125773  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.998809852s)
	I0617 12:02:36.125804  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0617 12:02:36.125838  164809 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.1
	I0617 12:02:36.125894  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1
	I0617 12:02:37.885028  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1: (1.759100554s)
	I0617 12:02:37.885054  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 from cache
	I0617 12:02:37.885073  164809 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0617 12:02:37.885122  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0617 12:02:37.429419  166103 pod_ready.go:102] pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:39.933476  166103 pod_ready.go:92] pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:39.933508  166103 pod_ready.go:81] duration metric: took 7.012285571s for pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.933521  166103 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.940139  166103 pod_ready.go:92] pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:39.940162  166103 pod_ready.go:81] duration metric: took 6.633405ms for pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.940175  166103 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.945285  166103 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:39.945305  166103 pod_ready.go:81] duration metric: took 5.12303ms for pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.945317  166103 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.950992  166103 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:39.951021  166103 pod_ready.go:81] duration metric: took 5.6962ms for pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.951034  166103 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jn5kp" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.955874  166103 pod_ready.go:92] pod "kube-proxy-jn5kp" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:39.955894  166103 pod_ready.go:81] duration metric: took 4.852842ms for pod "kube-proxy-jn5kp" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.955905  166103 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:40.327000  166103 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:40.327035  166103 pod_ready.go:81] duration metric: took 371.121545ms for pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:40.327049  166103 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:42.334620  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
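	The pod_ready.go lines interleaved above poll each system-critical pod until its Ready condition turns True, logging "Ready":"False" every couple of seconds while waiting. Below is a rough client-go sketch of that check; the kubeconfig path, pod name, and 2-second interval are assumptions for illustration only.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout expires.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // assumed poll interval, similar to the log's cadence
	}
	return fmt.Errorf("pod %s/%s never became Ready", ns, name)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(cs, "kube-system", "coredns-7db6d8ff4d-mnw24", 6*time.Minute))
}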
	I0617 12:02:38.126868  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:38.627367  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:39.127148  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:39.626571  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:40.126379  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:40.626747  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:41.126485  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:41.626372  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:42.126904  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:42.627293  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:39.727666  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:42.223700  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:39.992863  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.10770953s)
	I0617 12:02:39.992903  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0617 12:02:39.992934  164809 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0617 12:02:39.992989  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0617 12:02:41.851420  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1: (1.858400961s)
	I0617 12:02:41.851452  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 from cache
	I0617 12:02:41.851508  164809 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0617 12:02:41.851578  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0617 12:02:44.833842  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:46.834443  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:43.127137  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:43.626521  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:44.127017  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:44.626824  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:45.126475  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:45.626535  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:46.127423  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:46.626605  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:47.127029  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:47.627431  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:44.224685  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:46.225071  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:44.211669  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1: (2.360046418s)
	I0617 12:02:44.211702  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 from cache
	I0617 12:02:44.211726  164809 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0617 12:02:44.211795  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0617 12:02:45.162389  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0617 12:02:45.162456  164809 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0617 12:02:45.162542  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0617 12:02:47.414088  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1: (2.251500525s)
	I0617 12:02:47.414130  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 from cache
	I0617 12:02:47.414164  164809 cache_images.go:123] Successfully loaded all cached images
	I0617 12:02:47.414172  164809 cache_images.go:92] duration metric: took 15.867782566s to LoadCachedImages
	I0617 12:02:47.414195  164809 kubeadm.go:928] updating node { 192.168.39.173 8443 v1.30.1 crio true true} ...
	I0617 12:02:47.414359  164809 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-152830 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.173
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:no-preload-152830 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 12:02:47.414451  164809 ssh_runner.go:195] Run: crio config
	I0617 12:02:47.466472  164809 cni.go:84] Creating CNI manager for ""
	I0617 12:02:47.466493  164809 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:02:47.466503  164809 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 12:02:47.466531  164809 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.173 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-152830 NodeName:no-preload-152830 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.173"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.173 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0617 12:02:47.466716  164809 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.173
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-152830"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.173
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.173"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
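	The kubeadm config dumped above is one multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by ---) that the log later writes to /var/tmp/minikube/kubeadm.yaml.new. Below is a small illustrative reader that walks such a stream and prints each document's kind using gopkg.in/yaml.v3; it is a sketch for orientation, not part of minikube.

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path taken from the log
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more documents in the stream
			}
			panic(err)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}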
	I0617 12:02:47.466793  164809 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0617 12:02:47.478163  164809 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 12:02:47.478255  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 12:02:47.488014  164809 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0617 12:02:47.505143  164809 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 12:02:47.522481  164809 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0617 12:02:47.545714  164809 ssh_runner.go:195] Run: grep 192.168.39.173	control-plane.minikube.internal$ /etc/hosts
	I0617 12:02:47.551976  164809 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.173	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
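	The bash one-liner above makes the /etc/hosts update idempotent: drop any existing control-plane.minikube.internal line, append the current mapping, and copy the result back into place. The same logic is sketched in Go below; the rename-based write is an assumption for the sketch (the log's version copies a temp file from /tmp instead).

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry removes any stale line ending in "<TAB>hostname" and appends
// "ip<TAB>hostname", mirroring the grep/echo pipeline in the log.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop the old mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	// Write a sibling temp file and rename it into place so readers never see a partial file.
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	_ = ensureHostsEntry("/etc/hosts", "192.168.39.173", "control-plane.minikube.internal")
}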
	I0617 12:02:47.565374  164809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:02:47.694699  164809 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:02:47.714017  164809 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830 for IP: 192.168.39.173
	I0617 12:02:47.714044  164809 certs.go:194] generating shared ca certs ...
	I0617 12:02:47.714064  164809 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:02:47.714260  164809 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 12:02:47.714321  164809 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 12:02:47.714335  164809 certs.go:256] generating profile certs ...
	I0617 12:02:47.714419  164809 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/client.key
	I0617 12:02:47.714504  164809 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/apiserver.key.d2d5b47b
	I0617 12:02:47.714547  164809 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/proxy-client.key
	I0617 12:02:47.714655  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 12:02:47.714684  164809 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 12:02:47.714693  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 12:02:47.714719  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 12:02:47.714745  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 12:02:47.714780  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 12:02:47.714815  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:02:47.715578  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 12:02:47.767301  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 12:02:47.804542  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 12:02:47.842670  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 12:02:47.874533  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0617 12:02:47.909752  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 12:02:47.940097  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 12:02:47.965441  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0617 12:02:47.990862  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 12:02:48.015935  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 12:02:48.041408  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 12:02:48.066557  164809 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 12:02:48.084630  164809 ssh_runner.go:195] Run: openssl version
	I0617 12:02:48.091098  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 12:02:48.102447  164809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 12:02:48.107238  164809 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 12:02:48.107299  164809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 12:02:48.113682  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
	I0617 12:02:48.124472  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 12:02:48.135897  164809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 12:02:48.140859  164809 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 12:02:48.140915  164809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 12:02:48.147113  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 12:02:48.158192  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 12:02:48.169483  164809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:48.174241  164809 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:48.174294  164809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:48.180093  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 12:02:48.191082  164809 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 12:02:48.195770  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 12:02:48.201743  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 12:02:48.207452  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 12:02:48.213492  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 12:02:48.219435  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 12:02:48.226202  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
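	The openssl x509 -checkend 86400 runs above exit non-zero when a certificate expires within the next 86400 seconds (24 hours), which is how the restart path decides whether the existing control-plane certs can be reused. Below is an equivalent check with Go's crypto/x509; the certificate path is just one of the files probed above, used here as an example.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// 86400 seconds = 24h, matching "openssl x509 -checkend 86400" in the log.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}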
	I0617 12:02:48.232291  164809 kubeadm.go:391] StartCluster: {Name:no-preload-152830 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-152830 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:02:48.232409  164809 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 12:02:48.232448  164809 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:02:48.272909  164809 cri.go:89] found id: ""
	I0617 12:02:48.272972  164809 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0617 12:02:48.284185  164809 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0617 12:02:48.284212  164809 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0617 12:02:48.284221  164809 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0617 12:02:48.284266  164809 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0617 12:02:48.294653  164809 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0617 12:02:48.296091  164809 kubeconfig.go:125] found "no-preload-152830" server: "https://192.168.39.173:8443"
	I0617 12:02:48.298438  164809 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0617 12:02:48.307905  164809 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.173
	I0617 12:02:48.307932  164809 kubeadm.go:1154] stopping kube-system containers ...
	I0617 12:02:48.307945  164809 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0617 12:02:48.307990  164809 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:02:48.356179  164809 cri.go:89] found id: ""
	I0617 12:02:48.356247  164809 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0617 12:02:49.333637  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:51.333927  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:48.127215  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:48.627013  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:49.126439  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:49.626831  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:50.126521  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:50.627178  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:51.126830  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:51.627091  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:52.127343  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:52.626635  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:48.724828  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:51.225321  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:48.377824  164809 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:02:48.389213  164809 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:02:48.389236  164809 kubeadm.go:156] found existing configuration files:
	
	I0617 12:02:48.389287  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:02:48.398559  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:02:48.398605  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:02:48.408243  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:02:48.417407  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:02:48.417451  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:02:48.427333  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:02:48.436224  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:02:48.436278  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:02:48.445378  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:02:48.454119  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:02:48.454170  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 12:02:48.463097  164809 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:02:48.472479  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:48.584018  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:49.392310  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:49.599840  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:49.662845  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:49.794357  164809 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:02:49.794459  164809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:50.295507  164809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:50.794968  164809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:50.832967  164809 api_server.go:72] duration metric: took 1.038610813s to wait for apiserver process to appear ...
	I0617 12:02:50.832993  164809 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:02:50.833017  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:50.833494  164809 api_server.go:269] stopped: https://192.168.39.173:8443/healthz: Get "https://192.168.39.173:8443/healthz": dial tcp 192.168.39.173:8443: connect: connection refused
	I0617 12:02:51.333910  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:53.534213  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:02:53.534246  164809 api_server.go:103] status: https://192.168.39.173:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:02:53.534265  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:53.579857  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:02:53.579887  164809 api_server.go:103] status: https://192.168.39.173:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:02:53.833207  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:53.863430  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:02:53.863485  164809 api_server.go:103] status: https://192.168.39.173:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:02:54.333557  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:54.342474  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:02:54.342507  164809 api_server.go:103] status: https://192.168.39.173:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:02:54.834092  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:54.839578  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 200:
	ok
	I0617 12:02:54.854075  164809 api_server.go:141] control plane version: v1.30.1
	I0617 12:02:54.854113  164809 api_server.go:131] duration metric: took 4.021112065s to wait for apiserver health ...
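	The healthz exchange above shows the usual restart progression: connection refused while the apiserver process comes up, 403 for the anonymous probe until the RBAC bootstrap roles exist, 500 while poststarthooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish, then 200 "ok". Below is a minimal Go poller for that endpoint; the skipped TLS verification and half-second interval are assumptions for the sketch, not minikube's exact client.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200 OK.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Anonymous probe against the apiserver's self-signed serving cert.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // endpoint answered "ok"
			}
			// 403 (anonymous forbidden) and 500 (poststarthooks pending) both mean "not yet".
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // assumed interval, roughly matching the log
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.39.173:8443/healthz", 4*time.Minute))
}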
	I0617 12:02:54.854124  164809 cni.go:84] Creating CNI manager for ""
	I0617 12:02:54.854133  164809 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:02:54.856029  164809 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0617 12:02:53.334898  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:55.834490  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:53.126693  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:53.627110  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:54.126653  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:54.626424  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:55.127113  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:55.627373  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:56.126415  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:56.627329  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:57.126797  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:57.627313  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:53.723948  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:56.225000  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:54.857252  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0617 12:02:54.914636  164809 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0617 12:02:54.961745  164809 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:02:54.975140  164809 system_pods.go:59] 8 kube-system pods found
	I0617 12:02:54.975183  164809 system_pods.go:61] "coredns-7db6d8ff4d-7lfns" [83cf7962-1aa7-4de6-9e77-a03dee972ead] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0617 12:02:54.975192  164809 system_pods.go:61] "etcd-no-preload-152830" [27dace2b-9d7d-44e8-8f86-b20ce49c8afa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0617 12:02:54.975202  164809 system_pods.go:61] "kube-apiserver-no-preload-152830" [c102caaf-2289-4171-8b1f-89df4f6edf39] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0617 12:02:54.975213  164809 system_pods.go:61] "kube-controller-manager-no-preload-152830" [534a8f45-7886-4e12-b728-df686c2f8668] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0617 12:02:54.975220  164809 system_pods.go:61] "kube-proxy-bblgc" [70fa474e-cb6a-4e31-b978-78b47e9952a8] Running
	I0617 12:02:54.975228  164809 system_pods.go:61] "kube-scheduler-no-preload-152830" [17d696bd-55b3-4080-a63d-944216adf1d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0617 12:02:54.975240  164809 system_pods.go:61] "metrics-server-569cc877fc-97tqn" [0ce37c88-fd22-4001-96c4-d0f5239c0fd4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:02:54.975253  164809 system_pods.go:61] "storage-provisioner" [61dafb85-965b-4961-b9e1-e3202795caef] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0617 12:02:54.975268  164809 system_pods.go:74] duration metric: took 13.492652ms to wait for pod list to return data ...
	I0617 12:02:54.975279  164809 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:02:54.980820  164809 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:02:54.980842  164809 node_conditions.go:123] node cpu capacity is 2
	I0617 12:02:54.980854  164809 node_conditions.go:105] duration metric: took 5.568037ms to run NodePressure ...
	I0617 12:02:54.980873  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:55.284669  164809 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0617 12:02:55.289433  164809 kubeadm.go:733] kubelet initialised
	I0617 12:02:55.289453  164809 kubeadm.go:734] duration metric: took 4.759785ms waiting for restarted kubelet to initialise ...
	I0617 12:02:55.289461  164809 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:02:55.294149  164809 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7lfns" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:55.298081  164809 pod_ready.go:97] node "no-preload-152830" hosting pod "coredns-7db6d8ff4d-7lfns" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.298100  164809 pod_ready.go:81] duration metric: took 3.929974ms for pod "coredns-7db6d8ff4d-7lfns" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:55.298109  164809 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-152830" hosting pod "coredns-7db6d8ff4d-7lfns" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.298116  164809 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:55.302552  164809 pod_ready.go:97] node "no-preload-152830" hosting pod "etcd-no-preload-152830" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.302572  164809 pod_ready.go:81] duration metric: took 4.444579ms for pod "etcd-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:55.302580  164809 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-152830" hosting pod "etcd-no-preload-152830" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.302585  164809 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:55.306375  164809 pod_ready.go:97] node "no-preload-152830" hosting pod "kube-apiserver-no-preload-152830" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.306394  164809 pod_ready.go:81] duration metric: took 3.804134ms for pod "kube-apiserver-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:55.306402  164809 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-152830" hosting pod "kube-apiserver-no-preload-152830" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.306407  164809 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:57.313002  164809 pod_ready.go:102] pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:57.834719  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:00.334129  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:58.126744  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:58.627050  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:59.127300  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:59.626694  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:00.127092  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:00.127182  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:00.166116  165698 cri.go:89] found id: ""
	I0617 12:03:00.166145  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.166153  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:00.166159  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:00.166208  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:00.200990  165698 cri.go:89] found id: ""
	I0617 12:03:00.201020  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.201029  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:00.201034  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:00.201086  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:00.236394  165698 cri.go:89] found id: ""
	I0617 12:03:00.236422  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.236430  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:00.236438  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:00.236496  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:00.274257  165698 cri.go:89] found id: ""
	I0617 12:03:00.274285  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.274293  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:00.274299  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:00.274350  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:00.307425  165698 cri.go:89] found id: ""
	I0617 12:03:00.307452  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.307481  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:00.307490  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:00.307557  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:00.343420  165698 cri.go:89] found id: ""
	I0617 12:03:00.343446  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.343472  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:00.343480  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:00.343541  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:00.378301  165698 cri.go:89] found id: ""
	I0617 12:03:00.378325  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.378333  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:00.378338  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:00.378383  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:00.414985  165698 cri.go:89] found id: ""
	I0617 12:03:00.415011  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.415018  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:00.415033  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:00.415090  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:00.468230  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:00.468262  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:00.481970  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:00.481998  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:00.612881  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:00.612911  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:00.612929  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:00.676110  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:00.676145  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:02:58.725617  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:01.225227  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:59.818063  164809 pod_ready.go:102] pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:02.312898  164809 pod_ready.go:102] pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:03.313300  164809 pod_ready.go:92] pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:03:03.313332  164809 pod_ready.go:81] duration metric: took 8.006915719s for pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:03:03.313347  164809 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bblgc" in "kube-system" namespace to be "Ready" ...
	I0617 12:03:03.319094  164809 pod_ready.go:92] pod "kube-proxy-bblgc" in "kube-system" namespace has status "Ready":"True"
	I0617 12:03:03.319116  164809 pod_ready.go:81] duration metric: took 5.762584ms for pod "kube-proxy-bblgc" in "kube-system" namespace to be "Ready" ...
	I0617 12:03:03.319137  164809 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:03:02.833031  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:04.834158  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:07.334894  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:03.216960  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:03.231208  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:03.231277  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:03.267056  165698 cri.go:89] found id: ""
	I0617 12:03:03.267088  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.267096  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:03.267103  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:03.267152  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:03.302797  165698 cri.go:89] found id: ""
	I0617 12:03:03.302832  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.302844  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:03.302852  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:03.302905  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:03.343401  165698 cri.go:89] found id: ""
	I0617 12:03:03.343435  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.343445  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:03.343465  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:03.343530  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:03.380841  165698 cri.go:89] found id: ""
	I0617 12:03:03.380871  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.380883  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:03.380890  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:03.380951  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:03.420098  165698 cri.go:89] found id: ""
	I0617 12:03:03.420130  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.420142  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:03.420150  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:03.420213  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:03.458476  165698 cri.go:89] found id: ""
	I0617 12:03:03.458506  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.458515  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:03.458521  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:03.458586  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:03.497127  165698 cri.go:89] found id: ""
	I0617 12:03:03.497156  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.497164  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:03.497170  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:03.497217  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:03.538759  165698 cri.go:89] found id: ""
	I0617 12:03:03.538794  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.538806  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:03.538825  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:03.538841  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:03.584701  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:03.584743  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:03.636981  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:03.637030  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:03.670032  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:03.670077  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:03.757012  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:03.757038  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:03.757056  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:06.327680  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:06.341998  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:06.342068  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:06.383353  165698 cri.go:89] found id: ""
	I0617 12:03:06.383385  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.383394  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:06.383400  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:06.383448  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:06.418806  165698 cri.go:89] found id: ""
	I0617 12:03:06.418850  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.418862  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:06.418870  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:06.418945  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:06.458151  165698 cri.go:89] found id: ""
	I0617 12:03:06.458192  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.458204  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:06.458219  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:06.458289  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:06.496607  165698 cri.go:89] found id: ""
	I0617 12:03:06.496637  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.496645  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:06.496651  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:06.496703  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:06.534900  165698 cri.go:89] found id: ""
	I0617 12:03:06.534938  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.534951  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:06.534959  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:06.535017  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:06.572388  165698 cri.go:89] found id: ""
	I0617 12:03:06.572413  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.572422  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:06.572428  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:06.572496  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:06.608072  165698 cri.go:89] found id: ""
	I0617 12:03:06.608104  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.608115  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:06.608121  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:06.608175  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:06.647727  165698 cri.go:89] found id: ""
	I0617 12:03:06.647760  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.647772  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:06.647784  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:06.647800  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:06.720887  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:06.720919  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:06.761128  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:06.761153  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:06.815524  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:06.815557  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:06.830275  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:06.830304  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:06.907861  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:03.725650  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:06.225601  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:05.327062  164809 pod_ready.go:102] pod "kube-scheduler-no-preload-152830" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:07.325033  164809 pod_ready.go:92] pod "kube-scheduler-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:03:07.325061  164809 pod_ready.go:81] duration metric: took 4.005914462s for pod "kube-scheduler-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:03:07.325072  164809 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace to be "Ready" ...
	I0617 12:03:09.835374  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:12.334481  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:09.408117  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:09.420916  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:09.420978  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:09.453830  165698 cri.go:89] found id: ""
	I0617 12:03:09.453860  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.453870  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:09.453878  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:09.453937  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:09.492721  165698 cri.go:89] found id: ""
	I0617 12:03:09.492756  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.492766  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:09.492775  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:09.492849  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:09.530956  165698 cri.go:89] found id: ""
	I0617 12:03:09.530984  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.530995  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:09.531001  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:09.531067  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:09.571534  165698 cri.go:89] found id: ""
	I0617 12:03:09.571564  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.571576  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:09.571584  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:09.571646  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:09.609740  165698 cri.go:89] found id: ""
	I0617 12:03:09.609776  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.609788  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:09.609797  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:09.609864  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:09.649958  165698 cri.go:89] found id: ""
	I0617 12:03:09.649998  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.650010  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:09.650020  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:09.650087  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:09.706495  165698 cri.go:89] found id: ""
	I0617 12:03:09.706532  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.706544  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:09.706553  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:09.706638  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:09.742513  165698 cri.go:89] found id: ""
	I0617 12:03:09.742541  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.742549  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:09.742559  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:09.742571  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:09.756470  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:09.756502  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:09.840878  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:09.840897  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:09.840913  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:09.922329  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:09.922370  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:09.967536  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:09.967573  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:12.521031  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:12.534507  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:12.534595  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:12.569895  165698 cri.go:89] found id: ""
	I0617 12:03:12.569930  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.569942  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:12.569950  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:12.570005  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:12.606857  165698 cri.go:89] found id: ""
	I0617 12:03:12.606888  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.606900  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:12.606922  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:12.606998  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:12.640781  165698 cri.go:89] found id: ""
	I0617 12:03:12.640807  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.640818  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:12.640826  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:12.640910  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:12.674097  165698 cri.go:89] found id: ""
	I0617 12:03:12.674124  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.674134  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:12.674142  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:12.674201  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:12.708662  165698 cri.go:89] found id: ""
	I0617 12:03:12.708689  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.708699  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:12.708707  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:12.708791  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:12.744891  165698 cri.go:89] found id: ""
	I0617 12:03:12.744927  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.744938  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:12.744947  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:12.745010  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:12.778440  165698 cri.go:89] found id: ""
	I0617 12:03:12.778466  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.778474  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:12.778480  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:12.778528  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:12.814733  165698 cri.go:89] found id: ""
	I0617 12:03:12.814762  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.814770  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:12.814780  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:12.814820  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:12.887741  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:12.887762  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:12.887775  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:12.968439  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:12.968476  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:08.725485  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:11.224357  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:09.331004  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:11.331666  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:13.332269  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:14.335086  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:16.836397  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:13.008926  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:13.008955  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:13.060432  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:13.060468  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:15.575450  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:15.589178  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:15.589244  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:15.625554  165698 cri.go:89] found id: ""
	I0617 12:03:15.625589  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.625601  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:15.625608  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:15.625668  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:15.659023  165698 cri.go:89] found id: ""
	I0617 12:03:15.659054  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.659066  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:15.659074  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:15.659138  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:15.693777  165698 cri.go:89] found id: ""
	I0617 12:03:15.693803  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.693811  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:15.693817  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:15.693875  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:15.729098  165698 cri.go:89] found id: ""
	I0617 12:03:15.729133  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.729141  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:15.729147  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:15.729194  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:15.762639  165698 cri.go:89] found id: ""
	I0617 12:03:15.762668  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.762679  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:15.762687  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:15.762744  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:15.797446  165698 cri.go:89] found id: ""
	I0617 12:03:15.797475  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.797484  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:15.797489  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:15.797537  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:15.832464  165698 cri.go:89] found id: ""
	I0617 12:03:15.832503  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.832513  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:15.832521  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:15.832579  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:15.867868  165698 cri.go:89] found id: ""
	I0617 12:03:15.867898  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.867906  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:15.867916  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:15.867928  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:15.882151  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:15.882181  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:15.946642  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:15.946666  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:15.946682  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:16.027062  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:16.027098  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:16.082704  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:16.082735  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:13.725854  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:16.225670  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:15.333470  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:17.832368  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:19.334102  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:21.334529  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:18.651554  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:18.665096  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:18.665166  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:18.703099  165698 cri.go:89] found id: ""
	I0617 12:03:18.703127  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.703138  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:18.703147  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:18.703210  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:18.737945  165698 cri.go:89] found id: ""
	I0617 12:03:18.737985  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.737997  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:18.738005  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:18.738079  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:18.777145  165698 cri.go:89] found id: ""
	I0617 12:03:18.777172  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.777181  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:18.777187  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:18.777255  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:18.813171  165698 cri.go:89] found id: ""
	I0617 12:03:18.813198  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.813207  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:18.813213  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:18.813270  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:18.854459  165698 cri.go:89] found id: ""
	I0617 12:03:18.854490  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.854501  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:18.854510  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:18.854607  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:18.893668  165698 cri.go:89] found id: ""
	I0617 12:03:18.893703  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.893712  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:18.893718  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:18.893796  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:18.928919  165698 cri.go:89] found id: ""
	I0617 12:03:18.928971  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.928983  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:18.928993  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:18.929068  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:18.965770  165698 cri.go:89] found id: ""
	I0617 12:03:18.965800  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.965808  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:18.965817  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:18.965829  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:19.020348  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:19.020392  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:19.034815  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:19.034845  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:19.109617  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:19.109643  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:19.109660  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:19.186843  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:19.186890  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:21.732720  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:21.747032  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:21.747113  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:21.789962  165698 cri.go:89] found id: ""
	I0617 12:03:21.789991  165698 logs.go:276] 0 containers: []
	W0617 12:03:21.789999  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:21.790011  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:21.790066  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:21.833865  165698 cri.go:89] found id: ""
	I0617 12:03:21.833903  165698 logs.go:276] 0 containers: []
	W0617 12:03:21.833913  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:21.833921  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:21.833985  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:21.903891  165698 cri.go:89] found id: ""
	I0617 12:03:21.903929  165698 logs.go:276] 0 containers: []
	W0617 12:03:21.903941  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:21.903950  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:21.904020  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:21.941369  165698 cri.go:89] found id: ""
	I0617 12:03:21.941396  165698 logs.go:276] 0 containers: []
	W0617 12:03:21.941407  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:21.941415  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:21.941473  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:21.977767  165698 cri.go:89] found id: ""
	I0617 12:03:21.977797  165698 logs.go:276] 0 containers: []
	W0617 12:03:21.977808  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:21.977817  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:21.977880  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:22.016422  165698 cri.go:89] found id: ""
	I0617 12:03:22.016450  165698 logs.go:276] 0 containers: []
	W0617 12:03:22.016463  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:22.016471  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:22.016536  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:22.056871  165698 cri.go:89] found id: ""
	I0617 12:03:22.056904  165698 logs.go:276] 0 containers: []
	W0617 12:03:22.056914  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:22.056922  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:22.056982  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:22.093244  165698 cri.go:89] found id: ""
	I0617 12:03:22.093288  165698 logs.go:276] 0 containers: []
	W0617 12:03:22.093300  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:22.093313  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:22.093331  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:22.144722  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:22.144756  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:22.159047  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:22.159084  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:22.232077  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:22.232100  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:22.232112  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:22.308241  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:22.308276  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:18.724648  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:21.224616  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:19.832543  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:21.838952  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:23.834640  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:26.336770  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:24.851740  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:24.866597  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:24.866659  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:24.902847  165698 cri.go:89] found id: ""
	I0617 12:03:24.902879  165698 logs.go:276] 0 containers: []
	W0617 12:03:24.902892  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:24.902900  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:24.902973  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:24.940042  165698 cri.go:89] found id: ""
	I0617 12:03:24.940079  165698 logs.go:276] 0 containers: []
	W0617 12:03:24.940088  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:24.940094  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:24.940150  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:24.975160  165698 cri.go:89] found id: ""
	I0617 12:03:24.975190  165698 logs.go:276] 0 containers: []
	W0617 12:03:24.975202  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:24.975211  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:24.975280  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:25.012618  165698 cri.go:89] found id: ""
	I0617 12:03:25.012649  165698 logs.go:276] 0 containers: []
	W0617 12:03:25.012657  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:25.012663  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:25.012712  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:25.051166  165698 cri.go:89] found id: ""
	I0617 12:03:25.051210  165698 logs.go:276] 0 containers: []
	W0617 12:03:25.051223  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:25.051230  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:25.051309  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:25.090112  165698 cri.go:89] found id: ""
	I0617 12:03:25.090144  165698 logs.go:276] 0 containers: []
	W0617 12:03:25.090156  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:25.090164  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:25.090230  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:25.133258  165698 cri.go:89] found id: ""
	I0617 12:03:25.133285  165698 logs.go:276] 0 containers: []
	W0617 12:03:25.133294  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:25.133301  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:25.133366  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:25.177445  165698 cri.go:89] found id: ""
	I0617 12:03:25.177473  165698 logs.go:276] 0 containers: []
	W0617 12:03:25.177481  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:25.177490  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:25.177505  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:25.250685  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:25.250710  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:25.250727  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:25.335554  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:25.335586  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:25.377058  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:25.377093  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:25.431425  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:25.431471  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:27.945063  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:27.959396  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:27.959469  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:23.725126  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:26.224114  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:28.224895  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:23.840550  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:26.333142  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:28.334577  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:28.337133  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:30.834142  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:27.994554  165698 cri.go:89] found id: ""
	I0617 12:03:27.994582  165698 logs.go:276] 0 containers: []
	W0617 12:03:27.994591  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:27.994598  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:27.994660  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:28.030168  165698 cri.go:89] found id: ""
	I0617 12:03:28.030200  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.030208  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:28.030215  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:28.030263  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:28.066213  165698 cri.go:89] found id: ""
	I0617 12:03:28.066244  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.066255  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:28.066261  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:28.066322  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:28.102855  165698 cri.go:89] found id: ""
	I0617 12:03:28.102880  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.102888  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:28.102894  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:28.102942  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:28.138698  165698 cri.go:89] found id: ""
	I0617 12:03:28.138734  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.138748  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:28.138755  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:28.138815  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:28.173114  165698 cri.go:89] found id: ""
	I0617 12:03:28.173140  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.173148  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:28.173154  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:28.173213  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:28.208901  165698 cri.go:89] found id: ""
	I0617 12:03:28.208936  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.208947  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:28.208955  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:28.209016  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:28.244634  165698 cri.go:89] found id: ""
	I0617 12:03:28.244667  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.244678  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:28.244687  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:28.244699  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:28.300303  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:28.300336  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:28.314227  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:28.314272  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:28.394322  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:28.394350  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:28.394367  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:28.483381  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:28.483413  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:31.026433  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:31.040820  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:31.040888  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:31.086409  165698 cri.go:89] found id: ""
	I0617 12:03:31.086440  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.086453  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:31.086461  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:31.086548  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:31.122810  165698 cri.go:89] found id: ""
	I0617 12:03:31.122836  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.122843  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:31.122849  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:31.122910  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:31.157634  165698 cri.go:89] found id: ""
	I0617 12:03:31.157669  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.157680  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:31.157687  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:31.157750  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:31.191498  165698 cri.go:89] found id: ""
	I0617 12:03:31.191529  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.191541  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:31.191549  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:31.191619  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:31.225575  165698 cri.go:89] found id: ""
	I0617 12:03:31.225599  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.225609  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:31.225616  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:31.225670  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:31.269780  165698 cri.go:89] found id: ""
	I0617 12:03:31.269810  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.269819  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:31.269825  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:31.269874  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:31.307689  165698 cri.go:89] found id: ""
	I0617 12:03:31.307717  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.307726  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:31.307733  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:31.307789  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:31.344160  165698 cri.go:89] found id: ""
	I0617 12:03:31.344190  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.344200  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:31.344210  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:31.344223  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:31.397627  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:31.397667  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:31.411316  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:31.411347  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:31.486258  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:31.486280  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:31.486297  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:31.568067  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:31.568106  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:30.725183  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:33.224294  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:30.834377  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:33.333070  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:33.335067  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:35.335626  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:37.336117  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:34.111424  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:34.127178  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:34.127255  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:34.165900  165698 cri.go:89] found id: ""
	I0617 12:03:34.165936  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.165947  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:34.165955  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:34.166042  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:34.203556  165698 cri.go:89] found id: ""
	I0617 12:03:34.203588  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.203597  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:34.203606  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:34.203659  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:34.243418  165698 cri.go:89] found id: ""
	I0617 12:03:34.243478  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.243490  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:34.243499  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:34.243661  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:34.281542  165698 cri.go:89] found id: ""
	I0617 12:03:34.281569  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.281577  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:34.281582  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:34.281635  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:34.316304  165698 cri.go:89] found id: ""
	I0617 12:03:34.316333  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.316341  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:34.316347  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:34.316403  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:34.357416  165698 cri.go:89] found id: ""
	I0617 12:03:34.357455  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.357467  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:34.357476  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:34.357547  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:34.392069  165698 cri.go:89] found id: ""
	I0617 12:03:34.392101  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.392112  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:34.392120  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:34.392185  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:34.427203  165698 cri.go:89] found id: ""
	I0617 12:03:34.427235  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.427247  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:34.427258  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:34.427317  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:34.441346  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:34.441375  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:34.519306  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:34.519331  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:34.519349  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:34.598802  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:34.598843  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:34.637521  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:34.637554  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:37.191259  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:37.205882  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:37.205947  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:37.242175  165698 cri.go:89] found id: ""
	I0617 12:03:37.242202  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.242209  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:37.242215  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:37.242278  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:37.278004  165698 cri.go:89] found id: ""
	I0617 12:03:37.278029  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.278037  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:37.278043  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:37.278091  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:37.322148  165698 cri.go:89] found id: ""
	I0617 12:03:37.322179  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.322190  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:37.322198  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:37.322259  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:37.358612  165698 cri.go:89] found id: ""
	I0617 12:03:37.358638  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.358649  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:37.358657  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:37.358718  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:37.393070  165698 cri.go:89] found id: ""
	I0617 12:03:37.393104  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.393115  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:37.393123  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:37.393187  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:37.429420  165698 cri.go:89] found id: ""
	I0617 12:03:37.429452  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.429465  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:37.429475  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:37.429541  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:37.464485  165698 cri.go:89] found id: ""
	I0617 12:03:37.464509  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.464518  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:37.464523  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:37.464584  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:37.501283  165698 cri.go:89] found id: ""
	I0617 12:03:37.501308  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.501316  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:37.501326  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:37.501338  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:37.552848  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:37.552889  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:37.566715  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:37.566746  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:37.643560  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:37.643584  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:37.643601  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:37.722895  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:37.722935  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:35.225442  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:37.225962  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:35.836693  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:38.332297  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:39.834655  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:42.333686  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:40.268199  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:40.281832  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:40.281905  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:40.317094  165698 cri.go:89] found id: ""
	I0617 12:03:40.317137  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.317150  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:40.317159  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:40.317229  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:40.355786  165698 cri.go:89] found id: ""
	I0617 12:03:40.355819  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.355829  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:40.355836  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:40.355903  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:40.394282  165698 cri.go:89] found id: ""
	I0617 12:03:40.394312  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.394323  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:40.394332  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:40.394388  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:40.433773  165698 cri.go:89] found id: ""
	I0617 12:03:40.433806  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.433817  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:40.433825  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:40.433875  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:40.469937  165698 cri.go:89] found id: ""
	I0617 12:03:40.469973  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.469985  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:40.469998  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:40.470067  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:40.503565  165698 cri.go:89] found id: ""
	I0617 12:03:40.503590  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.503599  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:40.503605  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:40.503654  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:40.538349  165698 cri.go:89] found id: ""
	I0617 12:03:40.538383  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.538394  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:40.538402  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:40.538461  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:40.576036  165698 cri.go:89] found id: ""
	I0617 12:03:40.576066  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.576075  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:40.576085  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:40.576100  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:40.617804  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:40.617833  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:40.668126  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:40.668162  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:40.682618  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:40.682655  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:40.759597  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:40.759619  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:40.759638  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:39.725534  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:42.223320  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:40.336855  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:42.832597  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:44.334430  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:46.835809  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:43.343404  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:43.357886  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:43.357953  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:43.398262  165698 cri.go:89] found id: ""
	I0617 12:03:43.398290  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.398301  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:43.398310  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:43.398370  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:43.432241  165698 cri.go:89] found id: ""
	I0617 12:03:43.432272  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.432280  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:43.432289  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:43.432348  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:43.466210  165698 cri.go:89] found id: ""
	I0617 12:03:43.466234  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.466241  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:43.466247  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:43.466294  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:43.501677  165698 cri.go:89] found id: ""
	I0617 12:03:43.501711  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.501723  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:43.501731  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:43.501793  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:43.541826  165698 cri.go:89] found id: ""
	I0617 12:03:43.541860  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.541870  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:43.541876  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:43.541941  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:43.576940  165698 cri.go:89] found id: ""
	I0617 12:03:43.576962  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.576970  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:43.576975  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:43.577022  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:43.612592  165698 cri.go:89] found id: ""
	I0617 12:03:43.612627  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.612635  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:43.612643  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:43.612694  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:43.647141  165698 cri.go:89] found id: ""
	I0617 12:03:43.647176  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.647188  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:43.647202  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:43.647220  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:43.698248  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:43.698283  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:43.711686  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:43.711714  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:43.787077  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:43.787101  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:43.787115  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:43.861417  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:43.861455  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:46.402594  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:46.417108  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:46.417185  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:46.453910  165698 cri.go:89] found id: ""
	I0617 12:03:46.453941  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.453952  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:46.453960  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:46.454020  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:46.487239  165698 cri.go:89] found id: ""
	I0617 12:03:46.487268  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.487280  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:46.487288  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:46.487353  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:46.521824  165698 cri.go:89] found id: ""
	I0617 12:03:46.521850  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.521859  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:46.521866  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:46.521929  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:46.557247  165698 cri.go:89] found id: ""
	I0617 12:03:46.557274  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.557282  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:46.557289  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:46.557350  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:46.600354  165698 cri.go:89] found id: ""
	I0617 12:03:46.600383  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.600393  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:46.600402  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:46.600477  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:46.638153  165698 cri.go:89] found id: ""
	I0617 12:03:46.638180  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.638189  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:46.638197  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:46.638255  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:46.672636  165698 cri.go:89] found id: ""
	I0617 12:03:46.672661  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.672669  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:46.672675  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:46.672721  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:46.706431  165698 cri.go:89] found id: ""
	I0617 12:03:46.706468  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.706481  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:46.706493  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:46.706509  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:46.720796  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:46.720842  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:46.801343  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:46.801365  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:46.801379  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:46.883651  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:46.883696  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:46.928594  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:46.928630  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:44.224037  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:46.224076  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:48.224472  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:45.332811  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:47.832461  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:49.334678  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:51.833994  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:49.480413  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:49.495558  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:49.495656  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:49.533281  165698 cri.go:89] found id: ""
	I0617 12:03:49.533313  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.533323  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:49.533330  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:49.533396  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:49.573430  165698 cri.go:89] found id: ""
	I0617 12:03:49.573457  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.573465  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:49.573472  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:49.573532  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:49.608669  165698 cri.go:89] found id: ""
	I0617 12:03:49.608697  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.608705  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:49.608711  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:49.608767  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:49.643411  165698 cri.go:89] found id: ""
	I0617 12:03:49.643449  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.643481  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:49.643490  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:49.643557  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:49.680039  165698 cri.go:89] found id: ""
	I0617 12:03:49.680071  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.680082  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:49.680090  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:49.680148  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:49.717169  165698 cri.go:89] found id: ""
	I0617 12:03:49.717195  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.717203  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:49.717209  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:49.717262  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:49.754585  165698 cri.go:89] found id: ""
	I0617 12:03:49.754615  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.754625  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:49.754633  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:49.754697  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:49.796040  165698 cri.go:89] found id: ""
	I0617 12:03:49.796074  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.796085  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:49.796097  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:49.796112  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:49.873496  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:49.873530  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:49.873547  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:49.961883  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:49.961925  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:50.002975  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:50.003004  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:50.054185  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:50.054224  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:52.568557  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:52.584264  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:52.584337  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:52.622474  165698 cri.go:89] found id: ""
	I0617 12:03:52.622501  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.622509  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:52.622516  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:52.622566  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:52.661012  165698 cri.go:89] found id: ""
	I0617 12:03:52.661045  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.661057  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:52.661066  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:52.661133  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:52.700950  165698 cri.go:89] found id: ""
	I0617 12:03:52.700986  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.700998  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:52.701006  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:52.701075  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:52.735663  165698 cri.go:89] found id: ""
	I0617 12:03:52.735689  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.735696  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:52.735702  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:52.735768  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:52.776540  165698 cri.go:89] found id: ""
	I0617 12:03:52.776568  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.776580  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:52.776589  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:52.776642  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:52.812439  165698 cri.go:89] found id: ""
	I0617 12:03:52.812474  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.812493  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:52.812503  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:52.812567  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:52.849233  165698 cri.go:89] found id: ""
	I0617 12:03:52.849263  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.849273  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:52.849281  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:52.849343  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:52.885365  165698 cri.go:89] found id: ""
	I0617 12:03:52.885395  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.885406  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:52.885419  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:52.885434  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:52.941521  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:52.941553  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:52.955958  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:52.955997  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:03:50.224702  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:52.724247  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:50.332871  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:52.832386  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:53.834382  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:55.834813  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	W0617 12:03:53.029254  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:53.029278  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:53.029291  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:53.104391  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:53.104425  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:55.648578  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:55.662143  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:55.662205  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:55.697623  165698 cri.go:89] found id: ""
	I0617 12:03:55.697662  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.697674  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:55.697682  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:55.697751  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:55.734132  165698 cri.go:89] found id: ""
	I0617 12:03:55.734171  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.734184  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:55.734192  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:55.734265  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:55.774178  165698 cri.go:89] found id: ""
	I0617 12:03:55.774212  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.774222  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:55.774231  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:55.774296  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:55.816427  165698 cri.go:89] found id: ""
	I0617 12:03:55.816460  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.816471  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:55.816480  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:55.816546  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:55.860413  165698 cri.go:89] found id: ""
	I0617 12:03:55.860446  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.860457  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:55.860465  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:55.860532  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:55.897577  165698 cri.go:89] found id: ""
	I0617 12:03:55.897612  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.897622  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:55.897629  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:55.897682  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:55.934163  165698 cri.go:89] found id: ""
	I0617 12:03:55.934200  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.934212  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:55.934220  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:55.934291  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:55.972781  165698 cri.go:89] found id: ""
	I0617 12:03:55.972827  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.972840  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:55.972852  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:55.972867  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:56.027292  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:56.027332  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:56.042304  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:56.042336  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:56.115129  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:56.115159  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:56.115176  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:56.194161  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:56.194200  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:54.728169  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:57.225361  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:54.837170  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:57.333566  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:58.335846  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:00.833987  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:58.734681  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:58.748467  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:58.748534  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:58.786191  165698 cri.go:89] found id: ""
	I0617 12:03:58.786221  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.786232  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:58.786239  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:58.786302  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:58.822076  165698 cri.go:89] found id: ""
	I0617 12:03:58.822103  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.822125  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:58.822134  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:58.822199  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:58.858830  165698 cri.go:89] found id: ""
	I0617 12:03:58.858859  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.858867  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:58.858873  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:58.858927  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:58.898802  165698 cri.go:89] found id: ""
	I0617 12:03:58.898830  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.898838  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:58.898844  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:58.898891  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:58.933234  165698 cri.go:89] found id: ""
	I0617 12:03:58.933269  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.933281  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:58.933289  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:58.933355  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:58.973719  165698 cri.go:89] found id: ""
	I0617 12:03:58.973753  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.973766  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:58.973773  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:58.973847  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:59.010671  165698 cri.go:89] found id: ""
	I0617 12:03:59.010722  165698 logs.go:276] 0 containers: []
	W0617 12:03:59.010734  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:59.010741  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:59.010805  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:59.047318  165698 cri.go:89] found id: ""
	I0617 12:03:59.047347  165698 logs.go:276] 0 containers: []
	W0617 12:03:59.047359  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:59.047372  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:59.047389  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:59.097778  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:59.097815  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:59.111615  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:59.111646  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:59.193172  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:59.193195  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:59.193207  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:59.268147  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:59.268182  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:01.807585  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:01.821634  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:01.821694  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:01.857610  165698 cri.go:89] found id: ""
	I0617 12:04:01.857637  165698 logs.go:276] 0 containers: []
	W0617 12:04:01.857647  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:01.857654  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:01.857710  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:01.893229  165698 cri.go:89] found id: ""
	I0617 12:04:01.893253  165698 logs.go:276] 0 containers: []
	W0617 12:04:01.893261  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:01.893267  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:01.893324  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:01.926916  165698 cri.go:89] found id: ""
	I0617 12:04:01.926940  165698 logs.go:276] 0 containers: []
	W0617 12:04:01.926950  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:01.926958  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:01.927017  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:01.961913  165698 cri.go:89] found id: ""
	I0617 12:04:01.961946  165698 logs.go:276] 0 containers: []
	W0617 12:04:01.961957  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:01.961967  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:01.962045  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:01.997084  165698 cri.go:89] found id: ""
	I0617 12:04:01.997111  165698 logs.go:276] 0 containers: []
	W0617 12:04:01.997119  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:01.997125  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:01.997173  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:02.034640  165698 cri.go:89] found id: ""
	I0617 12:04:02.034666  165698 logs.go:276] 0 containers: []
	W0617 12:04:02.034674  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:02.034680  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:02.034744  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:02.085868  165698 cri.go:89] found id: ""
	I0617 12:04:02.085910  165698 logs.go:276] 0 containers: []
	W0617 12:04:02.085920  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:02.085928  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:02.085983  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:02.152460  165698 cri.go:89] found id: ""
	I0617 12:04:02.152487  165698 logs.go:276] 0 containers: []
	W0617 12:04:02.152499  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:02.152513  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:02.152528  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:02.205297  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:02.205344  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:02.222312  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:02.222348  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:02.299934  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:02.299959  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:02.299977  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:02.384008  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:02.384056  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:59.724730  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:02.227215  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:59.833621  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:01.833799  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:02.834076  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:04.836418  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:07.335024  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
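	(Editorial note, sketch only.) The interleaved pod_ready lines above come from the other test processes polling metrics-server pods in kube-system, each of which keeps reporting Ready=False. A rough hand-run equivalent of that readiness check is a blocking `kubectl wait`, driven here from Go; the pod name is copied from the trace, and the sketch assumes kubectl and a kubeconfig for the cluster are available locally.

	package main

	// Hedged sketch: blocks until the named metrics-server pod (name taken
	// from the log lines above) reports the Ready condition, or times out.
	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("kubectl", "wait",
			"--namespace", "kube-system",
			"--for=condition=Ready",
			"pod/metrics-server-569cc877fc-n2svp", // pod name as it appears in the trace
			"--timeout=120s")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Println("pod did not become Ready within the timeout:", err)
			os.Exit(1)
		}
	}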
	I0617 12:04:04.926889  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:04.940643  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:04.940722  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:04.976246  165698 cri.go:89] found id: ""
	I0617 12:04:04.976275  165698 logs.go:276] 0 containers: []
	W0617 12:04:04.976283  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:04.976289  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:04.976338  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:05.015864  165698 cri.go:89] found id: ""
	I0617 12:04:05.015900  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.015913  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:05.015921  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:05.015985  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:05.054051  165698 cri.go:89] found id: ""
	I0617 12:04:05.054086  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.054099  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:05.054112  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:05.054177  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:05.090320  165698 cri.go:89] found id: ""
	I0617 12:04:05.090358  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.090371  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:05.090380  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:05.090438  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:05.126963  165698 cri.go:89] found id: ""
	I0617 12:04:05.126998  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.127008  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:05.127015  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:05.127087  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:05.162565  165698 cri.go:89] found id: ""
	I0617 12:04:05.162600  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.162611  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:05.162620  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:05.162674  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:05.195706  165698 cri.go:89] found id: ""
	I0617 12:04:05.195743  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.195752  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:05.195758  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:05.195826  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:05.236961  165698 cri.go:89] found id: ""
	I0617 12:04:05.236995  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.237006  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:05.237016  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:05.237034  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:05.252754  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:05.252783  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:05.327832  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:05.327870  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:05.327886  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:05.410220  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:05.410271  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:05.451291  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:05.451324  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:04.725172  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:07.223627  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:04.332177  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:06.831712  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:09.834563  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:12.334095  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:08.003058  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:08.016611  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:08.016670  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:08.052947  165698 cri.go:89] found id: ""
	I0617 12:04:08.052984  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.052996  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:08.053004  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:08.053057  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:08.086668  165698 cri.go:89] found id: ""
	I0617 12:04:08.086695  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.086704  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:08.086711  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:08.086773  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:08.127708  165698 cri.go:89] found id: ""
	I0617 12:04:08.127738  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.127746  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:08.127752  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:08.127814  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:08.162930  165698 cri.go:89] found id: ""
	I0617 12:04:08.162959  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.162966  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:08.162973  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:08.163026  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:08.196757  165698 cri.go:89] found id: ""
	I0617 12:04:08.196782  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.196791  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:08.196797  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:08.196851  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:08.229976  165698 cri.go:89] found id: ""
	I0617 12:04:08.230006  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.230016  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:08.230022  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:08.230083  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:08.265969  165698 cri.go:89] found id: ""
	I0617 12:04:08.266000  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.266007  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:08.266013  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:08.266071  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:08.299690  165698 cri.go:89] found id: ""
	I0617 12:04:08.299717  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.299728  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:08.299741  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:08.299761  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:08.353399  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:08.353429  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:08.366713  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:08.366739  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:08.442727  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:08.442768  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:08.442786  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:08.527832  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:08.527875  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
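	(Editorial note, sketch only.) Every `describe nodes` attempt in these cycles fails with "The connection to the server localhost:8443 was refused", which is consistent with the earlier probes finding no kube-apiserver container. A quick connectivity check of that endpoint, run on the node itself, is sketched below; the address and port are taken from the error text and nothing else is assumed about the cluster.

	package main

	// Hedged sketch: attempts a plain TCP dial to localhost:8443 to confirm
	// whether anything is listening where the apiserver is expected.
	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("localhost:8443 is not accepting connections:", err)
			return
		}
		defer conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}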
	I0617 12:04:11.073616  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:11.087085  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:11.087172  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:11.121706  165698 cri.go:89] found id: ""
	I0617 12:04:11.121745  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.121756  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:11.121765  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:11.121839  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:11.157601  165698 cri.go:89] found id: ""
	I0617 12:04:11.157637  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.157648  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:11.157657  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:11.157719  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:11.191929  165698 cri.go:89] found id: ""
	I0617 12:04:11.191963  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.191975  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:11.191983  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:11.192045  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:11.228391  165698 cri.go:89] found id: ""
	I0617 12:04:11.228416  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.228429  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:11.228437  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:11.228497  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:11.261880  165698 cri.go:89] found id: ""
	I0617 12:04:11.261911  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.261924  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:11.261932  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:11.261998  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:11.294615  165698 cri.go:89] found id: ""
	I0617 12:04:11.294663  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.294676  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:11.294684  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:11.294745  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:11.332813  165698 cri.go:89] found id: ""
	I0617 12:04:11.332840  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.332847  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:11.332854  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:11.332911  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:11.369032  165698 cri.go:89] found id: ""
	I0617 12:04:11.369060  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.369068  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:11.369078  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:11.369090  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:11.422522  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:11.422555  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:11.436961  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:11.436990  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:11.508679  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:11.508700  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:11.508713  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:11.586574  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:11.586610  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:09.224727  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:11.225763  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:09.330868  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:11.332256  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:14.335171  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:16.836514  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:14.127034  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:14.143228  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:14.143306  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:14.178368  165698 cri.go:89] found id: ""
	I0617 12:04:14.178396  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.178405  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:14.178410  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:14.178459  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:14.209971  165698 cri.go:89] found id: ""
	I0617 12:04:14.210001  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.210010  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:14.210015  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:14.210065  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:14.244888  165698 cri.go:89] found id: ""
	I0617 12:04:14.244922  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.244933  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:14.244940  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:14.244999  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:14.277875  165698 cri.go:89] found id: ""
	I0617 12:04:14.277904  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.277914  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:14.277922  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:14.277983  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:14.312698  165698 cri.go:89] found id: ""
	I0617 12:04:14.312724  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.312733  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:14.312739  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:14.312789  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:14.350952  165698 cri.go:89] found id: ""
	I0617 12:04:14.350977  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.350987  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:14.350993  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:14.351056  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:14.389211  165698 cri.go:89] found id: ""
	I0617 12:04:14.389235  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.389243  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:14.389250  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:14.389297  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:14.426171  165698 cri.go:89] found id: ""
	I0617 12:04:14.426200  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.426211  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:14.426224  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:14.426240  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:14.500403  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:14.500430  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:14.500446  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:14.588041  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:14.588078  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:14.631948  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:14.631987  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:14.681859  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:14.681895  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:17.198754  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:17.212612  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:17.212679  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:17.251011  165698 cri.go:89] found id: ""
	I0617 12:04:17.251041  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.251056  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:17.251065  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:17.251128  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:17.282964  165698 cri.go:89] found id: ""
	I0617 12:04:17.282989  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.282998  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:17.283003  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:17.283060  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:17.315570  165698 cri.go:89] found id: ""
	I0617 12:04:17.315601  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.315622  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:17.315630  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:17.315691  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:17.351186  165698 cri.go:89] found id: ""
	I0617 12:04:17.351212  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.351221  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:17.351228  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:17.351287  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:17.385609  165698 cri.go:89] found id: ""
	I0617 12:04:17.385653  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.385665  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:17.385673  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:17.385741  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:17.423890  165698 cri.go:89] found id: ""
	I0617 12:04:17.423923  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.423935  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:17.423944  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:17.424000  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:17.459543  165698 cri.go:89] found id: ""
	I0617 12:04:17.459575  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.459584  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:17.459592  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:17.459660  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:17.495554  165698 cri.go:89] found id: ""
	I0617 12:04:17.495584  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.495594  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:17.495606  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:17.495632  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:17.547835  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:17.547881  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:17.562391  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:17.562422  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:17.635335  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:17.635368  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:17.635387  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:17.708946  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:17.708988  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:13.724618  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:16.224689  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:13.832533  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:15.833210  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:17.841693  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:19.336775  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:21.835598  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:20.249833  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:20.266234  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:20.266301  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:20.307380  165698 cri.go:89] found id: ""
	I0617 12:04:20.307415  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.307424  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:20.307431  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:20.307508  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:20.347193  165698 cri.go:89] found id: ""
	I0617 12:04:20.347225  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.347235  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:20.347243  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:20.347311  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:20.382673  165698 cri.go:89] found id: ""
	I0617 12:04:20.382711  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.382724  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:20.382732  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:20.382800  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:20.419542  165698 cri.go:89] found id: ""
	I0617 12:04:20.419573  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.419582  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:20.419588  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:20.419652  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:20.454586  165698 cri.go:89] found id: ""
	I0617 12:04:20.454618  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.454629  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:20.454636  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:20.454708  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:20.501094  165698 cri.go:89] found id: ""
	I0617 12:04:20.501123  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.501131  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:20.501137  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:20.501190  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:20.537472  165698 cri.go:89] found id: ""
	I0617 12:04:20.537512  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.537524  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:20.537532  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:20.537597  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:20.571477  165698 cri.go:89] found id: ""
	I0617 12:04:20.571509  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.571519  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:20.571532  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:20.571550  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:20.611503  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:20.611540  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:20.663868  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:20.663905  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:20.677679  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:20.677704  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:20.753645  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:20.753663  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:20.753689  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:18.725428  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:21.224314  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:20.333214  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:22.333294  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:24.333835  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:26.335344  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:23.335535  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:23.349700  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:23.349766  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:23.384327  165698 cri.go:89] found id: ""
	I0617 12:04:23.384351  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.384358  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:23.384364  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:23.384417  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:23.427145  165698 cri.go:89] found id: ""
	I0617 12:04:23.427179  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.427190  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:23.427197  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:23.427254  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:23.461484  165698 cri.go:89] found id: ""
	I0617 12:04:23.461511  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.461522  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:23.461532  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:23.461600  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:23.501292  165698 cri.go:89] found id: ""
	I0617 12:04:23.501324  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.501334  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:23.501342  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:23.501407  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:23.537605  165698 cri.go:89] found id: ""
	I0617 12:04:23.537639  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.537649  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:23.537654  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:23.537727  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:23.576580  165698 cri.go:89] found id: ""
	I0617 12:04:23.576608  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.576616  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:23.576623  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:23.576685  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:23.613124  165698 cri.go:89] found id: ""
	I0617 12:04:23.613153  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.613161  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:23.613167  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:23.613216  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:23.648662  165698 cri.go:89] found id: ""
	I0617 12:04:23.648688  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.648695  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:23.648705  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:23.648717  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:23.661737  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:23.661762  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:23.732512  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:23.732531  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:23.732547  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:23.810165  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:23.810207  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:23.855099  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:23.855136  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:26.406038  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:26.422243  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:26.422323  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:26.460959  165698 cri.go:89] found id: ""
	I0617 12:04:26.460984  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.460994  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:26.461002  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:26.461078  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:26.498324  165698 cri.go:89] found id: ""
	I0617 12:04:26.498350  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.498362  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:26.498370  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:26.498435  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:26.535299  165698 cri.go:89] found id: ""
	I0617 12:04:26.535335  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.535346  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:26.535354  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:26.535417  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:26.574623  165698 cri.go:89] found id: ""
	I0617 12:04:26.574657  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.574668  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:26.574677  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:26.574738  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:26.611576  165698 cri.go:89] found id: ""
	I0617 12:04:26.611607  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.611615  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:26.611621  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:26.611672  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:26.645664  165698 cri.go:89] found id: ""
	I0617 12:04:26.645692  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.645700  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:26.645706  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:26.645755  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:26.679442  165698 cri.go:89] found id: ""
	I0617 12:04:26.679477  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.679488  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:26.679495  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:26.679544  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:26.713512  165698 cri.go:89] found id: ""
	I0617 12:04:26.713543  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.713551  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:26.713563  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:26.713584  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:26.770823  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:26.770853  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:26.784829  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:26.784858  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:26.868457  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:26.868480  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:26.868498  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:26.948522  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:26.948561  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:23.725626  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:26.224874  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:24.830639  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:26.836648  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:28.835682  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:31.335891  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:29.490891  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:29.504202  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:29.504273  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:29.544091  165698 cri.go:89] found id: ""
	I0617 12:04:29.544125  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.544137  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:29.544145  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:29.544203  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:29.581645  165698 cri.go:89] found id: ""
	I0617 12:04:29.581670  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.581679  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:29.581685  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:29.581736  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:29.621410  165698 cri.go:89] found id: ""
	I0617 12:04:29.621437  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.621447  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:29.621455  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:29.621522  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:29.659619  165698 cri.go:89] found id: ""
	I0617 12:04:29.659645  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.659654  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:29.659659  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:29.659718  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:29.698822  165698 cri.go:89] found id: ""
	I0617 12:04:29.698851  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.698859  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:29.698865  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:29.698957  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:29.741648  165698 cri.go:89] found id: ""
	I0617 12:04:29.741673  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.741680  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:29.741686  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:29.741752  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:29.777908  165698 cri.go:89] found id: ""
	I0617 12:04:29.777933  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.777941  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:29.777947  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:29.778013  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:29.812290  165698 cri.go:89] found id: ""
	I0617 12:04:29.812318  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.812328  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:29.812340  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:29.812357  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:29.857527  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:29.857552  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:29.916734  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:29.916776  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:29.930988  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:29.931013  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:30.006055  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:30.006080  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:30.006098  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:32.586549  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:32.600139  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:32.600262  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:32.641527  165698 cri.go:89] found id: ""
	I0617 12:04:32.641554  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.641570  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:32.641579  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:32.641635  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:32.687945  165698 cri.go:89] found id: ""
	I0617 12:04:32.687972  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.687981  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:32.687996  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:32.688068  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:32.725586  165698 cri.go:89] found id: ""
	I0617 12:04:32.725618  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.725629  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:32.725639  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:32.725696  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:32.764042  165698 cri.go:89] found id: ""
	I0617 12:04:32.764090  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.764107  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:32.764115  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:32.764183  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:32.800132  165698 cri.go:89] found id: ""
	I0617 12:04:32.800167  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.800180  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:32.800189  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:32.800256  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:32.840313  165698 cri.go:89] found id: ""
	I0617 12:04:32.840348  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.840359  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:32.840367  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:32.840434  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:32.878041  165698 cri.go:89] found id: ""
	I0617 12:04:32.878067  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.878076  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:32.878082  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:32.878134  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:32.913904  165698 cri.go:89] found id: ""
	I0617 12:04:32.913939  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.913950  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:32.913961  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:32.913974  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:04:28.725534  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:31.224885  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:29.330706  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:31.331989  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:33.337062  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:35.834807  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	W0617 12:04:32.987900  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:32.987929  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:32.987947  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:33.060919  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:33.060961  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:33.102602  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:33.102629  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:33.154112  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:33.154161  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:35.669336  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:35.682819  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:35.682907  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:35.717542  165698 cri.go:89] found id: ""
	I0617 12:04:35.717571  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.717579  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:35.717586  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:35.717646  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:35.754454  165698 cri.go:89] found id: ""
	I0617 12:04:35.754483  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.754495  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:35.754503  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:35.754566  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:35.791198  165698 cri.go:89] found id: ""
	I0617 12:04:35.791227  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.791237  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:35.791246  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:35.791309  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:35.826858  165698 cri.go:89] found id: ""
	I0617 12:04:35.826892  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.826903  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:35.826911  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:35.826974  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:35.866817  165698 cri.go:89] found id: ""
	I0617 12:04:35.866845  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.866853  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:35.866861  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:35.866909  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:35.918340  165698 cri.go:89] found id: ""
	I0617 12:04:35.918377  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.918388  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:35.918397  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:35.918466  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:35.960734  165698 cri.go:89] found id: ""
	I0617 12:04:35.960764  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.960774  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:35.960779  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:35.960841  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:36.002392  165698 cri.go:89] found id: ""
	I0617 12:04:36.002426  165698 logs.go:276] 0 containers: []
	W0617 12:04:36.002437  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:36.002449  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:36.002465  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:36.055130  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:36.055163  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:36.069181  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:36.069209  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:36.146078  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:36.146105  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:36.146120  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:36.223763  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:36.223797  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:33.723759  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:35.725954  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:38.225200  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:33.833990  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:36.332152  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:38.332570  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:37.836765  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:40.334594  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:42.336958  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:38.767375  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:38.781301  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:38.781357  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:38.821364  165698 cri.go:89] found id: ""
	I0617 12:04:38.821390  165698 logs.go:276] 0 containers: []
	W0617 12:04:38.821400  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:38.821409  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:38.821472  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:38.860727  165698 cri.go:89] found id: ""
	I0617 12:04:38.860784  165698 logs.go:276] 0 containers: []
	W0617 12:04:38.860796  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:38.860803  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:38.860868  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:38.902932  165698 cri.go:89] found id: ""
	I0617 12:04:38.902968  165698 logs.go:276] 0 containers: []
	W0617 12:04:38.902992  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:38.902999  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:38.903088  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:38.940531  165698 cri.go:89] found id: ""
	I0617 12:04:38.940564  165698 logs.go:276] 0 containers: []
	W0617 12:04:38.940576  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:38.940584  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:38.940649  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:38.975751  165698 cri.go:89] found id: ""
	I0617 12:04:38.975792  165698 logs.go:276] 0 containers: []
	W0617 12:04:38.975827  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:38.975835  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:38.975908  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:39.011156  165698 cri.go:89] found id: ""
	I0617 12:04:39.011196  165698 logs.go:276] 0 containers: []
	W0617 12:04:39.011206  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:39.011213  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:39.011269  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:39.049266  165698 cri.go:89] found id: ""
	I0617 12:04:39.049301  165698 logs.go:276] 0 containers: []
	W0617 12:04:39.049312  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:39.049320  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:39.049373  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:39.089392  165698 cri.go:89] found id: ""
	I0617 12:04:39.089425  165698 logs.go:276] 0 containers: []
	W0617 12:04:39.089434  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:39.089444  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:39.089459  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:39.166585  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:39.166607  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:39.166619  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:39.241910  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:39.241950  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:39.287751  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:39.287782  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:39.342226  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:39.342259  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:41.857327  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:41.871379  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:41.871446  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:41.907435  165698 cri.go:89] found id: ""
	I0617 12:04:41.907472  165698 logs.go:276] 0 containers: []
	W0617 12:04:41.907483  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:41.907492  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:41.907542  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:41.941684  165698 cri.go:89] found id: ""
	I0617 12:04:41.941725  165698 logs.go:276] 0 containers: []
	W0617 12:04:41.941737  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:41.941745  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:41.941819  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:41.977359  165698 cri.go:89] found id: ""
	I0617 12:04:41.977395  165698 logs.go:276] 0 containers: []
	W0617 12:04:41.977407  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:41.977415  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:41.977478  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:42.015689  165698 cri.go:89] found id: ""
	I0617 12:04:42.015723  165698 logs.go:276] 0 containers: []
	W0617 12:04:42.015734  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:42.015742  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:42.015803  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:42.050600  165698 cri.go:89] found id: ""
	I0617 12:04:42.050626  165698 logs.go:276] 0 containers: []
	W0617 12:04:42.050637  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:42.050645  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:42.050707  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:42.088174  165698 cri.go:89] found id: ""
	I0617 12:04:42.088201  165698 logs.go:276] 0 containers: []
	W0617 12:04:42.088212  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:42.088221  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:42.088290  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:42.127335  165698 cri.go:89] found id: ""
	I0617 12:04:42.127364  165698 logs.go:276] 0 containers: []
	W0617 12:04:42.127375  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:42.127384  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:42.127443  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:42.163435  165698 cri.go:89] found id: ""
	I0617 12:04:42.163481  165698 logs.go:276] 0 containers: []
	W0617 12:04:42.163492  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:42.163505  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:42.163527  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:42.233233  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:42.233262  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:42.233280  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:42.311695  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:42.311741  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:42.378134  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:42.378163  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:42.439614  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:42.439647  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:40.726373  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:43.225144  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:40.336291  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:42.831220  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:44.835811  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:47.335772  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:44.953738  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:44.967822  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:44.967884  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:45.004583  165698 cri.go:89] found id: ""
	I0617 12:04:45.004687  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.004732  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:45.004741  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:45.004797  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:45.038912  165698 cri.go:89] found id: ""
	I0617 12:04:45.038939  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.038949  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:45.038957  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:45.039026  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:45.073594  165698 cri.go:89] found id: ""
	I0617 12:04:45.073620  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.073628  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:45.073634  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:45.073684  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:45.108225  165698 cri.go:89] found id: ""
	I0617 12:04:45.108253  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.108261  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:45.108267  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:45.108317  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:45.139522  165698 cri.go:89] found id: ""
	I0617 12:04:45.139545  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.139553  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:45.139559  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:45.139609  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:45.173705  165698 cri.go:89] found id: ""
	I0617 12:04:45.173735  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.173745  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:45.173752  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:45.173813  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:45.206448  165698 cri.go:89] found id: ""
	I0617 12:04:45.206477  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.206486  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:45.206493  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:45.206551  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:45.242925  165698 cri.go:89] found id: ""
	I0617 12:04:45.242952  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.242962  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:45.242981  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:45.242998  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:45.294669  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:45.294700  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:45.307642  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:45.307670  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:45.381764  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:45.381788  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:45.381805  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:45.469022  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:45.469056  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:45.724236  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:48.225656  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:45.332888  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:47.832326  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:49.337260  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:51.338718  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:48.014169  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:48.029895  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:48.029984  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:48.086421  165698 cri.go:89] found id: ""
	I0617 12:04:48.086456  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.086468  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:48.086477  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:48.086554  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:48.135673  165698 cri.go:89] found id: ""
	I0617 12:04:48.135705  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.135713  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:48.135733  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:48.135808  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:48.184330  165698 cri.go:89] found id: ""
	I0617 12:04:48.184353  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.184362  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:48.184368  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:48.184418  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:48.221064  165698 cri.go:89] found id: ""
	I0617 12:04:48.221095  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.221103  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:48.221112  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:48.221175  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:48.264464  165698 cri.go:89] found id: ""
	I0617 12:04:48.264495  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.264502  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:48.264508  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:48.264561  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:48.302144  165698 cri.go:89] found id: ""
	I0617 12:04:48.302180  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.302191  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:48.302199  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:48.302263  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:48.345431  165698 cri.go:89] found id: ""
	I0617 12:04:48.345458  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.345465  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:48.345472  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:48.345539  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:48.383390  165698 cri.go:89] found id: ""
	I0617 12:04:48.383423  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.383434  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:48.383447  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:48.383478  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:48.422328  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:48.422356  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:48.473698  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:48.473735  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:48.488399  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:48.488429  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:48.566851  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:48.566871  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:48.566884  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:51.149626  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:51.162855  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:51.162926  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:51.199056  165698 cri.go:89] found id: ""
	I0617 12:04:51.199091  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.199102  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:51.199109  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:51.199172  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:51.238773  165698 cri.go:89] found id: ""
	I0617 12:04:51.238810  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.238821  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:51.238827  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:51.238883  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:51.279049  165698 cri.go:89] found id: ""
	I0617 12:04:51.279079  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.279092  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:51.279100  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:51.279166  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:51.324923  165698 cri.go:89] found id: ""
	I0617 12:04:51.324957  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.324969  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:51.324976  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:51.325028  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:51.363019  165698 cri.go:89] found id: ""
	I0617 12:04:51.363055  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.363068  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:51.363077  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:51.363142  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:51.399620  165698 cri.go:89] found id: ""
	I0617 12:04:51.399652  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.399661  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:51.399675  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:51.399758  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:51.434789  165698 cri.go:89] found id: ""
	I0617 12:04:51.434824  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.434836  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:51.434844  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:51.434910  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:51.470113  165698 cri.go:89] found id: ""
	I0617 12:04:51.470140  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.470149  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:51.470160  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:51.470176  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:51.526138  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:51.526173  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:51.539451  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:51.539491  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:51.613418  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:51.613437  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:51.613450  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:51.691971  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:51.692010  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:50.724405  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:52.725426  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:50.332363  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:52.332932  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:53.834955  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:56.334584  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:54.234514  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:54.249636  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:54.249724  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:54.283252  165698 cri.go:89] found id: ""
	I0617 12:04:54.283287  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.283300  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:54.283307  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:54.283367  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:54.319153  165698 cri.go:89] found id: ""
	I0617 12:04:54.319207  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.319218  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:54.319226  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:54.319290  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:54.361450  165698 cri.go:89] found id: ""
	I0617 12:04:54.361480  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.361491  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:54.361498  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:54.361562  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:54.397806  165698 cri.go:89] found id: ""
	I0617 12:04:54.397834  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.397843  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:54.397849  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:54.397899  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:54.447119  165698 cri.go:89] found id: ""
	I0617 12:04:54.447147  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.447155  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:54.447161  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:54.447211  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:54.489717  165698 cri.go:89] found id: ""
	I0617 12:04:54.489751  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.489760  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:54.489766  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:54.489830  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:54.532840  165698 cri.go:89] found id: ""
	I0617 12:04:54.532943  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.532975  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:54.532989  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:54.533100  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:54.568227  165698 cri.go:89] found id: ""
	I0617 12:04:54.568369  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.568391  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:54.568403  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:54.568420  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:54.583140  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:54.583174  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:54.661258  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:54.661281  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:54.661296  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:54.750472  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:54.750511  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:54.797438  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:54.797467  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:57.349800  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:57.364820  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:57.364879  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:57.405065  165698 cri.go:89] found id: ""
	I0617 12:04:57.405093  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.405101  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:57.405106  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:57.405153  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:57.445707  165698 cri.go:89] found id: ""
	I0617 12:04:57.445741  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.445752  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:57.445760  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:57.445829  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:57.486911  165698 cri.go:89] found id: ""
	I0617 12:04:57.486940  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.486948  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:57.486955  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:57.487014  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:57.521218  165698 cri.go:89] found id: ""
	I0617 12:04:57.521254  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.521266  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:57.521274  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:57.521342  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:57.555762  165698 cri.go:89] found id: ""
	I0617 12:04:57.555794  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.555803  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:57.555808  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:57.555863  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:57.591914  165698 cri.go:89] found id: ""
	I0617 12:04:57.591945  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.591956  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:57.591971  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:57.592037  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:57.626435  165698 cri.go:89] found id: ""
	I0617 12:04:57.626463  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.626471  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:57.626477  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:57.626527  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:57.665088  165698 cri.go:89] found id: ""
	I0617 12:04:57.665118  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.665126  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:57.665137  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:57.665152  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:57.716284  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:57.716316  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:57.730179  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:57.730204  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:57.808904  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:57.808933  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:57.808954  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:57.894499  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:57.894530  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:55.224507  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:57.224583  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:54.831112  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:56.832477  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:58.334640  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:00.335137  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
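(The pod_ready.go lines above poll the Ready condition of the metrics-server pods. A minimal sketch of checking the same condition directly with kubectl, using one of the pod names from the log, e.g. metrics-server-569cc877fc-dmhfs; it assumes the matching kubeconfig/context for that profile is active.)

    # Inspect the Ready condition that pod_ready.go is waiting on (sketch)
    kubectl -n kube-system get pod metrics-server-569cc877fc-dmhfs \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'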
	I0617 12:05:00.435957  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:00.450812  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:00.450890  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:00.491404  165698 cri.go:89] found id: ""
	I0617 12:05:00.491432  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.491440  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:00.491446  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:00.491523  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:00.526711  165698 cri.go:89] found id: ""
	I0617 12:05:00.526739  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.526747  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:00.526753  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:00.526817  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:00.562202  165698 cri.go:89] found id: ""
	I0617 12:05:00.562236  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.562246  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:00.562255  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:00.562323  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:00.602754  165698 cri.go:89] found id: ""
	I0617 12:05:00.602790  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.602802  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:00.602811  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:00.602877  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:00.645666  165698 cri.go:89] found id: ""
	I0617 12:05:00.645703  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.645715  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:00.645723  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:00.645788  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:00.684649  165698 cri.go:89] found id: ""
	I0617 12:05:00.684685  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.684694  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:00.684701  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:00.684784  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:00.727139  165698 cri.go:89] found id: ""
	I0617 12:05:00.727160  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.727167  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:00.727173  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:00.727238  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:00.764401  165698 cri.go:89] found id: ""
	I0617 12:05:00.764433  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.764444  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:00.764455  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:00.764474  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:00.777301  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:00.777322  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:00.849752  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:00.849778  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:00.849795  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:00.930220  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:00.930266  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:00.970076  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:00.970116  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:59.226429  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:01.725079  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:59.337081  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:01.834932  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:02.834132  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:05.334066  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:07.335366  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:03.526070  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:03.541150  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:03.541229  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:03.584416  165698 cri.go:89] found id: ""
	I0617 12:05:03.584451  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.584463  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:03.584472  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:03.584535  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:03.623509  165698 cri.go:89] found id: ""
	I0617 12:05:03.623543  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.623552  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:03.623558  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:03.623611  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:03.661729  165698 cri.go:89] found id: ""
	I0617 12:05:03.661765  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.661778  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:03.661787  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:03.661852  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:03.702952  165698 cri.go:89] found id: ""
	I0617 12:05:03.702985  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.703008  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:03.703033  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:03.703100  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:03.746534  165698 cri.go:89] found id: ""
	I0617 12:05:03.746570  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.746578  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:03.746584  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:03.746648  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:03.784472  165698 cri.go:89] found id: ""
	I0617 12:05:03.784506  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.784515  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:03.784522  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:03.784580  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:03.821033  165698 cri.go:89] found id: ""
	I0617 12:05:03.821066  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.821077  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:03.821085  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:03.821146  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:03.859438  165698 cri.go:89] found id: ""
	I0617 12:05:03.859474  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.859487  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:03.859497  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:03.859513  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:03.940723  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:03.940770  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:03.986267  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:03.986303  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:04.037999  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:04.038039  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:04.051382  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:04.051415  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:04.121593  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
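(Every "describe nodes" attempt above fails with "connection to the server localhost:8443 was refused" because the preceding pgrep/crictl checks find no running kube-apiserver. A quick hedged check of apiserver liveness from the node; the pgrep command is taken from the log, and the /healthz probe on port 8443 is an assumption based on the port shown in the error.)

    # Quick apiserver liveness check on the node (sketch)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # is an apiserver process running at all?
    curl -sk https://localhost:8443/healthz ; echo # expect "ok" once the apiserver is back up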
	I0617 12:05:06.622475  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:06.636761  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:06.636842  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:06.673954  165698 cri.go:89] found id: ""
	I0617 12:05:06.673995  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.674007  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:06.674015  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:06.674084  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:06.708006  165698 cri.go:89] found id: ""
	I0617 12:05:06.708037  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.708047  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:06.708055  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:06.708124  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:06.743819  165698 cri.go:89] found id: ""
	I0617 12:05:06.743852  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.743864  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:06.743872  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:06.743934  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:06.781429  165698 cri.go:89] found id: ""
	I0617 12:05:06.781457  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.781465  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:06.781473  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:06.781540  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:06.818404  165698 cri.go:89] found id: ""
	I0617 12:05:06.818435  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.818447  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:06.818456  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:06.818516  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:06.857880  165698 cri.go:89] found id: ""
	I0617 12:05:06.857913  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.857924  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:06.857933  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:06.857993  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:06.893010  165698 cri.go:89] found id: ""
	I0617 12:05:06.893050  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.893059  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:06.893065  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:06.893118  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:06.926302  165698 cri.go:89] found id: ""
	I0617 12:05:06.926336  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.926347  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:06.926360  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:06.926378  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:06.997173  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:06.997197  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:06.997215  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:07.082843  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:07.082885  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:07.122542  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:07.122572  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:07.177033  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:07.177070  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:03.725338  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:06.225466  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:04.331639  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:06.331988  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:08.332139  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:09.835119  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:12.333346  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:09.693217  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:09.707043  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:09.707110  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:09.742892  165698 cri.go:89] found id: ""
	I0617 12:05:09.742918  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.742927  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:09.742933  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:09.742982  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:09.776938  165698 cri.go:89] found id: ""
	I0617 12:05:09.776969  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.776976  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:09.776982  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:09.777030  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:09.813613  165698 cri.go:89] found id: ""
	I0617 12:05:09.813643  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.813651  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:09.813658  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:09.813705  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:09.855483  165698 cri.go:89] found id: ""
	I0617 12:05:09.855516  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.855525  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:09.855532  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:09.855596  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:09.890808  165698 cri.go:89] found id: ""
	I0617 12:05:09.890844  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.890854  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:09.890862  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:09.890930  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:09.927656  165698 cri.go:89] found id: ""
	I0617 12:05:09.927684  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.927693  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:09.927703  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:09.927758  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:09.968130  165698 cri.go:89] found id: ""
	I0617 12:05:09.968163  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.968174  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:09.968183  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:09.968239  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:10.010197  165698 cri.go:89] found id: ""
	I0617 12:05:10.010220  165698 logs.go:276] 0 containers: []
	W0617 12:05:10.010228  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:10.010239  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:10.010252  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:10.063999  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:10.064040  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:10.078837  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:10.078873  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:10.155932  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:10.155954  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:10.155967  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:10.232859  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:10.232901  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:12.772943  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:12.787936  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:12.788024  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:12.828457  165698 cri.go:89] found id: ""
	I0617 12:05:12.828483  165698 logs.go:276] 0 containers: []
	W0617 12:05:12.828491  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:12.828498  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:12.828562  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:12.862265  165698 cri.go:89] found id: ""
	I0617 12:05:12.862296  165698 logs.go:276] 0 containers: []
	W0617 12:05:12.862306  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:12.862313  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:12.862372  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:12.899673  165698 cri.go:89] found id: ""
	I0617 12:05:12.899698  165698 logs.go:276] 0 containers: []
	W0617 12:05:12.899706  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:12.899712  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:12.899759  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:12.943132  165698 cri.go:89] found id: ""
	I0617 12:05:12.943161  165698 logs.go:276] 0 containers: []
	W0617 12:05:12.943169  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:12.943175  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:12.943227  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:08.724369  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:10.725166  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:13.224799  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:10.333769  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:12.832493  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:14.336437  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:16.835155  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:12.985651  165698 cri.go:89] found id: ""
	I0617 12:05:12.985677  165698 logs.go:276] 0 containers: []
	W0617 12:05:12.985685  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:12.985691  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:12.985747  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:13.021484  165698 cri.go:89] found id: ""
	I0617 12:05:13.021508  165698 logs.go:276] 0 containers: []
	W0617 12:05:13.021516  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:13.021522  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:13.021569  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:13.060658  165698 cri.go:89] found id: ""
	I0617 12:05:13.060689  165698 logs.go:276] 0 containers: []
	W0617 12:05:13.060705  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:13.060713  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:13.060782  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:13.106008  165698 cri.go:89] found id: ""
	I0617 12:05:13.106041  165698 logs.go:276] 0 containers: []
	W0617 12:05:13.106052  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:13.106066  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:13.106083  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:13.160199  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:13.160231  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:13.173767  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:13.173804  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:13.245358  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:13.245383  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:13.245399  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:13.323046  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:13.323085  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:15.872024  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:15.885550  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:15.885624  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:15.920303  165698 cri.go:89] found id: ""
	I0617 12:05:15.920332  165698 logs.go:276] 0 containers: []
	W0617 12:05:15.920344  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:15.920358  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:15.920423  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:15.955132  165698 cri.go:89] found id: ""
	I0617 12:05:15.955158  165698 logs.go:276] 0 containers: []
	W0617 12:05:15.955166  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:15.955172  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:15.955220  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:15.992995  165698 cri.go:89] found id: ""
	I0617 12:05:15.993034  165698 logs.go:276] 0 containers: []
	W0617 12:05:15.993053  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:15.993060  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:15.993127  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:16.032603  165698 cri.go:89] found id: ""
	I0617 12:05:16.032638  165698 logs.go:276] 0 containers: []
	W0617 12:05:16.032650  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:16.032658  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:16.032716  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:16.071770  165698 cri.go:89] found id: ""
	I0617 12:05:16.071804  165698 logs.go:276] 0 containers: []
	W0617 12:05:16.071816  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:16.071824  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:16.071899  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:16.106172  165698 cri.go:89] found id: ""
	I0617 12:05:16.106206  165698 logs.go:276] 0 containers: []
	W0617 12:05:16.106218  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:16.106226  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:16.106292  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:16.139406  165698 cri.go:89] found id: ""
	I0617 12:05:16.139436  165698 logs.go:276] 0 containers: []
	W0617 12:05:16.139443  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:16.139449  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:16.139517  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:16.174513  165698 cri.go:89] found id: ""
	I0617 12:05:16.174554  165698 logs.go:276] 0 containers: []
	W0617 12:05:16.174565  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:16.174580  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:16.174597  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:16.240912  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:16.240940  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:16.240958  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:16.323853  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:16.323891  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:16.372632  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:16.372659  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:16.428367  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:16.428406  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:15.224918  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:17.725226  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:15.332512  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:17.833710  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:19.334324  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:21.334654  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:18.943551  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:18.957394  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:18.957490  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:18.991967  165698 cri.go:89] found id: ""
	I0617 12:05:18.992006  165698 logs.go:276] 0 containers: []
	W0617 12:05:18.992017  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:18.992027  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:18.992092  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:19.025732  165698 cri.go:89] found id: ""
	I0617 12:05:19.025763  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.025775  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:19.025783  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:19.025856  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:19.061786  165698 cri.go:89] found id: ""
	I0617 12:05:19.061820  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.061830  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:19.061838  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:19.061906  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:19.098819  165698 cri.go:89] found id: ""
	I0617 12:05:19.098856  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.098868  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:19.098876  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:19.098947  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:19.139840  165698 cri.go:89] found id: ""
	I0617 12:05:19.139877  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.139886  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:19.139894  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:19.139965  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:19.176546  165698 cri.go:89] found id: ""
	I0617 12:05:19.176578  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.176590  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:19.176598  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:19.176671  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:19.209948  165698 cri.go:89] found id: ""
	I0617 12:05:19.209985  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.209997  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:19.210005  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:19.210087  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:19.246751  165698 cri.go:89] found id: ""
	I0617 12:05:19.246788  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.246799  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:19.246812  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:19.246830  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:19.322272  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:19.322316  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:19.370147  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:19.370187  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:19.422699  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:19.422749  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:19.437255  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:19.437284  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:19.510077  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:22.010840  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:22.024791  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:22.024879  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:22.060618  165698 cri.go:89] found id: ""
	I0617 12:05:22.060658  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.060667  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:22.060674  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:22.060742  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:22.100228  165698 cri.go:89] found id: ""
	I0617 12:05:22.100259  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.100268  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:22.100274  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:22.100343  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:22.135629  165698 cri.go:89] found id: ""
	I0617 12:05:22.135657  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.135665  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:22.135671  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:22.135730  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:22.186027  165698 cri.go:89] found id: ""
	I0617 12:05:22.186064  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.186076  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:22.186085  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:22.186148  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:22.220991  165698 cri.go:89] found id: ""
	I0617 12:05:22.221019  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.221029  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:22.221037  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:22.221104  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:22.266306  165698 cri.go:89] found id: ""
	I0617 12:05:22.266337  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.266348  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:22.266357  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:22.266414  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:22.303070  165698 cri.go:89] found id: ""
	I0617 12:05:22.303104  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.303116  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:22.303124  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:22.303190  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:22.339792  165698 cri.go:89] found id: ""
	I0617 12:05:22.339819  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.339829  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:22.339840  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:22.339856  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:22.422360  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:22.422397  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:22.465744  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:22.465777  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:22.516199  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:22.516232  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:22.529961  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:22.529983  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:22.601519  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:20.225369  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:22.226699  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:19.834562  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:21.837426  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:23.336540  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:25.835706  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:25.102655  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:25.116893  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:25.116959  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:25.156370  165698 cri.go:89] found id: ""
	I0617 12:05:25.156396  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.156404  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:25.156410  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:25.156468  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:25.193123  165698 cri.go:89] found id: ""
	I0617 12:05:25.193199  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.193221  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:25.193234  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:25.193301  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:25.232182  165698 cri.go:89] found id: ""
	I0617 12:05:25.232209  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.232219  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:25.232227  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:25.232285  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:25.266599  165698 cri.go:89] found id: ""
	I0617 12:05:25.266630  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.266639  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:25.266645  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:25.266701  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:25.308732  165698 cri.go:89] found id: ""
	I0617 12:05:25.308762  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.308770  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:25.308776  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:25.308836  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:25.348817  165698 cri.go:89] found id: ""
	I0617 12:05:25.348858  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.348871  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:25.348879  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:25.348946  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:25.389343  165698 cri.go:89] found id: ""
	I0617 12:05:25.389375  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.389387  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:25.389393  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:25.389452  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:25.427014  165698 cri.go:89] found id: ""
	I0617 12:05:25.427043  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.427055  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:25.427067  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:25.427083  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:25.441361  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:25.441390  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:25.518967  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:25.518993  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:25.519006  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:25.601411  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:25.601450  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:25.651636  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:25.651674  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:24.725515  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:27.223821  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:24.333548  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:26.832428  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:27.836661  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:30.334313  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:32.336489  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:28.202148  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:28.215710  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:28.215792  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:28.254961  165698 cri.go:89] found id: ""
	I0617 12:05:28.254986  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.255000  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:28.255007  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:28.255061  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:28.292574  165698 cri.go:89] found id: ""
	I0617 12:05:28.292606  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.292614  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:28.292620  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:28.292683  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:28.329036  165698 cri.go:89] found id: ""
	I0617 12:05:28.329067  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.329077  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:28.329085  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:28.329152  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:28.366171  165698 cri.go:89] found id: ""
	I0617 12:05:28.366197  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.366206  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:28.366212  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:28.366273  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:28.401380  165698 cri.go:89] found id: ""
	I0617 12:05:28.401407  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.401417  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:28.401424  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:28.401486  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:28.438767  165698 cri.go:89] found id: ""
	I0617 12:05:28.438798  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.438810  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:28.438817  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:28.438876  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:28.472706  165698 cri.go:89] found id: ""
	I0617 12:05:28.472761  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.472772  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:28.472779  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:28.472829  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:28.509525  165698 cri.go:89] found id: ""
	I0617 12:05:28.509548  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.509556  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:28.509565  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:28.509577  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:28.606008  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:28.606059  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:28.665846  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:28.665874  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:28.721599  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:28.721627  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:28.735040  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:28.735062  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:28.811954  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:31.312554  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:31.326825  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:31.326905  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:31.364862  165698 cri.go:89] found id: ""
	I0617 12:05:31.364891  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.364902  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:31.364910  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:31.364976  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:31.396979  165698 cri.go:89] found id: ""
	I0617 12:05:31.397013  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.397027  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:31.397035  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:31.397098  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:31.430617  165698 cri.go:89] found id: ""
	I0617 12:05:31.430647  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.430657  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:31.430665  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:31.430728  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:31.462308  165698 cri.go:89] found id: ""
	I0617 12:05:31.462338  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.462345  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:31.462350  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:31.462399  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:31.495406  165698 cri.go:89] found id: ""
	I0617 12:05:31.495435  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.495444  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:31.495452  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:31.495553  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:31.538702  165698 cri.go:89] found id: ""
	I0617 12:05:31.538729  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.538739  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:31.538750  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:31.538813  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:31.572637  165698 cri.go:89] found id: ""
	I0617 12:05:31.572666  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.572677  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:31.572685  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:31.572745  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:31.609307  165698 cri.go:89] found id: ""
	I0617 12:05:31.609341  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.609352  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:31.609364  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:31.609380  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:31.622445  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:31.622471  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:31.699170  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:31.699191  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:31.699209  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:31.775115  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:31.775156  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:31.815836  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:31.815866  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:29.225028  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:31.727009  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:29.333400  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:31.834599  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:34.836093  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:37.335140  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:34.372097  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:34.393542  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:34.393607  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:34.437265  165698 cri.go:89] found id: ""
	I0617 12:05:34.437294  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.437305  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:34.437314  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:34.437382  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:34.474566  165698 cri.go:89] found id: ""
	I0617 12:05:34.474596  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.474609  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:34.474617  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:34.474680  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:34.510943  165698 cri.go:89] found id: ""
	I0617 12:05:34.510975  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.510986  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:34.511000  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:34.511072  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:34.548124  165698 cri.go:89] found id: ""
	I0617 12:05:34.548160  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.548172  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:34.548179  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:34.548241  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:34.582428  165698 cri.go:89] found id: ""
	I0617 12:05:34.582453  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.582460  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:34.582467  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:34.582514  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:34.616895  165698 cri.go:89] found id: ""
	I0617 12:05:34.616937  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.616950  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:34.616957  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:34.617019  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:34.656116  165698 cri.go:89] found id: ""
	I0617 12:05:34.656144  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.656155  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:34.656162  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:34.656226  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:34.695649  165698 cri.go:89] found id: ""
	I0617 12:05:34.695680  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.695692  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:34.695705  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:34.695722  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:34.747910  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:34.747956  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:34.762177  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:34.762206  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:34.840395  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:34.840423  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:34.840440  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:34.922962  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:34.923002  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:37.464659  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:37.480351  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:37.480416  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:37.521249  165698 cri.go:89] found id: ""
	I0617 12:05:37.521279  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.521286  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:37.521293  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:37.521340  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:37.561053  165698 cri.go:89] found id: ""
	I0617 12:05:37.561079  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.561087  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:37.561094  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:37.561151  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:37.599019  165698 cri.go:89] found id: ""
	I0617 12:05:37.599057  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.599066  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:37.599074  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:37.599134  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:37.638276  165698 cri.go:89] found id: ""
	I0617 12:05:37.638304  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.638315  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:37.638323  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:37.638389  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:37.677819  165698 cri.go:89] found id: ""
	I0617 12:05:37.677845  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.677853  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:37.677859  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:37.677910  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:37.715850  165698 cri.go:89] found id: ""
	I0617 12:05:37.715877  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.715888  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:37.715897  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:37.715962  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:37.755533  165698 cri.go:89] found id: ""
	I0617 12:05:37.755563  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.755570  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:37.755576  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:37.755636  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:37.791826  165698 cri.go:89] found id: ""
	I0617 12:05:37.791850  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.791859  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:37.791872  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:37.791888  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:37.844824  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:37.844853  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:37.860933  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:37.860963  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:37.926497  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:37.926519  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:37.926535  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:34.224078  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:36.224464  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:38.224753  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:34.333888  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:36.832374  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:39.336299  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:41.834494  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:38.003814  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:38.003853  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:40.546386  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:40.560818  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:40.560896  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:40.596737  165698 cri.go:89] found id: ""
	I0617 12:05:40.596777  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.596784  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:40.596791  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:40.596844  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:40.631518  165698 cri.go:89] found id: ""
	I0617 12:05:40.631556  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.631570  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:40.631611  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:40.631683  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:40.674962  165698 cri.go:89] found id: ""
	I0617 12:05:40.674997  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.675006  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:40.675012  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:40.675064  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:40.716181  165698 cri.go:89] found id: ""
	I0617 12:05:40.716210  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.716218  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:40.716226  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:40.716286  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:40.756312  165698 cri.go:89] found id: ""
	I0617 12:05:40.756339  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.756348  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:40.756353  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:40.756406  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:40.791678  165698 cri.go:89] found id: ""
	I0617 12:05:40.791733  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.791750  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:40.791759  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:40.791830  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:40.830717  165698 cri.go:89] found id: ""
	I0617 12:05:40.830754  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.830766  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:40.830774  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:40.830854  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:40.868139  165698 cri.go:89] found id: ""
	I0617 12:05:40.868169  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.868178  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:40.868198  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:40.868224  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:40.920319  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:40.920353  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:40.934948  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:40.934974  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:41.005349  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:41.005371  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:41.005388  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:41.086783  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:41.086842  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:40.724767  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:43.223836  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:38.834031  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:41.331190  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:43.332593  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:44.334114  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:46.334595  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:43.625515  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:43.638942  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:43.639019  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:43.673703  165698 cri.go:89] found id: ""
	I0617 12:05:43.673735  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.673747  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:43.673756  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:43.673822  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:43.709417  165698 cri.go:89] found id: ""
	I0617 12:05:43.709449  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.709460  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:43.709468  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:43.709529  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:43.742335  165698 cri.go:89] found id: ""
	I0617 12:05:43.742368  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.742379  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:43.742389  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:43.742449  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:43.779112  165698 cri.go:89] found id: ""
	I0617 12:05:43.779141  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.779150  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:43.779155  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:43.779219  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:43.813362  165698 cri.go:89] found id: ""
	I0617 12:05:43.813397  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.813406  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:43.813414  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:43.813464  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:43.850456  165698 cri.go:89] found id: ""
	I0617 12:05:43.850484  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.850493  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:43.850499  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:43.850547  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:43.884527  165698 cri.go:89] found id: ""
	I0617 12:05:43.884555  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.884564  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:43.884571  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:43.884632  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:43.921440  165698 cri.go:89] found id: ""
	I0617 12:05:43.921476  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.921488  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:43.921501  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:43.921517  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:43.973687  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:43.973727  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:43.988114  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:43.988143  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:44.055084  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:44.055119  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:44.055138  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:44.134628  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:44.134665  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:46.677852  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:46.690688  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:46.690747  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:46.724055  165698 cri.go:89] found id: ""
	I0617 12:05:46.724090  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.724101  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:46.724110  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:46.724171  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:46.759119  165698 cri.go:89] found id: ""
	I0617 12:05:46.759150  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.759161  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:46.759169  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:46.759227  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:46.796392  165698 cri.go:89] found id: ""
	I0617 12:05:46.796424  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.796435  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:46.796442  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:46.796504  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:46.831727  165698 cri.go:89] found id: ""
	I0617 12:05:46.831761  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.831770  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:46.831777  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:46.831845  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:46.866662  165698 cri.go:89] found id: ""
	I0617 12:05:46.866693  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.866702  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:46.866708  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:46.866757  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:46.905045  165698 cri.go:89] found id: ""
	I0617 12:05:46.905070  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.905078  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:46.905084  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:46.905130  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:46.940879  165698 cri.go:89] found id: ""
	I0617 12:05:46.940907  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.940915  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:46.940926  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:46.940974  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:46.977247  165698 cri.go:89] found id: ""
	I0617 12:05:46.977290  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.977301  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:46.977314  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:46.977331  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:47.046094  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:47.046116  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:47.046133  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:47.122994  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:47.123038  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:47.166273  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:47.166313  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:47.221392  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:47.221429  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:45.228807  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:47.723584  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:45.834805  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:48.333121  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:48.335758  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:50.833989  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:49.739113  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:49.752880  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:49.753004  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:49.791177  165698 cri.go:89] found id: ""
	I0617 12:05:49.791218  165698 logs.go:276] 0 containers: []
	W0617 12:05:49.791242  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:49.791251  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:49.791322  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:49.831602  165698 cri.go:89] found id: ""
	I0617 12:05:49.831633  165698 logs.go:276] 0 containers: []
	W0617 12:05:49.831644  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:49.831652  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:49.831719  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:49.870962  165698 cri.go:89] found id: ""
	I0617 12:05:49.870998  165698 logs.go:276] 0 containers: []
	W0617 12:05:49.871011  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:49.871019  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:49.871092  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:49.917197  165698 cri.go:89] found id: ""
	I0617 12:05:49.917232  165698 logs.go:276] 0 containers: []
	W0617 12:05:49.917243  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:49.917252  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:49.917320  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:49.952997  165698 cri.go:89] found id: ""
	I0617 12:05:49.953034  165698 logs.go:276] 0 containers: []
	W0617 12:05:49.953047  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:49.953056  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:49.953114  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:50.001925  165698 cri.go:89] found id: ""
	I0617 12:05:50.001965  165698 logs.go:276] 0 containers: []
	W0617 12:05:50.001977  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:50.001986  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:50.002059  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:50.043374  165698 cri.go:89] found id: ""
	I0617 12:05:50.043403  165698 logs.go:276] 0 containers: []
	W0617 12:05:50.043412  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:50.043419  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:50.043496  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:50.082974  165698 cri.go:89] found id: ""
	I0617 12:05:50.083009  165698 logs.go:276] 0 containers: []
	W0617 12:05:50.083020  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:50.083029  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:50.083043  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:50.134116  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:50.134159  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:50.148478  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:50.148511  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:50.227254  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:50.227276  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:50.227288  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:50.305920  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:50.305960  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:52.848811  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:52.862612  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:52.862669  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:52.896379  165698 cri.go:89] found id: ""
	I0617 12:05:52.896410  165698 logs.go:276] 0 containers: []
	W0617 12:05:52.896421  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:52.896429  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:52.896488  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:52.933387  165698 cri.go:89] found id: ""
	I0617 12:05:52.933422  165698 logs.go:276] 0 containers: []
	W0617 12:05:52.933432  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:52.933439  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:52.933501  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:52.971055  165698 cri.go:89] found id: ""
	I0617 12:05:52.971091  165698 logs.go:276] 0 containers: []
	W0617 12:05:52.971102  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:52.971110  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:52.971168  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:49.724816  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:52.224660  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:50.334092  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:52.831686  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:52.835473  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:55.334017  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:57.334957  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:53.003815  165698 cri.go:89] found id: ""
	I0617 12:05:53.003846  165698 logs.go:276] 0 containers: []
	W0617 12:05:53.003857  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:53.003864  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:53.003927  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:53.039133  165698 cri.go:89] found id: ""
	I0617 12:05:53.039161  165698 logs.go:276] 0 containers: []
	W0617 12:05:53.039169  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:53.039175  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:53.039229  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:53.077703  165698 cri.go:89] found id: ""
	I0617 12:05:53.077756  165698 logs.go:276] 0 containers: []
	W0617 12:05:53.077773  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:53.077780  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:53.077831  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:53.119187  165698 cri.go:89] found id: ""
	I0617 12:05:53.119216  165698 logs.go:276] 0 containers: []
	W0617 12:05:53.119223  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:53.119230  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:53.119287  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:53.154423  165698 cri.go:89] found id: ""
	I0617 12:05:53.154457  165698 logs.go:276] 0 containers: []
	W0617 12:05:53.154467  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:53.154480  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:53.154496  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:53.202745  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:53.202778  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:53.216510  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:53.216537  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:53.295687  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:53.295712  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:53.295732  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:53.375064  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:53.375095  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:55.915113  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:55.929155  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:55.929239  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:55.964589  165698 cri.go:89] found id: ""
	I0617 12:05:55.964625  165698 logs.go:276] 0 containers: []
	W0617 12:05:55.964634  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:55.964640  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:55.964702  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:56.003659  165698 cri.go:89] found id: ""
	I0617 12:05:56.003691  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.003701  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:56.003709  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:56.003778  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:56.039674  165698 cri.go:89] found id: ""
	I0617 12:05:56.039707  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.039717  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:56.039724  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:56.039786  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:56.077695  165698 cri.go:89] found id: ""
	I0617 12:05:56.077736  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.077748  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:56.077756  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:56.077826  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:56.116397  165698 cri.go:89] found id: ""
	I0617 12:05:56.116430  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.116442  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:56.116451  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:56.116512  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:56.152395  165698 cri.go:89] found id: ""
	I0617 12:05:56.152433  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.152445  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:56.152454  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:56.152513  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:56.189740  165698 cri.go:89] found id: ""
	I0617 12:05:56.189776  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.189788  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:56.189796  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:56.189866  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:56.228017  165698 cri.go:89] found id: ""
	I0617 12:05:56.228047  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.228055  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:56.228063  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:56.228076  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:56.279032  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:56.279079  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:56.294369  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:56.294394  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:56.369507  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:56.369535  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:56.369551  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:56.454797  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:56.454833  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:54.725303  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:56.726247  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:56.726280  165060 pod_ready.go:81] duration metric: took 4m0.008373114s for pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace to be "Ready" ...
	E0617 12:05:56.726291  165060 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0617 12:05:56.726298  165060 pod_ready.go:38] duration metric: took 4m3.608691328s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:05:56.726315  165060 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:05:56.726352  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:56.726411  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:56.784765  165060 cri.go:89] found id: "5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:05:56.784792  165060 cri.go:89] found id: ""
	I0617 12:05:56.784803  165060 logs.go:276] 1 containers: [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3]
	I0617 12:05:56.784865  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:56.791125  165060 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:56.791189  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:56.830691  165060 cri.go:89] found id: "fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:05:56.830715  165060 cri.go:89] found id: ""
	I0617 12:05:56.830725  165060 logs.go:276] 1 containers: [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9]
	I0617 12:05:56.830785  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:56.836214  165060 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:56.836282  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:56.875812  165060 cri.go:89] found id: "c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:05:56.875830  165060 cri.go:89] found id: ""
	I0617 12:05:56.875837  165060 logs.go:276] 1 containers: [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7]
	I0617 12:05:56.875891  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:56.880190  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:56.880247  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:56.925155  165060 cri.go:89] found id: "157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:05:56.925178  165060 cri.go:89] found id: ""
	I0617 12:05:56.925186  165060 logs.go:276] 1 containers: [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d]
	I0617 12:05:56.925231  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:56.930317  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:56.930384  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:56.972479  165060 cri.go:89] found id: "c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:05:56.972503  165060 cri.go:89] found id: ""
	I0617 12:05:56.972512  165060 logs.go:276] 1 containers: [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d]
	I0617 12:05:56.972559  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:56.977635  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:56.977696  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:57.012791  165060 cri.go:89] found id: "2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
	I0617 12:05:57.012816  165060 cri.go:89] found id: ""
	I0617 12:05:57.012826  165060 logs.go:276] 1 containers: [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079]
	I0617 12:05:57.012882  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:57.016856  165060 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:57.016909  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:57.052111  165060 cri.go:89] found id: ""
	I0617 12:05:57.052146  165060 logs.go:276] 0 containers: []
	W0617 12:05:57.052156  165060 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:57.052163  165060 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:05:57.052211  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:05:57.094600  165060 cri.go:89] found id: "02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:05:57.094619  165060 cri.go:89] found id: "7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
	I0617 12:05:57.094622  165060 cri.go:89] found id: ""
	I0617 12:05:57.094630  165060 logs.go:276] 2 containers: [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36]
	I0617 12:05:57.094700  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:57.099250  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:57.104252  165060 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:57.104281  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:57.162000  165060 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:57.162027  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:05:57.285448  165060 logs.go:123] Gathering logs for etcd [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9] ...
	I0617 12:05:57.285490  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:05:57.340781  165060 logs.go:123] Gathering logs for coredns [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7] ...
	I0617 12:05:57.340820  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:05:57.383507  165060 logs.go:123] Gathering logs for kube-scheduler [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d] ...
	I0617 12:05:57.383540  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:05:57.428747  165060 logs.go:123] Gathering logs for kube-proxy [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d] ...
	I0617 12:05:57.428792  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:05:57.468739  165060 logs.go:123] Gathering logs for kube-controller-manager [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079] ...
	I0617 12:05:57.468770  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
	I0617 12:05:57.531317  165060 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:57.531355  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:58.063787  165060 logs.go:123] Gathering logs for container status ...
	I0617 12:05:58.063838  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:58.129384  165060 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:58.129416  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:58.144078  165060 logs.go:123] Gathering logs for kube-apiserver [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3] ...
	I0617 12:05:58.144152  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:05:58.189028  165060 logs.go:123] Gathering logs for storage-provisioner [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92] ...
	I0617 12:05:58.189068  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:05:58.227144  165060 logs.go:123] Gathering logs for storage-provisioner [7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36] ...
	I0617 12:05:58.227178  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
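
The log-gathering cycle above follows a fixed pattern: for each control-plane component, container IDs are resolved with "crictl ps -a --quiet --name=<component>" and each ID is then tailed with "crictl logs --tail 400". A minimal Go sketch of that pattern follows; the helper names listContainers and tailLogs are illustrative only, not minikube's actual cri.go/logs.go implementation.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainers returns the IDs of all containers (any state) whose name
    // matches the filter, mirroring `sudo crictl ps -a --quiet --name=<name>`.
    func listContainers(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    // tailLogs returns the last 400 log lines of a container, mirroring
    // `sudo crictl logs --tail 400 <id>` as run in the report.
    func tailLogs(id string) (string, error) {
    	out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		ids, err := listContainers(name)
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("no container was found matching %q\n", name)
    			continue
    		}
    		logs, _ := tailLogs(ids[0])
    		fmt.Printf("=== %s (%s) ===\n%s\n", name, ids[0], logs)
    	}
    }
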
	I0617 12:05:54.838580  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:57.333884  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:59.836198  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:01.837155  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:58.995221  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:59.008481  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:59.008555  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:59.043854  165698 cri.go:89] found id: ""
	I0617 12:05:59.043887  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.043914  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:59.043935  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:59.044003  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:59.081488  165698 cri.go:89] found id: ""
	I0617 12:05:59.081522  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.081530  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:59.081537  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:59.081596  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:59.118193  165698 cri.go:89] found id: ""
	I0617 12:05:59.118222  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.118232  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:59.118240  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:59.118306  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:59.150286  165698 cri.go:89] found id: ""
	I0617 12:05:59.150315  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.150327  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:59.150335  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:59.150381  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:59.191426  165698 cri.go:89] found id: ""
	I0617 12:05:59.191450  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.191485  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:59.191493  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:59.191547  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:59.224933  165698 cri.go:89] found id: ""
	I0617 12:05:59.224965  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.224974  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:59.224998  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:59.225061  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:59.255929  165698 cri.go:89] found id: ""
	I0617 12:05:59.255956  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.255965  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:59.255971  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:59.256025  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:59.293072  165698 cri.go:89] found id: ""
	I0617 12:05:59.293097  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.293104  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:59.293114  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:59.293126  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:59.354240  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:59.354267  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:59.367715  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:59.367744  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:59.446352  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:59.446381  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:59.446396  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:59.528701  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:59.528738  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:02.071616  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:06:02.088050  165698 kubeadm.go:591] duration metric: took 4m3.493743262s to restartPrimaryControlPlane
	W0617 12:06:02.088159  165698 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0617 12:06:02.088194  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0617 12:06:02.552133  165698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:06:02.570136  165698 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:06:02.582299  165698 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:06:02.594775  165698 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:06:02.594809  165698 kubeadm.go:156] found existing configuration files:
	
	I0617 12:06:02.594867  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:06:02.605875  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:06:02.605954  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:06:02.617780  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:06:02.628284  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:06:02.628359  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:06:02.639128  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:06:02.650079  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:06:02.650144  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:06:02.660879  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:06:02.671170  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:06:02.671249  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
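
The sequence above is the stale-config cleanup: for each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf, the expected control-plane endpoint is grepped for and the file is removed when the endpoint is absent (here the files do not exist at all, so every grep exits with status 2 and the rm is effectively a no-op). A rough Go sketch of that check; the helper name cleanupStaleConfig is illustrative, while the endpoint and file paths are the ones shown in the log.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // cleanupStaleConfig removes a kubeconfig-style file when it does not
    // reference the expected control-plane endpoint, mirroring the
    // grep-then-rm sequence in the log above.
    func cleanupStaleConfig(endpoint, path string) error {
    	// grep exits non-zero when the endpoint is missing or the file does not exist.
    	if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
    		fmt.Printf("%q not found in %s - removing\n", endpoint, path)
    		return exec.Command("sudo", "rm", "-f", path).Run()
    	}
    	return nil
    }

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	for _, f := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		if err := cleanupStaleConfig(endpoint, f); err != nil {
    			fmt.Println("cleanup failed:", err)
    		}
    	}
    }
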
	I0617 12:06:02.682071  165698 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0617 12:06:02.753750  165698 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0617 12:06:02.753913  165698 kubeadm.go:309] [preflight] Running pre-flight checks
	I0617 12:06:02.897384  165698 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0617 12:06:02.897530  165698 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0617 12:06:02.897685  165698 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0617 12:06:03.079116  165698 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0617 12:06:00.764533  165060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:06:00.781564  165060 api_server.go:72] duration metric: took 4m14.875617542s to wait for apiserver process to appear ...
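
Before probing the API server over HTTP, the wait above is for the process itself: `pgrep -xnf kube-apiserver.*minikube.*` is re-run over SSH until it returns a PID, and the 4m14s duration metric is dominated by that wait. A simplified polling sketch follows; the 2-second interval and 5-minute timeout are chosen here for illustration, not taken from minikube.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServerProcess polls pgrep until a kube-apiserver process whose
    // full command line matches the minikube profile appears, or the timeout expires.
    func waitForAPIServerProcess(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			return nil // process found
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
    }

    func main() {
    	if err := waitForAPIServerProcess(5 * time.Minute); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("kube-apiserver process is up")
    }
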
	I0617 12:06:00.781593  165060 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:06:00.781642  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:06:00.781706  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:06:00.817980  165060 cri.go:89] found id: "5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:06:00.818013  165060 cri.go:89] found id: ""
	I0617 12:06:00.818024  165060 logs.go:276] 1 containers: [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3]
	I0617 12:06:00.818080  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:00.822664  165060 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:06:00.822759  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:06:00.861518  165060 cri.go:89] found id: "fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:06:00.861545  165060 cri.go:89] found id: ""
	I0617 12:06:00.861556  165060 logs.go:276] 1 containers: [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9]
	I0617 12:06:00.861614  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:00.865885  165060 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:06:00.865973  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:06:00.900844  165060 cri.go:89] found id: "c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:06:00.900864  165060 cri.go:89] found id: ""
	I0617 12:06:00.900875  165060 logs.go:276] 1 containers: [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7]
	I0617 12:06:00.900930  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:00.905253  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:06:00.905317  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:06:00.938998  165060 cri.go:89] found id: "157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:06:00.939036  165060 cri.go:89] found id: ""
	I0617 12:06:00.939046  165060 logs.go:276] 1 containers: [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d]
	I0617 12:06:00.939114  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:00.943170  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:06:00.943234  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:06:00.982923  165060 cri.go:89] found id: "c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:06:00.982953  165060 cri.go:89] found id: ""
	I0617 12:06:00.982964  165060 logs.go:276] 1 containers: [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d]
	I0617 12:06:00.983034  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:00.987696  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:06:00.987769  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:06:01.033789  165060 cri.go:89] found id: "2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
	I0617 12:06:01.033825  165060 cri.go:89] found id: ""
	I0617 12:06:01.033837  165060 logs.go:276] 1 containers: [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079]
	I0617 12:06:01.033901  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:01.038800  165060 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:06:01.038861  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:06:01.077797  165060 cri.go:89] found id: ""
	I0617 12:06:01.077834  165060 logs.go:276] 0 containers: []
	W0617 12:06:01.077846  165060 logs.go:278] No container was found matching "kindnet"
	I0617 12:06:01.077855  165060 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:06:01.077916  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:06:01.116275  165060 cri.go:89] found id: "02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:06:01.116296  165060 cri.go:89] found id: "7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
	I0617 12:06:01.116303  165060 cri.go:89] found id: ""
	I0617 12:06:01.116311  165060 logs.go:276] 2 containers: [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36]
	I0617 12:06:01.116365  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:01.121088  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:01.125393  165060 logs.go:123] Gathering logs for container status ...
	I0617 12:06:01.125417  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:01.170817  165060 logs.go:123] Gathering logs for kubelet ...
	I0617 12:06:01.170844  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:06:01.223072  165060 logs.go:123] Gathering logs for kube-apiserver [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3] ...
	I0617 12:06:01.223114  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:06:01.269212  165060 logs.go:123] Gathering logs for kube-scheduler [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d] ...
	I0617 12:06:01.269245  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:06:01.313518  165060 logs.go:123] Gathering logs for kube-proxy [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d] ...
	I0617 12:06:01.313557  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:06:01.357935  165060 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:06:01.357965  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:06:01.784493  165060 logs.go:123] Gathering logs for storage-provisioner [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92] ...
	I0617 12:06:01.784542  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:06:01.825824  165060 logs.go:123] Gathering logs for storage-provisioner [7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36] ...
	I0617 12:06:01.825851  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
	I0617 12:06:01.866216  165060 logs.go:123] Gathering logs for dmesg ...
	I0617 12:06:01.866252  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:06:01.881292  165060 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:06:01.881316  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:06:02.000026  165060 logs.go:123] Gathering logs for etcd [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9] ...
	I0617 12:06:02.000063  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:06:02.043491  165060 logs.go:123] Gathering logs for coredns [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7] ...
	I0617 12:06:02.043524  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:06:02.081957  165060 logs.go:123] Gathering logs for kube-controller-manager [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079] ...
	I0617 12:06:02.081984  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
	I0617 12:05:59.835769  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:02.332739  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:03.080903  165698 out.go:204]   - Generating certificates and keys ...
	I0617 12:06:03.081006  165698 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0617 12:06:03.081080  165698 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0617 12:06:03.081168  165698 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0617 12:06:03.081250  165698 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0617 12:06:03.081377  165698 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0617 12:06:03.081457  165698 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0617 12:06:03.082418  165698 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0617 12:06:03.083003  165698 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0617 12:06:03.083917  165698 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0617 12:06:03.084820  165698 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0617 12:06:03.085224  165698 kubeadm.go:309] [certs] Using the existing "sa" key
	I0617 12:06:03.085307  165698 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0617 12:06:03.203342  165698 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0617 12:06:03.430428  165698 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0617 12:06:03.570422  165698 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0617 12:06:03.772092  165698 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0617 12:06:03.793105  165698 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0617 12:06:03.793206  165698 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0617 12:06:03.793261  165698 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0617 12:06:03.919738  165698 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0617 12:06:04.333408  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:06.333963  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:03.921593  165698 out.go:204]   - Booting up control plane ...
	I0617 12:06:03.921708  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0617 12:06:03.928168  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0617 12:06:03.928279  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0617 12:06:03.937197  165698 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0617 12:06:03.939967  165698 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0617 12:06:04.644102  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:06:04.648733  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 200:
	ok
	I0617 12:06:04.649862  165060 api_server.go:141] control plane version: v1.30.1
	I0617 12:06:04.649894  165060 api_server.go:131] duration metric: took 3.86829173s to wait for apiserver health ...
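
The healthz wait above amounts to polling GET /healthz on the API server's HTTPS endpoint until it answers 200 with body "ok". A minimal sketch of a single probe is shown below; certificate verification is skipped only to keep the example self-contained (the real check trusts the cluster's CA), and the address is the one logged above.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // checkHealthz probes the apiserver /healthz endpoint once and reports
    // whether it answered 200 "ok".
    func checkHealthz(baseURL string) (bool, error) {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(baseURL + "/healthz")
    	if err != nil {
    		return false, err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }

    func main() {
    	ok, err := checkHealthz("https://192.168.72.199:8443")
    	fmt.Println("healthy:", ok, "err:", err)
    }
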
	I0617 12:06:04.649905  165060 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:06:04.649936  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:06:04.649997  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:06:04.688904  165060 cri.go:89] found id: "5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:06:04.688923  165060 cri.go:89] found id: ""
	I0617 12:06:04.688931  165060 logs.go:276] 1 containers: [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3]
	I0617 12:06:04.688975  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.695049  165060 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:06:04.695110  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:06:04.730292  165060 cri.go:89] found id: "fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:06:04.730314  165060 cri.go:89] found id: ""
	I0617 12:06:04.730322  165060 logs.go:276] 1 containers: [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9]
	I0617 12:06:04.730373  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.734432  165060 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:06:04.734486  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:06:04.771401  165060 cri.go:89] found id: "c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:06:04.771418  165060 cri.go:89] found id: ""
	I0617 12:06:04.771426  165060 logs.go:276] 1 containers: [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7]
	I0617 12:06:04.771496  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.775822  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:06:04.775876  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:06:04.816111  165060 cri.go:89] found id: "157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:06:04.816131  165060 cri.go:89] found id: ""
	I0617 12:06:04.816139  165060 logs.go:276] 1 containers: [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d]
	I0617 12:06:04.816185  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.820614  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:06:04.820672  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:06:04.865387  165060 cri.go:89] found id: "c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:06:04.865411  165060 cri.go:89] found id: ""
	I0617 12:06:04.865421  165060 logs.go:276] 1 containers: [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d]
	I0617 12:06:04.865479  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.870192  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:06:04.870263  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:06:04.912698  165060 cri.go:89] found id: "2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
	I0617 12:06:04.912723  165060 cri.go:89] found id: ""
	I0617 12:06:04.912734  165060 logs.go:276] 1 containers: [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079]
	I0617 12:06:04.912796  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.917484  165060 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:06:04.917563  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:06:04.954076  165060 cri.go:89] found id: ""
	I0617 12:06:04.954109  165060 logs.go:276] 0 containers: []
	W0617 12:06:04.954120  165060 logs.go:278] No container was found matching "kindnet"
	I0617 12:06:04.954129  165060 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:06:04.954196  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:06:04.995832  165060 cri.go:89] found id: "02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:06:04.995858  165060 cri.go:89] found id: "7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
	I0617 12:06:04.995862  165060 cri.go:89] found id: ""
	I0617 12:06:04.995869  165060 logs.go:276] 2 containers: [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36]
	I0617 12:06:04.995928  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:05.000741  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:05.004995  165060 logs.go:123] Gathering logs for storage-provisioner [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92] ...
	I0617 12:06:05.005026  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:06:05.040651  165060 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:06:05.040692  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:06:05.461644  165060 logs.go:123] Gathering logs for container status ...
	I0617 12:06:05.461685  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:05.508706  165060 logs.go:123] Gathering logs for kubelet ...
	I0617 12:06:05.508733  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:06:05.562418  165060 logs.go:123] Gathering logs for kube-apiserver [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3] ...
	I0617 12:06:05.562461  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:06:05.606489  165060 logs.go:123] Gathering logs for etcd [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9] ...
	I0617 12:06:05.606527  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:06:05.651719  165060 logs.go:123] Gathering logs for coredns [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7] ...
	I0617 12:06:05.651753  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:06:05.688736  165060 logs.go:123] Gathering logs for kube-proxy [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d] ...
	I0617 12:06:05.688772  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:06:05.730649  165060 logs.go:123] Gathering logs for dmesg ...
	I0617 12:06:05.730679  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:06:05.745482  165060 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:06:05.745511  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:06:05.849002  165060 logs.go:123] Gathering logs for kube-scheduler [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d] ...
	I0617 12:06:05.849025  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:06:05.890802  165060 logs.go:123] Gathering logs for kube-controller-manager [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079] ...
	I0617 12:06:05.890836  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
	I0617 12:06:05.946444  165060 logs.go:123] Gathering logs for storage-provisioner [7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36] ...
	I0617 12:06:05.946474  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
	I0617 12:06:04.332977  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:06.834683  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:08.489561  165060 system_pods.go:59] 8 kube-system pods found
	I0617 12:06:08.489593  165060 system_pods.go:61] "coredns-7db6d8ff4d-9bbjg" [1ba0eee5-436e-4c83-b5ce-3c907d66b641] Running
	I0617 12:06:08.489597  165060 system_pods.go:61] "etcd-embed-certs-136195" [6dc81a80-c56b-4517-af82-c450cf9578f5] Running
	I0617 12:06:08.489601  165060 system_pods.go:61] "kube-apiserver-embed-certs-136195" [bd61a715-2471-4dca-aa48-a157531ebd6b] Running
	I0617 12:06:08.489605  165060 system_pods.go:61] "kube-controller-manager-embed-certs-136195" [194db4b0-75c2-4905-8e4d-813185497b51] Running
	I0617 12:06:08.489607  165060 system_pods.go:61] "kube-proxy-25d5n" [52b6d09a-899f-40c4-b1f3-7842ae755165] Running
	I0617 12:06:08.489610  165060 system_pods.go:61] "kube-scheduler-embed-certs-136195" [b04d3798-f465-4f82-9ec7-777ea62d5b94] Running
	I0617 12:06:08.489616  165060 system_pods.go:61] "metrics-server-569cc877fc-dmhfs" [31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:06:08.489620  165060 system_pods.go:61] "storage-provisioner" [4b04a38a-5006-4496-a24d-0940029193de] Running
	I0617 12:06:08.489626  165060 system_pods.go:74] duration metric: took 3.839715717s to wait for pod list to return data ...
	I0617 12:06:08.489633  165060 default_sa.go:34] waiting for default service account to be created ...
	I0617 12:06:08.491984  165060 default_sa.go:45] found service account: "default"
	I0617 12:06:08.492007  165060 default_sa.go:55] duration metric: took 2.365306ms for default service account to be created ...
	I0617 12:06:08.492014  165060 system_pods.go:116] waiting for k8s-apps to be running ...
	I0617 12:06:08.497834  165060 system_pods.go:86] 8 kube-system pods found
	I0617 12:06:08.497865  165060 system_pods.go:89] "coredns-7db6d8ff4d-9bbjg" [1ba0eee5-436e-4c83-b5ce-3c907d66b641] Running
	I0617 12:06:08.497873  165060 system_pods.go:89] "etcd-embed-certs-136195" [6dc81a80-c56b-4517-af82-c450cf9578f5] Running
	I0617 12:06:08.497880  165060 system_pods.go:89] "kube-apiserver-embed-certs-136195" [bd61a715-2471-4dca-aa48-a157531ebd6b] Running
	I0617 12:06:08.497887  165060 system_pods.go:89] "kube-controller-manager-embed-certs-136195" [194db4b0-75c2-4905-8e4d-813185497b51] Running
	I0617 12:06:08.497891  165060 system_pods.go:89] "kube-proxy-25d5n" [52b6d09a-899f-40c4-b1f3-7842ae755165] Running
	I0617 12:06:08.497899  165060 system_pods.go:89] "kube-scheduler-embed-certs-136195" [b04d3798-f465-4f82-9ec7-777ea62d5b94] Running
	I0617 12:06:08.497905  165060 system_pods.go:89] "metrics-server-569cc877fc-dmhfs" [31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:06:08.497914  165060 system_pods.go:89] "storage-provisioner" [4b04a38a-5006-4496-a24d-0940029193de] Running
	I0617 12:06:08.497921  165060 system_pods.go:126] duration metric: took 5.901391ms to wait for k8s-apps to be running ...
	I0617 12:06:08.497927  165060 system_svc.go:44] waiting for kubelet service to be running ....
	I0617 12:06:08.497970  165060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:06:08.520136  165060 system_svc.go:56] duration metric: took 22.203601ms WaitForService to wait for kubelet
	I0617 12:06:08.520159  165060 kubeadm.go:576] duration metric: took 4m22.614222011s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 12:06:08.520178  165060 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:06:08.522704  165060 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:06:08.522741  165060 node_conditions.go:123] node cpu capacity is 2
	I0617 12:06:08.522758  165060 node_conditions.go:105] duration metric: took 2.57391ms to run NodePressure ...
	I0617 12:06:08.522773  165060 start.go:240] waiting for startup goroutines ...
	I0617 12:06:08.522787  165060 start.go:245] waiting for cluster config update ...
	I0617 12:06:08.522803  165060 start.go:254] writing updated cluster config ...
	I0617 12:06:08.523139  165060 ssh_runner.go:195] Run: rm -f paused
	I0617 12:06:08.577942  165060 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0617 12:06:08.579946  165060 out.go:177] * Done! kubectl is now configured to use "embed-certs-136195" cluster and "default" namespace by default
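
Before the "Done!" line, the start path verifies that kube-system pods are Running, the default service account exists, the kubelet systemd unit is active, and node capacity is readable. A rough sketch of two of those checks under the paths shown in the log; the helper names and the custom-columns query are illustrative, not the harness's exact implementation.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // kubeletActive mirrors `sudo systemctl is-active --quiet kubelet`:
    // exit status 0 means the unit is active.
    func kubeletActive() bool {
    	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }

    // pendingKubeSystemPods lists kube-system pods whose phase is not Running.
    func pendingKubeSystemPods() ([]string, error) {
    	out, err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.1/kubectl",
    		"--kubeconfig=/var/lib/minikube/kubeconfig",
    		"get", "pods", "-n", "kube-system",
    		"-o", "custom-columns=NAME:.metadata.name,PHASE:.status.phase", "--no-headers").Output()
    	if err != nil {
    		return nil, err
    	}
    	var pending []string
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		fields := strings.Fields(line)
    		if len(fields) == 2 && fields[1] != "Running" {
    			pending = append(pending, fields[0])
    		}
    	}
    	return pending, nil
    }

    func main() {
    	fmt.Println("kubelet active:", kubeletActive())
    	if pods, err := pendingKubeSystemPods(); err == nil {
    		fmt.Println("not yet Running:", pods)
    	}
    }
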
	I0617 12:06:08.334463  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:10.335642  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:09.331628  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:11.332586  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:13.332703  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:12.834827  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:15.334721  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:15.333004  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:17.834357  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:17.833756  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:19.835364  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:22.333742  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:20.332127  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:22.832111  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:24.333945  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:26.335021  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:25.332366  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:27.835364  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:28.833758  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:31.334155  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:29.835500  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:32.332236  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:33.833599  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:35.834190  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:34.831122  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:36.833202  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:38.334352  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:40.335399  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:40.335423  166103 pod_ready.go:81] duration metric: took 4m0.008367222s for pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace to be "Ready" ...
	E0617 12:06:40.335433  166103 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0617 12:06:40.335441  166103 pod_ready.go:38] duration metric: took 4m7.419505963s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
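
This is the failure signature shared by the StartStop tests above: the metrics-server pod never reports Ready, and the extra wait gives up after its roughly 4-minute per-pod budget with a context-deadline error. An equivalent readiness wait can be reproduced via kubectl; the sketch below shells out to `kubectl wait`, and the label selector and 4m timeout are chosen to mirror the log, not taken from the test source.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // waitMetricsServerReady blocks until the metrics-server pod reports the
    // Ready condition or the timeout expires, using `kubectl wait`.
    func waitMetricsServerReady(kubeconfig string) error {
    	cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig,
    		"wait", "--namespace", "kube-system",
    		"--for=condition=Ready", "pod",
    		"--selector=k8s-app=metrics-server",
    		"--timeout=4m")
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	return err
    }

    func main() {
    	if err := waitMetricsServerReady("/var/lib/minikube/kubeconfig"); err != nil {
    		fmt.Println("metrics-server never became Ready:", err)
    	}
    }
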
	I0617 12:06:40.335475  166103 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:06:40.335505  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:06:40.335556  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:06:40.400354  166103 cri.go:89] found id: "5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:40.400384  166103 cri.go:89] found id: ""
	I0617 12:06:40.400394  166103 logs.go:276] 1 containers: [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b]
	I0617 12:06:40.400453  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.405124  166103 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:06:40.405186  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:06:40.440583  166103 cri.go:89] found id: "8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:40.440610  166103 cri.go:89] found id: ""
	I0617 12:06:40.440619  166103 logs.go:276] 1 containers: [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862]
	I0617 12:06:40.440665  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.445086  166103 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:06:40.445141  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:06:40.489676  166103 cri.go:89] found id: "26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:40.489698  166103 cri.go:89] found id: ""
	I0617 12:06:40.489706  166103 logs.go:276] 1 containers: [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323]
	I0617 12:06:40.489752  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.494402  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:06:40.494514  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:06:40.535486  166103 cri.go:89] found id: "2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
	I0617 12:06:40.535517  166103 cri.go:89] found id: ""
	I0617 12:06:40.535527  166103 logs.go:276] 1 containers: [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b]
	I0617 12:06:40.535589  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.543265  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:06:40.543330  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:06:40.579564  166103 cri.go:89] found id: "63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:40.579588  166103 cri.go:89] found id: ""
	I0617 12:06:40.579598  166103 logs.go:276] 1 containers: [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da]
	I0617 12:06:40.579658  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.583865  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:06:40.583928  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:06:40.642408  166103 cri.go:89] found id: "36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:40.642435  166103 cri.go:89] found id: ""
	I0617 12:06:40.642445  166103 logs.go:276] 1 containers: [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685]
	I0617 12:06:40.642509  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.647892  166103 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:06:40.647959  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:06:40.698654  166103 cri.go:89] found id: ""
	I0617 12:06:40.698686  166103 logs.go:276] 0 containers: []
	W0617 12:06:40.698696  166103 logs.go:278] No container was found matching "kindnet"
	I0617 12:06:40.698704  166103 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:06:40.698768  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:06:40.749641  166103 cri.go:89] found id: "adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:40.749663  166103 cri.go:89] found id: "e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:40.749668  166103 cri.go:89] found id: ""
	I0617 12:06:40.749678  166103 logs.go:276] 2 containers: [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc]
	I0617 12:06:40.749742  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.754926  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.760126  166103 logs.go:123] Gathering logs for container status ...
	I0617 12:06:40.760152  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:40.804119  166103 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:06:40.804159  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:06:40.942459  166103 logs.go:123] Gathering logs for etcd [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862] ...
	I0617 12:06:40.942495  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:40.994721  166103 logs.go:123] Gathering logs for coredns [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323] ...
	I0617 12:06:40.994761  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:41.037005  166103 logs.go:123] Gathering logs for kube-scheduler [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b] ...
	I0617 12:06:41.037040  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
	I0617 12:06:41.080715  166103 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:06:41.080751  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:06:41.606478  166103 logs.go:123] Gathering logs for storage-provisioner [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195] ...
	I0617 12:06:41.606516  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:41.643963  166103 logs.go:123] Gathering logs for storage-provisioner [e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc] ...
	I0617 12:06:41.644003  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:41.683405  166103 logs.go:123] Gathering logs for kubelet ...
	I0617 12:06:41.683443  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:06:41.737365  166103 logs.go:123] Gathering logs for dmesg ...
	I0617 12:06:41.737400  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:06:41.752552  166103 logs.go:123] Gathering logs for kube-apiserver [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b] ...
	I0617 12:06:41.752582  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:41.804447  166103 logs.go:123] Gathering logs for kube-proxy [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da] ...
	I0617 12:06:41.804480  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:41.847266  166103 logs.go:123] Gathering logs for kube-controller-manager [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685] ...
	I0617 12:06:41.847302  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:39.333111  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:41.836327  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:44.408776  166103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:06:44.427500  166103 api_server.go:72] duration metric: took 4m19.25316479s to wait for apiserver process to appear ...
	I0617 12:06:44.427531  166103 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:06:44.427577  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:06:44.427634  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:06:44.466379  166103 cri.go:89] found id: "5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:44.466408  166103 cri.go:89] found id: ""
	I0617 12:06:44.466418  166103 logs.go:276] 1 containers: [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b]
	I0617 12:06:44.466481  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.470832  166103 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:06:44.470901  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:06:44.511689  166103 cri.go:89] found id: "8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:44.511713  166103 cri.go:89] found id: ""
	I0617 12:06:44.511722  166103 logs.go:276] 1 containers: [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862]
	I0617 12:06:44.511769  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.516221  166103 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:06:44.516303  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:06:44.560612  166103 cri.go:89] found id: "26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:44.560634  166103 cri.go:89] found id: ""
	I0617 12:06:44.560642  166103 logs.go:276] 1 containers: [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323]
	I0617 12:06:44.560695  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.564998  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:06:44.565068  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:06:44.600133  166103 cri.go:89] found id: "2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
	I0617 12:06:44.600155  166103 cri.go:89] found id: ""
	I0617 12:06:44.600164  166103 logs.go:276] 1 containers: [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b]
	I0617 12:06:44.600220  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.605431  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:06:44.605494  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:06:44.648647  166103 cri.go:89] found id: "63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:44.648678  166103 cri.go:89] found id: ""
	I0617 12:06:44.648688  166103 logs.go:276] 1 containers: [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da]
	I0617 12:06:44.648758  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.653226  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:06:44.653307  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:06:44.701484  166103 cri.go:89] found id: "36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:44.701508  166103 cri.go:89] found id: ""
	I0617 12:06:44.701516  166103 logs.go:276] 1 containers: [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685]
	I0617 12:06:44.701572  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.707827  166103 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:06:44.707890  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:06:44.752362  166103 cri.go:89] found id: ""
	I0617 12:06:44.752391  166103 logs.go:276] 0 containers: []
	W0617 12:06:44.752402  166103 logs.go:278] No container was found matching "kindnet"
	I0617 12:06:44.752410  166103 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:06:44.752473  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:06:44.798926  166103 cri.go:89] found id: "adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:44.798955  166103 cri.go:89] found id: "e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:44.798961  166103 cri.go:89] found id: ""
	I0617 12:06:44.798970  166103 logs.go:276] 2 containers: [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc]
	I0617 12:06:44.799038  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.804702  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.810673  166103 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:06:44.810702  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:06:44.939596  166103 logs.go:123] Gathering logs for etcd [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862] ...
	I0617 12:06:44.939627  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:44.987902  166103 logs.go:123] Gathering logs for coredns [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323] ...
	I0617 12:06:44.987936  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:45.023931  166103 logs.go:123] Gathering logs for kube-proxy [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da] ...
	I0617 12:06:45.023962  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:45.060432  166103 logs.go:123] Gathering logs for storage-provisioner [e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc] ...
	I0617 12:06:45.060468  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:45.095643  166103 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:06:45.095679  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:06:45.553973  166103 logs.go:123] Gathering logs for kubelet ...
	I0617 12:06:45.554018  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:06:45.611997  166103 logs.go:123] Gathering logs for dmesg ...
	I0617 12:06:45.612036  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:06:45.626973  166103 logs.go:123] Gathering logs for container status ...
	I0617 12:06:45.627002  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:45.671119  166103 logs.go:123] Gathering logs for kube-controller-manager [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685] ...
	I0617 12:06:45.671151  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:45.728097  166103 logs.go:123] Gathering logs for storage-provisioner [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195] ...
	I0617 12:06:45.728133  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:45.765586  166103 logs.go:123] Gathering logs for kube-apiserver [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b] ...
	I0617 12:06:45.765615  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:45.818347  166103 logs.go:123] Gathering logs for kube-scheduler [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b] ...
	I0617 12:06:45.818387  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
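The block above is the log-gathering pass minikube runs while it waits: for each control-plane component it asks crictl for matching container IDs (running or exited), then tails the last 400 lines of each container's logs. A stand-alone sketch of that pattern, not minikube's logs.go; it assumes crictl is on PATH and sudo is available:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs mirrors "sudo crictl ps -a --quiet --name=<component>".
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
		for _, name := range components {
			ids, err := containerIDs(name)
			if err != nil {
				fmt.Printf("listing %s failed: %v\n", name, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", name)
				continue
			}
			for _, id := range ids {
				// Tail the last 400 log lines of each container, as in the log above.
				logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("--- %s [%s] ---\n%s\n", name, id, logs)
			}
		}
	}
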
	I0617 12:06:43.941225  165698 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0617 12:06:43.941341  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:06:43.941612  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:06:44.331481  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:46.831820  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:48.362826  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:06:48.366936  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 200:
	ok
	I0617 12:06:48.367973  166103 api_server.go:141] control plane version: v1.30.1
	I0617 12:06:48.367992  166103 api_server.go:131] duration metric: took 3.940452539s to wait for apiserver health ...
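The healthz wait that just completed (and the kubelet-check probe of http://localhost:10248/healthz a few lines earlier) is a poll-until-200 loop against a health endpoint. A minimal sketch of that pattern follows; it is not minikube's api_server.go, and the endpoint, timeout, and the decision to skip TLS verification are illustrative assumptions:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"strings"
		"time"
	)

	func waitForHealthz(url string, deadline time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver in this setup serves a self-signed certificate, so the
			// sketch skips verification; a real client would trust the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				// The endpoint answers 200 with body "ok" once the apiserver is healthy.
				if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("timed out waiting for %s to report ok", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.125:8444/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver is healthy")
	}
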
	I0617 12:06:48.367999  166103 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:06:48.368021  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:06:48.368066  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:06:48.404797  166103 cri.go:89] found id: "5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:48.404819  166103 cri.go:89] found id: ""
	I0617 12:06:48.404828  166103 logs.go:276] 1 containers: [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b]
	I0617 12:06:48.404887  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.409105  166103 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:06:48.409162  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:06:48.456233  166103 cri.go:89] found id: "8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:48.456266  166103 cri.go:89] found id: ""
	I0617 12:06:48.456277  166103 logs.go:276] 1 containers: [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862]
	I0617 12:06:48.456336  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.460550  166103 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:06:48.460625  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:06:48.498447  166103 cri.go:89] found id: "26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:48.498472  166103 cri.go:89] found id: ""
	I0617 12:06:48.498481  166103 logs.go:276] 1 containers: [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323]
	I0617 12:06:48.498564  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.503826  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:06:48.503906  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:06:48.554405  166103 cri.go:89] found id: "2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
	I0617 12:06:48.554435  166103 cri.go:89] found id: ""
	I0617 12:06:48.554446  166103 logs.go:276] 1 containers: [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b]
	I0617 12:06:48.554504  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.559175  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:06:48.559240  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:06:48.596764  166103 cri.go:89] found id: "63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:48.596791  166103 cri.go:89] found id: ""
	I0617 12:06:48.596801  166103 logs.go:276] 1 containers: [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da]
	I0617 12:06:48.596863  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.601197  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:06:48.601260  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:06:48.654027  166103 cri.go:89] found id: "36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:48.654053  166103 cri.go:89] found id: ""
	I0617 12:06:48.654061  166103 logs.go:276] 1 containers: [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685]
	I0617 12:06:48.654113  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.659492  166103 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:06:48.659557  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:06:48.706749  166103 cri.go:89] found id: ""
	I0617 12:06:48.706777  166103 logs.go:276] 0 containers: []
	W0617 12:06:48.706786  166103 logs.go:278] No container was found matching "kindnet"
	I0617 12:06:48.706794  166103 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:06:48.706859  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:06:48.750556  166103 cri.go:89] found id: "adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:48.750588  166103 cri.go:89] found id: "e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:48.750594  166103 cri.go:89] found id: ""
	I0617 12:06:48.750607  166103 logs.go:276] 2 containers: [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc]
	I0617 12:06:48.750671  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.755368  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.760128  166103 logs.go:123] Gathering logs for kube-apiserver [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b] ...
	I0617 12:06:48.760154  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:48.802187  166103 logs.go:123] Gathering logs for etcd [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862] ...
	I0617 12:06:48.802224  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:48.861041  166103 logs.go:123] Gathering logs for kube-controller-manager [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685] ...
	I0617 12:06:48.861076  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:48.917864  166103 logs.go:123] Gathering logs for storage-provisioner [e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc] ...
	I0617 12:06:48.917902  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:48.963069  166103 logs.go:123] Gathering logs for container status ...
	I0617 12:06:48.963099  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:49.012109  166103 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:06:49.012149  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:06:49.119880  166103 logs.go:123] Gathering logs for dmesg ...
	I0617 12:06:49.119915  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:06:49.136461  166103 logs.go:123] Gathering logs for coredns [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323] ...
	I0617 12:06:49.136497  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:49.177339  166103 logs.go:123] Gathering logs for kube-scheduler [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b] ...
	I0617 12:06:49.177377  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
	I0617 12:06:49.219101  166103 logs.go:123] Gathering logs for kube-proxy [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da] ...
	I0617 12:06:49.219135  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:49.256646  166103 logs.go:123] Gathering logs for storage-provisioner [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195] ...
	I0617 12:06:49.256687  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:49.302208  166103 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:06:49.302243  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:06:49.653713  166103 logs.go:123] Gathering logs for kubelet ...
	I0617 12:06:49.653758  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:06:52.217069  166103 system_pods.go:59] 8 kube-system pods found
	I0617 12:06:52.217102  166103 system_pods.go:61] "coredns-7db6d8ff4d-mnw24" [1e6c4ff3-f0dc-43da-abd8-baaed7dca40c] Running
	I0617 12:06:52.217107  166103 system_pods.go:61] "etcd-default-k8s-diff-port-991309" [820a4f27-cf83-4edb-a2ea-edba6673d851] Running
	I0617 12:06:52.217111  166103 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-991309" [26e6c19d-6f70-4924-83f5-563c8508c9e3] Running
	I0617 12:06:52.217115  166103 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-991309" [01e7c468-98a6-48f3-a158-59e97fa8279c] Running
	I0617 12:06:52.217119  166103 system_pods.go:61] "kube-proxy-jn5kp" [d6935148-7ee8-4655-8327-9f1ee4c933de] Running
	I0617 12:06:52.217122  166103 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-991309" [53ecd22c-05cf-48a5-b7e5-925392085f7a] Running
	I0617 12:06:52.217128  166103 system_pods.go:61] "metrics-server-569cc877fc-n2svp" [5b637d97-3183-4324-98cf-dd69a2968578] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:06:52.217134  166103 system_pods.go:61] "storage-provisioner" [92b20aec-29c2-4256-86be-7f58f66585dd] Running
	I0617 12:06:52.217145  166103 system_pods.go:74] duration metric: took 3.849140024s to wait for pod list to return data ...
	I0617 12:06:52.217152  166103 default_sa.go:34] waiting for default service account to be created ...
	I0617 12:06:52.219308  166103 default_sa.go:45] found service account: "default"
	I0617 12:06:52.219330  166103 default_sa.go:55] duration metric: took 2.172323ms for default service account to be created ...
	I0617 12:06:52.219339  166103 system_pods.go:116] waiting for k8s-apps to be running ...
	I0617 12:06:52.224239  166103 system_pods.go:86] 8 kube-system pods found
	I0617 12:06:52.224265  166103 system_pods.go:89] "coredns-7db6d8ff4d-mnw24" [1e6c4ff3-f0dc-43da-abd8-baaed7dca40c] Running
	I0617 12:06:52.224270  166103 system_pods.go:89] "etcd-default-k8s-diff-port-991309" [820a4f27-cf83-4edb-a2ea-edba6673d851] Running
	I0617 12:06:52.224276  166103 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-991309" [26e6c19d-6f70-4924-83f5-563c8508c9e3] Running
	I0617 12:06:52.224280  166103 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-991309" [01e7c468-98a6-48f3-a158-59e97fa8279c] Running
	I0617 12:06:52.224284  166103 system_pods.go:89] "kube-proxy-jn5kp" [d6935148-7ee8-4655-8327-9f1ee4c933de] Running
	I0617 12:06:52.224288  166103 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-991309" [53ecd22c-05cf-48a5-b7e5-925392085f7a] Running
	I0617 12:06:52.224299  166103 system_pods.go:89] "metrics-server-569cc877fc-n2svp" [5b637d97-3183-4324-98cf-dd69a2968578] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:06:52.224305  166103 system_pods.go:89] "storage-provisioner" [92b20aec-29c2-4256-86be-7f58f66585dd] Running
	I0617 12:06:52.224319  166103 system_pods.go:126] duration metric: took 4.973603ms to wait for k8s-apps to be running ...
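The "waiting for k8s-apps to be running" step above amounts to listing the kube-system pods and checking their phase (the Pending metrics-server pod is what keeps the related tests waiting). A small client-go sketch of that idea, offered as an illustration only and assuming ~/.kube/config points at the cluster; it is not minikube's system_pods.go:

	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		notRunning := 0
		for _, p := range pods.Items {
			// Succeeded pods (completed jobs) are fine; anything else that is not
			// Running counts as "not ready yet".
			if p.Status.Phase != corev1.PodRunning && p.Status.Phase != corev1.PodSucceeded {
				notRunning++
				fmt.Printf("%s is %s\n", p.Name, p.Status.Phase)
			}
		}
		fmt.Printf("%d kube-system pods found, %d not running\n", len(pods.Items), notRunning)
	}
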
	I0617 12:06:52.224332  166103 system_svc.go:44] waiting for kubelet service to be running ....
	I0617 12:06:52.224380  166103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:06:52.241121  166103 system_svc.go:56] duration metric: took 16.776061ms WaitForService to wait for kubelet
	I0617 12:06:52.241156  166103 kubeadm.go:576] duration metric: took 4m27.066827271s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 12:06:52.241181  166103 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:06:52.245359  166103 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:06:52.245407  166103 node_conditions.go:123] node cpu capacity is 2
	I0617 12:06:52.245423  166103 node_conditions.go:105] duration metric: took 4.235898ms to run NodePressure ...
	I0617 12:06:52.245440  166103 start.go:240] waiting for startup goroutines ...
	I0617 12:06:52.245449  166103 start.go:245] waiting for cluster config update ...
	I0617 12:06:52.245462  166103 start.go:254] writing updated cluster config ...
	I0617 12:06:52.245969  166103 ssh_runner.go:195] Run: rm -f paused
	I0617 12:06:52.299326  166103 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0617 12:06:52.301413  166103 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-991309" cluster and "default" namespace by default
	I0617 12:06:48.942159  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:06:48.942434  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:06:48.835113  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:51.331395  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:53.331551  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:55.332455  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:57.835143  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:58.942977  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:06:58.943290  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:07:00.331823  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:07:02.332214  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:07:04.831284  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:07:06.832082  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:07:07.325414  164809 pod_ready.go:81] duration metric: took 4m0.000322555s for pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace to be "Ready" ...
	E0617 12:07:07.325446  164809 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0617 12:07:07.325464  164809 pod_ready.go:38] duration metric: took 4m12.035995337s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:07:07.325494  164809 kubeadm.go:591] duration metric: took 4m19.041266463s to restartPrimaryControlPlane
	W0617 12:07:07.325556  164809 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0617 12:07:07.325587  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0617 12:07:18.944149  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:07:18.944368  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:07:38.980378  164809 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.654762508s)
	I0617 12:07:38.980451  164809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:07:38.997845  164809 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:07:39.009456  164809 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:07:39.020407  164809 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:07:39.020430  164809 kubeadm.go:156] found existing configuration files:
	
	I0617 12:07:39.020472  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:07:39.030323  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:07:39.030376  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:07:39.040298  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:07:39.049715  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:07:39.049757  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:07:39.060493  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:07:39.069921  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:07:39.069973  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:07:39.080049  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:07:39.089524  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:07:39.089569  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
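The sequence above is a stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is otherwise removed so the next kubeadm init can regenerate it. A simplified sketch of that logic, with the paths and endpoint copied from the log; this is not minikube's kubeadm.go, and running it literally would need root:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing file or wrong endpoint: remove it (ignore errors, since
				// the file may simply not exist yet) so kubeadm writes a fresh one.
				os.Remove(f)
				fmt.Println("removed stale config:", f)
				continue
			}
			fmt.Println("keeping:", f)
		}
	}
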
	I0617 12:07:39.099082  164809 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0617 12:07:39.154963  164809 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0617 12:07:39.155083  164809 kubeadm.go:309] [preflight] Running pre-flight checks
	I0617 12:07:39.286616  164809 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0617 12:07:39.286809  164809 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0617 12:07:39.286977  164809 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0617 12:07:39.487542  164809 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0617 12:07:39.489554  164809 out.go:204]   - Generating certificates and keys ...
	I0617 12:07:39.489665  164809 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0617 12:07:39.489732  164809 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0617 12:07:39.489855  164809 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0617 12:07:39.489969  164809 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0617 12:07:39.490088  164809 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0617 12:07:39.490187  164809 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0617 12:07:39.490274  164809 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0617 12:07:39.490386  164809 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0617 12:07:39.490508  164809 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0617 12:07:39.490643  164809 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0617 12:07:39.490750  164809 kubeadm.go:309] [certs] Using the existing "sa" key
	I0617 12:07:39.490849  164809 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0617 12:07:39.565788  164809 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0617 12:07:39.643443  164809 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0617 12:07:39.765615  164809 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0617 12:07:39.851182  164809 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0617 12:07:40.041938  164809 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0617 12:07:40.042576  164809 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0617 12:07:40.045112  164809 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0617 12:07:40.047144  164809 out.go:204]   - Booting up control plane ...
	I0617 12:07:40.047265  164809 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0617 12:07:40.047374  164809 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0617 12:07:40.047995  164809 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0617 12:07:40.070163  164809 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0617 12:07:40.071308  164809 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0617 12:07:40.071415  164809 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0617 12:07:40.204578  164809 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0617 12:07:40.204698  164809 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0617 12:07:41.210782  164809 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.0065421s
	I0617 12:07:41.210902  164809 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0617 12:07:45.713194  164809 kubeadm.go:309] [api-check] The API server is healthy after 4.501871798s
	I0617 12:07:45.735311  164809 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0617 12:07:45.760405  164809 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0617 12:07:45.795429  164809 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0617 12:07:45.795770  164809 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-152830 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0617 12:07:45.816446  164809 kubeadm.go:309] [bootstrap-token] Using token: ryfqxd.olkegn8a1unpvnbq
	I0617 12:07:45.817715  164809 out.go:204]   - Configuring RBAC rules ...
	I0617 12:07:45.817890  164809 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0617 12:07:45.826422  164809 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0617 12:07:45.852291  164809 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0617 12:07:45.867538  164809 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0617 12:07:45.880697  164809 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0617 12:07:45.887707  164809 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0617 12:07:46.120211  164809 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0617 12:07:46.593168  164809 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0617 12:07:47.119377  164809 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0617 12:07:47.120840  164809 kubeadm.go:309] 
	I0617 12:07:47.120933  164809 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0617 12:07:47.120947  164809 kubeadm.go:309] 
	I0617 12:07:47.121057  164809 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0617 12:07:47.121069  164809 kubeadm.go:309] 
	I0617 12:07:47.121123  164809 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0617 12:07:47.124361  164809 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0617 12:07:47.124443  164809 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0617 12:07:47.124464  164809 kubeadm.go:309] 
	I0617 12:07:47.124538  164809 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0617 12:07:47.124550  164809 kubeadm.go:309] 
	I0617 12:07:47.124607  164809 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0617 12:07:47.124617  164809 kubeadm.go:309] 
	I0617 12:07:47.124724  164809 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0617 12:07:47.124838  164809 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0617 12:07:47.124938  164809 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0617 12:07:47.124949  164809 kubeadm.go:309] 
	I0617 12:07:47.125085  164809 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0617 12:07:47.125191  164809 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0617 12:07:47.125203  164809 kubeadm.go:309] 
	I0617 12:07:47.125343  164809 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ryfqxd.olkegn8a1unpvnbq \
	I0617 12:07:47.125479  164809 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a750c130b3df91ed6d57229f5a5d5a2ee0acd56a757f499599f368bc07dbf207 \
	I0617 12:07:47.125510  164809 kubeadm.go:309] 	--control-plane 
	I0617 12:07:47.125518  164809 kubeadm.go:309] 
	I0617 12:07:47.125616  164809 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0617 12:07:47.125627  164809 kubeadm.go:309] 
	I0617 12:07:47.125724  164809 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ryfqxd.olkegn8a1unpvnbq \
	I0617 12:07:47.125852  164809 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a750c130b3df91ed6d57229f5a5d5a2ee0acd56a757f499599f368bc07dbf207 
	I0617 12:07:47.126915  164809 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0617 12:07:47.126966  164809 cni.go:84] Creating CNI manager for ""
	I0617 12:07:47.126983  164809 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:07:47.128899  164809 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0617 12:07:47.130229  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0617 12:07:47.142301  164809 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
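Configuring the bridge CNI above means writing a conflist into /etc/cni/net.d (the log shows 496 bytes copied to 1-k8s.conflist, but the exact content is not reproduced here). Purely for illustration, a Go sketch that writes a typical bridge + host-local conflist; every field value below is an assumption, not the file minikube actually ships:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	func main() {
		// A typical bridge conflist: the "bridge" plugin with host-local IPAM plus
		// the "portmap" plugin. Names and the subnet are illustrative assumptions.
		conflist := map[string]interface{}{
			"cniVersion": "0.3.1",
			"name":       "bridge",
			"plugins": []map[string]interface{}{
				{
					"type":             "bridge",
					"bridge":           "bridge",
					"isDefaultGateway": true,
					"ipMasq":           true,
					"hairpinMode":      true,
					"ipam": map[string]interface{}{
						"type":   "host-local",
						"subnet": "10.244.0.0/16",
					},
				},
				{
					"type":         "portmap",
					"capabilities": map[string]bool{"portMappings": true},
				},
			},
		}
		data, err := json.MarshalIndent(conflist, "", "  ")
		if err != nil {
			panic(err)
		}
		// Writing under /etc/cni/net.d requires root; adjust the path to try it locally.
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", data, 0o644); err != nil {
			fmt.Println("write failed:", err)
			return
		}
		fmt.Printf("wrote %d bytes\n", len(data))
	}
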
	I0617 12:07:47.163380  164809 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0617 12:07:47.163500  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:47.163503  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-152830 minikube.k8s.io/updated_at=2024_06_17T12_07_47_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6 minikube.k8s.io/name=no-preload-152830 minikube.k8s.io/primary=true
	I0617 12:07:47.375089  164809 ops.go:34] apiserver oom_adj: -16
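The oom_adj check above reads /proc/<pid>/oom_adj for the apiserver process; the value -16 means the kernel is strongly discouraged from OOM-killing it. A small sketch of the same check, assuming pgrep is available (the log uses "pgrep -xnf" with a pattern; the simpler exact-name match here is an assumption, and this is not minikube's ops.go):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Find the newest kube-apiserver process by exact name.
		out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
		if err != nil {
			fmt.Println("kube-apiserver not running:", err)
			return
		}
		pid := strings.TrimSpace(string(out))
		data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			fmt.Println("read failed:", err)
			return
		}
		fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(data))) // e.g. -16
	}
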
	I0617 12:07:47.375266  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:47.875477  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:48.375626  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:48.876185  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:49.375621  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:49.875597  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:50.376188  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:50.875983  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:51.375537  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:51.876321  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:52.375920  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:52.876348  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:53.375623  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:53.875369  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:54.375747  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:54.875581  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:55.376244  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:55.875866  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:56.376285  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:56.876228  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:57.375990  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:57.875392  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:58.946943  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:07:58.947220  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:07:58.947233  165698 kubeadm.go:309] 
	I0617 12:07:58.947316  165698 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0617 12:07:58.947393  165698 kubeadm.go:309] 		timed out waiting for the condition
	I0617 12:07:58.947406  165698 kubeadm.go:309] 
	I0617 12:07:58.947449  165698 kubeadm.go:309] 	This error is likely caused by:
	I0617 12:07:58.947528  165698 kubeadm.go:309] 		- The kubelet is not running
	I0617 12:07:58.947690  165698 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0617 12:07:58.947699  165698 kubeadm.go:309] 
	I0617 12:07:58.947860  165698 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0617 12:07:58.947924  165698 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0617 12:07:58.947976  165698 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0617 12:07:58.947991  165698 kubeadm.go:309] 
	I0617 12:07:58.948132  165698 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0617 12:07:58.948247  165698 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0617 12:07:58.948260  165698 kubeadm.go:309] 
	I0617 12:07:58.948406  165698 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0617 12:07:58.948539  165698 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0617 12:07:58.948639  165698 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0617 12:07:58.948740  165698 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0617 12:07:58.948750  165698 kubeadm.go:309] 
	I0617 12:07:58.949270  165698 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0617 12:07:58.949403  165698 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0617 12:07:58.949508  165698 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0617 12:07:58.949630  165698 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0617 12:07:58.949694  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0617 12:07:59.418622  165698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:07:59.435367  165698 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:07:59.449365  165698 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:07:59.449384  165698 kubeadm.go:156] found existing configuration files:
	
	I0617 12:07:59.449430  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:07:59.461411  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:07:59.461478  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:07:59.471262  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:07:59.480591  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:07:59.480640  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:07:59.490152  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:07:59.499248  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:07:59.499300  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:07:59.508891  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:07:59.518114  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:07:59.518152  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 12:07:59.528190  165698 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0617 12:07:59.592831  165698 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0617 12:07:59.592949  165698 kubeadm.go:309] [preflight] Running pre-flight checks
	I0617 12:07:59.752802  165698 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0617 12:07:59.752947  165698 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0617 12:07:59.753079  165698 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0617 12:07:59.984221  165698 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0617 12:07:58.375522  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:58.876221  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:59.375941  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:59.875924  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:08:00.063788  164809 kubeadm.go:1107] duration metric: took 12.900376954s to wait for elevateKubeSystemPrivileges
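The repeated "kubectl get sa default" calls above form a retry loop: the step completes once the "default" service account exists in the freshly initialized cluster. A stand-alone sketch of that wait, offered as an illustration rather than minikube's elevateKubeSystemPrivileges; the kubectl path, interval, and deadline are assumptions:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// Succeeds only once the "default" service account has been created.
			if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
				fmt.Println("default service account is present")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for the default service account")
	}
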
	W0617 12:08:00.063860  164809 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0617 12:08:00.063871  164809 kubeadm.go:393] duration metric: took 5m11.831587226s to StartCluster
	I0617 12:08:00.063895  164809 settings.go:142] acquiring lock: {Name:mkf6da6d5dcdf32cef469c2b75da17d11fa1e39e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:08:00.063996  164809 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 12:08:00.066593  164809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/kubeconfig: {Name:mkf81bd1831c0194f784e5c176b265c5061bea5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:08:00.066922  164809 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 12:08:00.068556  164809 out.go:177] * Verifying Kubernetes components...
	I0617 12:08:00.067029  164809 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0617 12:08:00.067131  164809 config.go:182] Loaded profile config "no-preload-152830": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:08:00.069969  164809 addons.go:69] Setting storage-provisioner=true in profile "no-preload-152830"
	I0617 12:08:00.069983  164809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:08:00.069992  164809 addons.go:69] Setting metrics-server=true in profile "no-preload-152830"
	I0617 12:08:00.070015  164809 addons.go:234] Setting addon metrics-server=true in "no-preload-152830"
	I0617 12:08:00.070014  164809 addons.go:234] Setting addon storage-provisioner=true in "no-preload-152830"
	W0617 12:08:00.070021  164809 addons.go:243] addon metrics-server should already be in state true
	W0617 12:08:00.070024  164809 addons.go:243] addon storage-provisioner should already be in state true
	I0617 12:08:00.070055  164809 host.go:66] Checking if "no-preload-152830" exists ...
	I0617 12:08:00.070057  164809 host.go:66] Checking if "no-preload-152830" exists ...
	I0617 12:08:00.069984  164809 addons.go:69] Setting default-storageclass=true in profile "no-preload-152830"
	I0617 12:08:00.070116  164809 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-152830"
	I0617 12:08:00.070426  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.070428  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.070443  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.070451  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.070475  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.070494  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.088451  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36453
	I0617 12:08:00.089105  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.089673  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.089700  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.090074  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.090673  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.090723  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.091118  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33445
	I0617 12:08:00.091150  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44157
	I0617 12:08:00.091756  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.091880  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.092306  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.092327  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.092470  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.092487  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.093006  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.093081  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.093169  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetState
	I0617 12:08:00.093683  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.093722  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.096819  164809 addons.go:234] Setting addon default-storageclass=true in "no-preload-152830"
	W0617 12:08:00.096839  164809 addons.go:243] addon default-storageclass should already be in state true
	I0617 12:08:00.096868  164809 host.go:66] Checking if "no-preload-152830" exists ...
	I0617 12:08:00.097223  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.097252  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.110063  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33623
	I0617 12:08:00.110843  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.111489  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.111509  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.112419  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.112633  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetState
	I0617 12:08:00.112859  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39555
	I0617 12:08:00.113245  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.113927  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.113946  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.114470  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.114758  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:08:00.116377  164809 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0617 12:08:00.115146  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.117266  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37965
	I0617 12:08:00.117647  164809 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0617 12:08:00.117663  164809 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0617 12:08:00.117674  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.117681  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:08:00.118504  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.119076  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.119091  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.119440  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.119755  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetState
	I0617 12:08:00.121396  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.121620  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:08:00.123146  164809 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:07:59.986165  165698 out.go:204]   - Generating certificates and keys ...
	I0617 12:07:59.986270  165698 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0617 12:07:59.986391  165698 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0617 12:07:59.986522  165698 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0617 12:07:59.986606  165698 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0617 12:07:59.986717  165698 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0617 12:07:59.986795  165698 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0617 12:07:59.986887  165698 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0617 12:07:59.986972  165698 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0617 12:07:59.987081  165698 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0617 12:07:59.987191  165698 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0617 12:07:59.987250  165698 kubeadm.go:309] [certs] Using the existing "sa" key
	I0617 12:07:59.987331  165698 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0617 12:08:00.155668  165698 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0617 12:08:00.303780  165698 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0617 12:08:00.369907  165698 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0617 12:08:00.506550  165698 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0617 12:08:00.529943  165698 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0617 12:08:00.531684  165698 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0617 12:08:00.531756  165698 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0617 12:08:00.667972  165698 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0617 12:08:00.122003  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:08:00.122146  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:08:00.124748  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.124895  164809 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:08:00.124914  164809 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0617 12:08:00.124934  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:08:00.124957  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:08:00.125142  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:08:00.125446  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:08:00.128559  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.128991  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:08:00.129011  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.129239  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:08:00.129434  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:08:00.129537  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:08:00.129640  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:08:00.142435  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39073
	I0617 12:08:00.142915  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.143550  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.143583  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.143946  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.144168  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetState
	I0617 12:08:00.145972  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:08:00.146165  164809 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0617 12:08:00.146178  164809 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0617 12:08:00.146196  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:08:00.149316  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.149720  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:08:00.149743  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.149926  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:08:00.150106  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:08:00.150273  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:08:00.150434  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:08:00.294731  164809 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:08:00.317727  164809 node_ready.go:35] waiting up to 6m0s for node "no-preload-152830" to be "Ready" ...
	I0617 12:08:00.346507  164809 node_ready.go:49] node "no-preload-152830" has status "Ready":"True"
	I0617 12:08:00.346533  164809 node_ready.go:38] duration metric: took 28.776898ms for node "no-preload-152830" to be "Ready" ...
	I0617 12:08:00.346544  164809 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:08:00.404097  164809 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gjt84" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:00.412303  164809 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0617 12:08:00.412325  164809 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0617 12:08:00.415269  164809 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:08:00.438024  164809 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0617 12:08:00.514528  164809 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0617 12:08:00.514561  164809 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0617 12:08:00.629109  164809 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:08:00.629141  164809 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0617 12:08:00.677084  164809 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:08:01.113979  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.114007  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.114432  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.114445  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.114507  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.114526  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.114536  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.114846  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.114866  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.117124  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.117141  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.117437  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.117457  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.117478  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.117496  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.117508  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.117821  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.117858  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.117882  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.125648  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.125668  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.125998  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.126020  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.126030  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.325217  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.325242  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.325579  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.325633  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.325669  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.325669  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.325682  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.325960  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.325977  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.326007  164809 addons.go:475] Verifying addon metrics-server=true in "no-preload-152830"
	I0617 12:08:01.326037  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.327744  164809 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0617 12:08:00.671036  165698 out.go:204]   - Booting up control plane ...
	I0617 12:08:00.671171  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0617 12:08:00.677241  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0617 12:08:00.678999  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0617 12:08:00.681119  165698 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0617 12:08:00.684535  165698 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0617 12:08:01.329155  164809 addons.go:510] duration metric: took 1.262127108s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0617 12:08:02.425731  164809 pod_ready.go:102] pod "coredns-7db6d8ff4d-gjt84" in "kube-system" namespace has status "Ready":"False"
	I0617 12:08:03.910467  164809 pod_ready.go:92] pod "coredns-7db6d8ff4d-gjt84" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:03.910494  164809 pod_ready.go:81] duration metric: took 3.506370946s for pod "coredns-7db6d8ff4d-gjt84" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.910508  164809 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vz7dg" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.916309  164809 pod_ready.go:92] pod "coredns-7db6d8ff4d-vz7dg" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:03.916331  164809 pod_ready.go:81] duration metric: took 5.814812ms for pod "coredns-7db6d8ff4d-vz7dg" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.916340  164809 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.920834  164809 pod_ready.go:92] pod "etcd-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:03.920862  164809 pod_ready.go:81] duration metric: took 4.51438ms for pod "etcd-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.920874  164809 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.924955  164809 pod_ready.go:92] pod "kube-apiserver-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:03.924973  164809 pod_ready.go:81] duration metric: took 4.09301ms for pod "kube-apiserver-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.924982  164809 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.929301  164809 pod_ready.go:92] pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:03.929318  164809 pod_ready.go:81] duration metric: took 4.33061ms for pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.929326  164809 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:04.308546  164809 pod_ready.go:92] pod "kube-scheduler-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:04.308570  164809 pod_ready.go:81] duration metric: took 379.237147ms for pod "kube-scheduler-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:04.308578  164809 pod_ready.go:38] duration metric: took 3.962022714s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:08:04.308594  164809 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:08:04.308644  164809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:08:04.327383  164809 api_server.go:72] duration metric: took 4.260420928s to wait for apiserver process to appear ...
	I0617 12:08:04.327408  164809 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:08:04.327426  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:08:04.332321  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 200:
	ok
	I0617 12:08:04.333390  164809 api_server.go:141] control plane version: v1.30.1
	I0617 12:08:04.333412  164809 api_server.go:131] duration metric: took 5.998312ms to wait for apiserver health ...
	I0617 12:08:04.333420  164809 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:08:04.512267  164809 system_pods.go:59] 9 kube-system pods found
	I0617 12:08:04.512298  164809 system_pods.go:61] "coredns-7db6d8ff4d-gjt84" [979c7339-3a4c-4bc8-8586-4d9da42339ae] Running
	I0617 12:08:04.512302  164809 system_pods.go:61] "coredns-7db6d8ff4d-vz7dg" [53c5188e-bc44-4aed-a989-ef3e2379c27b] Running
	I0617 12:08:04.512306  164809 system_pods.go:61] "etcd-no-preload-152830" [2b82d709-0776-470a-a538-f132b84be2e0] Running
	I0617 12:08:04.512310  164809 system_pods.go:61] "kube-apiserver-no-preload-152830" [e40c7c7b-b029-4f65-ac36-f4ff95eabc23] Running
	I0617 12:08:04.512313  164809 system_pods.go:61] "kube-controller-manager-no-preload-152830" [c2adec58-05a4-4993-b9a3-28f9ef519a63] Running
	I0617 12:08:04.512317  164809 system_pods.go:61] "kube-proxy-6c4hm" [a9830236-af96-437f-ad07-494b25f1a90e] Running
	I0617 12:08:04.512319  164809 system_pods.go:61] "kube-scheduler-no-preload-152830" [876671da-097b-43c1-9055-95c2ed7620aa] Running
	I0617 12:08:04.512325  164809 system_pods.go:61] "metrics-server-569cc877fc-zllzk" [e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:08:04.512329  164809 system_pods.go:61] "storage-provisioner" [b6cc7cdc-43f4-40c4-a202-5674fcdcedd0] Running
	I0617 12:08:04.512340  164809 system_pods.go:74] duration metric: took 178.914377ms to wait for pod list to return data ...
	I0617 12:08:04.512347  164809 default_sa.go:34] waiting for default service account to be created ...
	I0617 12:08:04.707834  164809 default_sa.go:45] found service account: "default"
	I0617 12:08:04.707874  164809 default_sa.go:55] duration metric: took 195.518331ms for default service account to be created ...
	I0617 12:08:04.707886  164809 system_pods.go:116] waiting for k8s-apps to be running ...
	I0617 12:08:04.916143  164809 system_pods.go:86] 9 kube-system pods found
	I0617 12:08:04.916173  164809 system_pods.go:89] "coredns-7db6d8ff4d-gjt84" [979c7339-3a4c-4bc8-8586-4d9da42339ae] Running
	I0617 12:08:04.916178  164809 system_pods.go:89] "coredns-7db6d8ff4d-vz7dg" [53c5188e-bc44-4aed-a989-ef3e2379c27b] Running
	I0617 12:08:04.916183  164809 system_pods.go:89] "etcd-no-preload-152830" [2b82d709-0776-470a-a538-f132b84be2e0] Running
	I0617 12:08:04.916187  164809 system_pods.go:89] "kube-apiserver-no-preload-152830" [e40c7c7b-b029-4f65-ac36-f4ff95eabc23] Running
	I0617 12:08:04.916191  164809 system_pods.go:89] "kube-controller-manager-no-preload-152830" [c2adec58-05a4-4993-b9a3-28f9ef519a63] Running
	I0617 12:08:04.916195  164809 system_pods.go:89] "kube-proxy-6c4hm" [a9830236-af96-437f-ad07-494b25f1a90e] Running
	I0617 12:08:04.916199  164809 system_pods.go:89] "kube-scheduler-no-preload-152830" [876671da-097b-43c1-9055-95c2ed7620aa] Running
	I0617 12:08:04.916211  164809 system_pods.go:89] "metrics-server-569cc877fc-zllzk" [e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:08:04.916219  164809 system_pods.go:89] "storage-provisioner" [b6cc7cdc-43f4-40c4-a202-5674fcdcedd0] Running
	I0617 12:08:04.916231  164809 system_pods.go:126] duration metric: took 208.336851ms to wait for k8s-apps to be running ...
	I0617 12:08:04.916245  164809 system_svc.go:44] waiting for kubelet service to be running ....
	I0617 12:08:04.916306  164809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:08:04.933106  164809 system_svc.go:56] duration metric: took 16.850122ms WaitForService to wait for kubelet
	I0617 12:08:04.933135  164809 kubeadm.go:576] duration metric: took 4.866178671s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 12:08:04.933159  164809 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:08:05.108094  164809 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:08:05.108120  164809 node_conditions.go:123] node cpu capacity is 2
	I0617 12:08:05.108133  164809 node_conditions.go:105] duration metric: took 174.968414ms to run NodePressure ...
	I0617 12:08:05.108148  164809 start.go:240] waiting for startup goroutines ...
	I0617 12:08:05.108160  164809 start.go:245] waiting for cluster config update ...
	I0617 12:08:05.108173  164809 start.go:254] writing updated cluster config ...
	I0617 12:08:05.108496  164809 ssh_runner.go:195] Run: rm -f paused
	I0617 12:08:05.160610  164809 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0617 12:08:05.162777  164809 out.go:177] * Done! kubectl is now configured to use "no-preload-152830" cluster and "default" namespace by default
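The "Done!" line above marks a successful start of the no-preload-152830 profile, but the pod listings a few lines earlier still show metrics-server-569cc877fc-zllzk Pending with ContainersNotReady, and the addon was pointed at the image fake.domain/registry.k8s.io/echoserver:1.4, which is not a resolvable registry. A minimal sketch for inspecting that pod against this cluster, assuming the kubeconfig context name from the log and the conventional k8s-app=metrics-server label used by the metrics-server manifests (an assumption, not confirmed by this log):

# Illustrative follow-up; context, namespace and pod name are taken from the log above,
# the label selector is an assumed convention of the metrics-server manifests.
kubectl --context no-preload-152830 -n kube-system get pods -l k8s-app=metrics-server
kubectl --context no-preload-152830 -n kube-system describe pod metrics-server-569cc877fc-zllzk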
	I0617 12:08:40.686610  165698 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0617 12:08:40.686950  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:08:40.687194  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:08:45.687594  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:08:45.687820  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:08:55.688285  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:08:55.688516  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:09:15.689306  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:09:15.689556  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:09:55.688872  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:09:55.689162  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:09:55.689206  165698 kubeadm.go:309] 
	I0617 12:09:55.689284  165698 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0617 12:09:55.689342  165698 kubeadm.go:309] 		timed out waiting for the condition
	I0617 12:09:55.689354  165698 kubeadm.go:309] 
	I0617 12:09:55.689418  165698 kubeadm.go:309] 	This error is likely caused by:
	I0617 12:09:55.689480  165698 kubeadm.go:309] 		- The kubelet is not running
	I0617 12:09:55.689632  165698 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0617 12:09:55.689657  165698 kubeadm.go:309] 
	I0617 12:09:55.689791  165698 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0617 12:09:55.689844  165698 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0617 12:09:55.689916  165698 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0617 12:09:55.689926  165698 kubeadm.go:309] 
	I0617 12:09:55.690059  165698 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0617 12:09:55.690140  165698 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0617 12:09:55.690159  165698 kubeadm.go:309] 
	I0617 12:09:55.690258  165698 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0617 12:09:55.690343  165698 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0617 12:09:55.690434  165698 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0617 12:09:55.690530  165698 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0617 12:09:55.690546  165698 kubeadm.go:309] 
	I0617 12:09:55.691495  165698 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0617 12:09:55.691595  165698 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0617 12:09:55.691708  165698 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0617 12:09:55.691787  165698 kubeadm.go:393] duration metric: took 7m57.151326537s to StartCluster
	I0617 12:09:55.691844  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:09:55.691904  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:09:55.746514  165698 cri.go:89] found id: ""
	I0617 12:09:55.746550  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.746563  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:09:55.746572  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:09:55.746636  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:09:55.789045  165698 cri.go:89] found id: ""
	I0617 12:09:55.789083  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.789095  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:09:55.789103  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:09:55.789169  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:09:55.829492  165698 cri.go:89] found id: ""
	I0617 12:09:55.829533  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.829542  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:09:55.829547  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:09:55.829614  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:09:55.865213  165698 cri.go:89] found id: ""
	I0617 12:09:55.865246  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.865262  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:09:55.865267  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:09:55.865318  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:09:55.904067  165698 cri.go:89] found id: ""
	I0617 12:09:55.904102  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.904113  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:09:55.904122  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:09:55.904187  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:09:55.938441  165698 cri.go:89] found id: ""
	I0617 12:09:55.938471  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.938478  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:09:55.938487  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:09:55.938538  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:09:55.975669  165698 cri.go:89] found id: ""
	I0617 12:09:55.975710  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.975723  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:09:55.975731  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:09:55.975804  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:09:56.015794  165698 cri.go:89] found id: ""
	I0617 12:09:56.015826  165698 logs.go:276] 0 containers: []
	W0617 12:09:56.015837  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:09:56.015851  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:09:56.015868  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:09:56.095533  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:09:56.095557  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:09:56.095573  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:09:56.220817  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:09:56.220857  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:09:56.261470  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:09:56.261507  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:09:56.325626  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:09:56.325673  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0617 12:09:56.345438  165698 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0617 12:09:56.345491  165698 out.go:239] * 
	W0617 12:09:56.345606  165698 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0617 12:09:56.345635  165698 out.go:239] * 
	W0617 12:09:56.346583  165698 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 12:09:56.349928  165698 out.go:177] 
	W0617 12:09:56.351067  165698 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0617 12:09:56.351127  165698 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0617 12:09:56.351157  165698 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0617 12:09:56.352487  165698 out.go:177] 
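The exit message above ends with minikube's own suggestion: check 'journalctl -xeu kubelet' and retry with --extra-config=kubelet.cgroup-driver=systemd. A minimal sketch of acting on that suggestion for this profile, assuming the profile name, kvm2 driver, crio runtime and Kubernetes v1.20.0 seen elsewhere in this log; it illustrates the suggested flags rather than a verified fix:

# Illustrative only: the extra-config flag comes from the suggestion above,
# the profile settings are taken from this log.
minikube start -p old-k8s-version-003661 --driver=kvm2 --container-runtime=crio \
  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
# Inspect the kubelet on the node, as the kubeadm output recommends:
minikube ssh -p old-k8s-version-003661 -- sudo journalctl -xeu kubelet --no-pager | tail -n 50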
	
	
	==> CRI-O <==
	Jun 17 12:09:58 old-k8s-version-003661 crio[648]: time="2024-06-17 12:09:58.158802752Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718626198158776714,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b0b5fd47-010e-44ec-997f-641742dab8b6 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:09:58 old-k8s-version-003661 crio[648]: time="2024-06-17 12:09:58.159416185Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=321ab9cb-1e5f-46f2-a994-2800ef6fde96 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:09:58 old-k8s-version-003661 crio[648]: time="2024-06-17 12:09:58.159478749Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=321ab9cb-1e5f-46f2-a994-2800ef6fde96 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:09:58 old-k8s-version-003661 crio[648]: time="2024-06-17 12:09:58.159515131Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=321ab9cb-1e5f-46f2-a994-2800ef6fde96 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:09:58 old-k8s-version-003661 crio[648]: time="2024-06-17 12:09:58.188990655Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a9264c17-d594-4114-bd79-39b4576e0d45 name=/runtime.v1.RuntimeService/Version
	Jun 17 12:09:58 old-k8s-version-003661 crio[648]: time="2024-06-17 12:09:58.189130388Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a9264c17-d594-4114-bd79-39b4576e0d45 name=/runtime.v1.RuntimeService/Version
	Jun 17 12:09:58 old-k8s-version-003661 crio[648]: time="2024-06-17 12:09:58.193630465Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7e9c479c-fb89-498a-9456-ecda86c3faad name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:09:58 old-k8s-version-003661 crio[648]: time="2024-06-17 12:09:58.194648936Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718626198194568175,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7e9c479c-fb89-498a-9456-ecda86c3faad name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:09:58 old-k8s-version-003661 crio[648]: time="2024-06-17 12:09:58.195328080Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=14fa75cf-b9f4-4e31-aa85-a4465fb8ba2f name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:09:58 old-k8s-version-003661 crio[648]: time="2024-06-17 12:09:58.195480914Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=14fa75cf-b9f4-4e31-aa85-a4465fb8ba2f name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:09:58 old-k8s-version-003661 crio[648]: time="2024-06-17 12:09:58.195568015Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=14fa75cf-b9f4-4e31-aa85-a4465fb8ba2f name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:09:58 old-k8s-version-003661 crio[648]: time="2024-06-17 12:09:58.229141641Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2c9becfd-d888-4526-b8c5-dc89e946baae name=/runtime.v1.RuntimeService/Version
	Jun 17 12:09:58 old-k8s-version-003661 crio[648]: time="2024-06-17 12:09:58.229225299Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2c9becfd-d888-4526-b8c5-dc89e946baae name=/runtime.v1.RuntimeService/Version
	Jun 17 12:09:58 old-k8s-version-003661 crio[648]: time="2024-06-17 12:09:58.230965167Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=33583591-6d3c-430a-808f-860266d69d1d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:09:58 old-k8s-version-003661 crio[648]: time="2024-06-17 12:09:58.231437679Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718626198231413380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=33583591-6d3c-430a-808f-860266d69d1d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:09:58 old-k8s-version-003661 crio[648]: time="2024-06-17 12:09:58.231925497Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e1d43d98-522c-43e9-b021-e27bd147db05 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:09:58 old-k8s-version-003661 crio[648]: time="2024-06-17 12:09:58.231973946Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e1d43d98-522c-43e9-b021-e27bd147db05 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:09:58 old-k8s-version-003661 crio[648]: time="2024-06-17 12:09:58.232006406Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e1d43d98-522c-43e9-b021-e27bd147db05 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:09:58 old-k8s-version-003661 crio[648]: time="2024-06-17 12:09:58.266868737Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=70195781-4d72-4dc0-a9ed-4aee7588898d name=/runtime.v1.RuntimeService/Version
	Jun 17 12:09:58 old-k8s-version-003661 crio[648]: time="2024-06-17 12:09:58.267017779Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=70195781-4d72-4dc0-a9ed-4aee7588898d name=/runtime.v1.RuntimeService/Version
	Jun 17 12:09:58 old-k8s-version-003661 crio[648]: time="2024-06-17 12:09:58.268527679Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f6574324-7a77-426f-b625-8928cc43115a name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:09:58 old-k8s-version-003661 crio[648]: time="2024-06-17 12:09:58.268894416Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718626198268872304,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f6574324-7a77-426f-b625-8928cc43115a name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:09:58 old-k8s-version-003661 crio[648]: time="2024-06-17 12:09:58.270159218Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33d408b1-b1f3-48e7-a37a-52fe40c24457 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:09:58 old-k8s-version-003661 crio[648]: time="2024-06-17 12:09:58.270243076Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33d408b1-b1f3-48e7-a37a-52fe40c24457 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:09:58 old-k8s-version-003661 crio[648]: time="2024-06-17 12:09:58.270276834Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=33d408b1-b1f3-48e7-a37a-52fe40c24457 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jun17 12:01] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052255] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040891] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.660385] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.359181] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.617809] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.763068] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.058957] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067517] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.195874] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.192469] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.318746] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +6.241976] systemd-fstab-generator[842]: Ignoring "noauto" option for root device
	[  +0.062935] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.770270] systemd-fstab-generator[969]: Ignoring "noauto" option for root device
	[Jun17 12:02] kauditd_printk_skb: 46 callbacks suppressed
	[Jun17 12:06] systemd-fstab-generator[5023]: Ignoring "noauto" option for root device
	[Jun17 12:08] systemd-fstab-generator[5303]: Ignoring "noauto" option for root device
	[  +0.068765] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:09:58 up 8 min,  0 users,  load average: 0.03, 0.10, 0.05
	Linux old-k8s-version-003661 5.10.207 #1 SMP Tue Jun 11 00:16:05 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jun 17 12:09:55 old-k8s-version-003661 kubelet[5481]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001000c0, 0xc0009935f0)
	Jun 17 12:09:55 old-k8s-version-003661 kubelet[5481]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Jun 17 12:09:55 old-k8s-version-003661 kubelet[5481]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Jun 17 12:09:55 old-k8s-version-003661 kubelet[5481]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Jun 17 12:09:55 old-k8s-version-003661 kubelet[5481]: goroutine 151 [select]:
	Jun 17 12:09:55 old-k8s-version-003661 kubelet[5481]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00091fef0, 0x4f0ac20, 0xc000baa7d0, 0x1, 0xc0001000c0)
	Jun 17 12:09:55 old-k8s-version-003661 kubelet[5481]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Jun 17 12:09:55 old-k8s-version-003661 kubelet[5481]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0002547e0, 0xc0001000c0)
	Jun 17 12:09:55 old-k8s-version-003661 kubelet[5481]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jun 17 12:09:55 old-k8s-version-003661 kubelet[5481]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jun 17 12:09:55 old-k8s-version-003661 kubelet[5481]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jun 17 12:09:55 old-k8s-version-003661 kubelet[5481]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc00099aed0, 0xc000a3d240)
	Jun 17 12:09:55 old-k8s-version-003661 kubelet[5481]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jun 17 12:09:55 old-k8s-version-003661 kubelet[5481]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jun 17 12:09:55 old-k8s-version-003661 kubelet[5481]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jun 17 12:09:55 old-k8s-version-003661 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 17 12:09:55 old-k8s-version-003661 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 17 12:09:56 old-k8s-version-003661 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jun 17 12:09:56 old-k8s-version-003661 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 17 12:09:56 old-k8s-version-003661 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 17 12:09:56 old-k8s-version-003661 kubelet[5546]: I0617 12:09:56.376728    5546 server.go:416] Version: v1.20.0
	Jun 17 12:09:56 old-k8s-version-003661 kubelet[5546]: I0617 12:09:56.377349    5546 server.go:837] Client rotation is on, will bootstrap in background
	Jun 17 12:09:56 old-k8s-version-003661 kubelet[5546]: I0617 12:09:56.379798    5546 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 17 12:09:56 old-k8s-version-003661 kubelet[5546]: W0617 12:09:56.383948    5546 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jun 17 12:09:56 old-k8s-version-003661 kubelet[5546]: I0617 12:09:56.386897    5546 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-003661 -n old-k8s-version-003661
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-003661 -n old-k8s-version-003661: exit status 2 (233.251805ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-003661" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (701.90s)
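Note: the kubeadm output captured above already names the relevant checks for a kubelet that never becomes healthy (systemctl/journalctl for the service, crictl against the CRI-O socket, and the cgroup-driver suggestion). A minimal manual sketch of those steps against this profile, not part of the test run and assuming shell access to the VM via "minikube ssh":

	# Inspect the kubelet service and its journal, as suggested in the kubeadm output
	minikube -p old-k8s-version-003661 ssh "sudo systemctl status kubelet"
	minikube -p old-k8s-version-003661 ssh "sudo journalctl -xeu kubelet"
	# List Kubernetes containers through the CRI-O socket, per the crictl hint above
	minikube -p old-k8s-version-003661 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Retry the start with the cgroup driver pinned, as the suggestion line proposes
	minikube start -p old-k8s-version-003661 --extra-config=kubelet.cgroup-driver=systemd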

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-991309 -n default-k8s-diff-port-991309
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-991309 -n default-k8s-diff-port-991309: exit status 3 (3.167713102s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0617 11:59:28.167820  165976 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.125:22: connect: no route to host
	E0617 11:59:28.167846  165976 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.125:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-991309 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-991309 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153328014s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.125:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-991309 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-991309 -n default-k8s-diff-port-991309
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-991309 -n default-k8s-diff-port-991309: exit status 3 (3.062498268s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0617 11:59:37.383943  166057 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.125:22: connect: no route to host
	E0617 11:59:37.383967  166057 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.125:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-991309" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
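Note: both status probes in this block fail with "dial tcp 192.168.50.125:22: connect: no route to host", i.e. the VM is unreachable over SSH rather than cleanly stopped, which is also why the addon enable exits with MK_ADDON_ENABLE_PAUSED. A rough sketch of how that could be confirmed by hand on the KVM host (the virsh call is an assumption about the environment, not something the test ran):

	# Re-run the probe from the test; "Error" rather than "Stopped" points at an unreachable host
	minikube status --format={{.Host}} -p default-k8s-diff-port-991309
	# Check whether the libvirt domain for the profile is actually running (assumes virsh access on the host)
	sudo virsh list --all | grep default-k8s-diff-port-991309
	# Collect full logs for a report, as the error box above suggests
	minikube logs --file=logs.txt -p default-k8s-diff-port-991309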

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0617 12:06:51.170277  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-136195 -n embed-certs-136195
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-06-17 12:15:09.122968341 +0000 UTC m=+5454.966526478
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-136195 -n embed-certs-136195
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-136195 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-136195 logs -n 25: (2.028950934s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-514753                              | cert-expiration-514753       | jenkins | v1.33.1 | 17 Jun 24 11:52 UTC | 17 Jun 24 11:52 UTC |
	| start   | -p embed-certs-136195                                  | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:52 UTC | 17 Jun 24 11:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-152830             | no-preload-152830            | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC | 17 Jun 24 11:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-152830                                   | no-preload-152830            | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-136195            | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC | 17 Jun 24 11:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-136195                                  | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC | 17 Jun 24 11:55 UTC |
	| start   | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:55 UTC |
	| delete  | -p                                                     | disable-driver-mounts-960277 | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:55 UTC |
	|         | disable-driver-mounts-960277                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:56 UTC |
	|         | default-k8s-diff-port-991309                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-152830                  | no-preload-152830            | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-152830                                   | no-preload-152830            | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC | 17 Jun 24 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-136195                 | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-003661        | old-k8s-version-003661       | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-136195                                  | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC | 17 Jun 24 12:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-991309  | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:57 UTC | 17 Jun 24 11:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:57 UTC |                     |
	|         | default-k8s-diff-port-991309                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-003661                              | old-k8s-version-003661       | jenkins | v1.33.1 | 17 Jun 24 11:58 UTC | 17 Jun 24 11:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-003661             | old-k8s-version-003661       | jenkins | v1.33.1 | 17 Jun 24 11:58 UTC | 17 Jun 24 11:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-003661                              | old-k8s-version-003661       | jenkins | v1.33.1 | 17 Jun 24 11:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-991309       | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:59 UTC | 17 Jun 24 12:06 UTC |
	|         | default-k8s-diff-port-991309                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/17 11:59:37
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0617 11:59:37.428028  166103 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:59:37.428266  166103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:59:37.428274  166103 out.go:304] Setting ErrFile to fd 2...
	I0617 11:59:37.428279  166103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:59:37.428472  166103 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:59:37.429026  166103 out.go:298] Setting JSON to false
	I0617 11:59:37.429968  166103 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6124,"bootTime":1718619453,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0617 11:59:37.430026  166103 start.go:139] virtualization: kvm guest
	I0617 11:59:37.432171  166103 out.go:177] * [default-k8s-diff-port-991309] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0617 11:59:37.433521  166103 out.go:177]   - MINIKUBE_LOCATION=19084
	I0617 11:59:37.433548  166103 notify.go:220] Checking for updates...
	I0617 11:59:37.434850  166103 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 11:59:37.436099  166103 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 11:59:37.437362  166103 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 11:59:37.438535  166103 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0617 11:59:37.439644  166103 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 11:59:37.441113  166103 config.go:182] Loaded profile config "default-k8s-diff-port-991309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:59:37.441563  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:59:37.441645  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:59:37.456875  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45565
	I0617 11:59:37.457306  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:59:37.457839  166103 main.go:141] libmachine: Using API Version  1
	I0617 11:59:37.457861  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:59:37.458188  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:59:37.458381  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 11:59:37.458626  166103 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 11:59:37.458927  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:59:37.458971  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:59:37.474024  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45165
	I0617 11:59:37.474411  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:59:37.474873  166103 main.go:141] libmachine: Using API Version  1
	I0617 11:59:37.474899  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:59:37.475199  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:59:37.475383  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 11:59:37.507955  166103 out.go:177] * Using the kvm2 driver based on existing profile
	I0617 11:59:37.509134  166103 start.go:297] selected driver: kvm2
	I0617 11:59:37.509148  166103 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-991309 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-991309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:59:37.509249  166103 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 11:59:37.509927  166103 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:59:37.510004  166103 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19084-112967/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0617 11:59:37.525340  166103 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0617 11:59:37.525701  166103 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 11:59:37.525761  166103 cni.go:84] Creating CNI manager for ""
	I0617 11:59:37.525779  166103 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 11:59:37.525812  166103 start.go:340] cluster config:
	{Name:default-k8s-diff-port-991309 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-991309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:59:37.525910  166103 iso.go:125] acquiring lock: {Name:mk4a199ad46ed9ee04de7b54caf7cc64218fe80c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:59:37.527756  166103 out.go:177] * Starting "default-k8s-diff-port-991309" primary control-plane node in "default-k8s-diff-port-991309" cluster
	I0617 11:59:36.391800  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 11:59:37.529104  166103 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 11:59:37.529159  166103 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0617 11:59:37.529171  166103 cache.go:56] Caching tarball of preloaded images
	I0617 11:59:37.529246  166103 preload.go:173] Found /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0617 11:59:37.529256  166103 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0617 11:59:37.529368  166103 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/config.json ...
	I0617 11:59:37.529565  166103 start.go:360] acquireMachinesLock for default-k8s-diff-port-991309: {Name:mk519b8956d160a9d2b042f25b899a5ee0efa72e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 11:59:42.471684  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 11:59:45.543735  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 11:59:51.623725  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 11:59:54.695811  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:00.775775  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:03.847736  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:09.927768  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:12.999728  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:19.079809  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:22.151737  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:28.231763  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:31.303775  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:37.383783  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:40.455809  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:46.535757  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:49.607769  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:55.687772  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:58.759722  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:01:04.839736  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:01:07.911780  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:01:10.916735  165060 start.go:364] duration metric: took 4m27.471308215s to acquireMachinesLock for "embed-certs-136195"
	I0617 12:01:10.916814  165060 start.go:96] Skipping create...Using existing machine configuration
	I0617 12:01:10.916827  165060 fix.go:54] fixHost starting: 
	I0617 12:01:10.917166  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:10.917203  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:10.932217  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43235
	I0617 12:01:10.932742  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:10.933241  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:10.933261  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:10.933561  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:10.933766  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:10.933939  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetState
	I0617 12:01:10.935452  165060 fix.go:112] recreateIfNeeded on embed-certs-136195: state=Stopped err=<nil>
	I0617 12:01:10.935660  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	W0617 12:01:10.935831  165060 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 12:01:10.937510  165060 out.go:177] * Restarting existing kvm2 VM for "embed-certs-136195" ...
	I0617 12:01:10.938708  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Start
	I0617 12:01:10.938873  165060 main.go:141] libmachine: (embed-certs-136195) Ensuring networks are active...
	I0617 12:01:10.939602  165060 main.go:141] libmachine: (embed-certs-136195) Ensuring network default is active
	I0617 12:01:10.939896  165060 main.go:141] libmachine: (embed-certs-136195) Ensuring network mk-embed-certs-136195 is active
	I0617 12:01:10.940260  165060 main.go:141] libmachine: (embed-certs-136195) Getting domain xml...
	I0617 12:01:10.940881  165060 main.go:141] libmachine: (embed-certs-136195) Creating domain...
	I0617 12:01:12.136267  165060 main.go:141] libmachine: (embed-certs-136195) Waiting to get IP...
	I0617 12:01:12.137303  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:12.137692  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:12.137777  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:12.137684  166451 retry.go:31] will retry after 261.567272ms: waiting for machine to come up
	I0617 12:01:12.401390  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:12.401845  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:12.401873  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:12.401816  166451 retry.go:31] will retry after 332.256849ms: waiting for machine to come up
	I0617 12:01:12.735421  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:12.735842  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:12.735872  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:12.735783  166451 retry.go:31] will retry after 457.313241ms: waiting for machine to come up
	I0617 12:01:13.194621  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:13.195073  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:13.195091  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:13.195036  166451 retry.go:31] will retry after 539.191177ms: waiting for machine to come up
	I0617 12:01:10.914315  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 12:01:10.914353  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetMachineName
	I0617 12:01:10.914690  164809 buildroot.go:166] provisioning hostname "no-preload-152830"
	I0617 12:01:10.914716  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetMachineName
	I0617 12:01:10.914905  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:01:10.916557  164809 machine.go:97] duration metric: took 4m37.418351206s to provisionDockerMachine
	I0617 12:01:10.916625  164809 fix.go:56] duration metric: took 4m37.438694299s for fixHost
	I0617 12:01:10.916634  164809 start.go:83] releasing machines lock for "no-preload-152830", held for 4m37.438726092s
	W0617 12:01:10.916653  164809 start.go:713] error starting host: provision: host is not running
	W0617 12:01:10.916750  164809 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0617 12:01:10.916763  164809 start.go:728] Will try again in 5 seconds ...
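
The two warnings just above show the outer retry layer: when the whole fixHost/provision pass fails ("provision: host is not running"), start.go logs the failure and schedules another attempt a few seconds later instead of aborting. A minimal Go sketch of that shape, assuming a hypothetical startHost callback (this is not minikube's actual code):

    package main

    import (
        "fmt"
        "time"
    )

    // startWithRetry mirrors the "StartHost failed, but will try again" /
    // "Will try again in 5 seconds" pair above: try, wait, try again.
    func startWithRetry(startHost func() error, attempts int) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = startHost(); err == nil {
                return nil
            }
            fmt.Printf("! StartHost failed, but will try again: %v\n", err)
            time.Sleep(5 * time.Second)
        }
        return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
    }

    func main() {
        calls := 0
        _ = startWithRetry(func() error {
            calls++
            if calls < 2 {
                return fmt.Errorf("provision: host is not running")
            }
            return nil
        }, 3)
    }
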
	I0617 12:01:13.735708  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:13.736155  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:13.736184  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:13.736096  166451 retry.go:31] will retry after 754.965394ms: waiting for machine to come up
	I0617 12:01:14.493211  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:14.493598  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:14.493628  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:14.493544  166451 retry.go:31] will retry after 786.125188ms: waiting for machine to come up
	I0617 12:01:15.281505  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:15.281975  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:15.282008  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:15.281939  166451 retry.go:31] will retry after 1.091514617s: waiting for machine to come up
	I0617 12:01:16.375391  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:16.375904  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:16.375935  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:16.375820  166451 retry.go:31] will retry after 1.34601641s: waiting for machine to come up
	I0617 12:01:17.724108  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:17.724453  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:17.724477  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:17.724418  166451 retry.go:31] will retry after 1.337616605s: waiting for machine to come up
	I0617 12:01:15.918256  164809 start.go:360] acquireMachinesLock for no-preload-152830: {Name:mk519b8956d160a9d2b042f25b899a5ee0efa72e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 12:01:19.063677  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:19.064210  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:19.064243  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:19.064144  166451 retry.go:31] will retry after 1.914267639s: waiting for machine to come up
	I0617 12:01:20.979644  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:20.980124  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:20.980150  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:20.980072  166451 retry.go:31] will retry after 2.343856865s: waiting for machine to come up
	I0617 12:01:23.326506  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:23.326878  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:23.326922  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:23.326861  166451 retry.go:31] will retry after 2.450231017s: waiting for machine to come up
	I0617 12:01:25.780501  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:25.780886  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:25.780913  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:25.780825  166451 retry.go:31] will retry after 3.591107926s: waiting for machine to come up
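
The retry.go:31 lines above are an inner polling loop: ask libvirt for the domain's current DHCP lease, and if no IP is known yet, sleep a growing, jittered interval and look again. A rough sketch of that pattern; waitForIP and lookupIP are illustrative names, not minikube's API:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookupIP until it reports an address or the timeout
    // expires, sleeping a jittered, slowly growing delay between attempts,
    // matching the "will retry after ...: waiting for machine to come up"
    // messages in the log.
    func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        base := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil && ip != "" {
                return ip, nil
            }
            wait := base + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            base = base * 3 / 2 // grow the delay between attempts
        }
        return "", errors.New("timed out waiting for the machine to get an IP")
    }

    func main() {
        n := 0
        ip, err := waitForIP(func() (string, error) {
            n++
            if n < 3 {
                return "", errors.New("no lease yet")
            }
            return "192.168.72.199", nil
        }, 30*time.Second)
        fmt.Println(ip, err)
    }
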
	I0617 12:01:30.728529  165698 start.go:364] duration metric: took 3m12.647041864s to acquireMachinesLock for "old-k8s-version-003661"
	I0617 12:01:30.728602  165698 start.go:96] Skipping create...Using existing machine configuration
	I0617 12:01:30.728613  165698 fix.go:54] fixHost starting: 
	I0617 12:01:30.729036  165698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:30.729090  165698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:30.746528  165698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35355
	I0617 12:01:30.746982  165698 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:30.747493  165698 main.go:141] libmachine: Using API Version  1
	I0617 12:01:30.747516  165698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:30.747847  165698 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:30.748060  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:30.748186  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetState
	I0617 12:01:30.750035  165698 fix.go:112] recreateIfNeeded on old-k8s-version-003661: state=Stopped err=<nil>
	I0617 12:01:30.750072  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	W0617 12:01:30.750206  165698 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 12:01:30.752196  165698 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-003661" ...
	I0617 12:01:29.375875  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.376372  165060 main.go:141] libmachine: (embed-certs-136195) Found IP for machine: 192.168.72.199
	I0617 12:01:29.376407  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has current primary IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.376430  165060 main.go:141] libmachine: (embed-certs-136195) Reserving static IP address...
	I0617 12:01:29.376754  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "embed-certs-136195", mac: "52:54:00:f2:27:84", ip: "192.168.72.199"} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.376788  165060 main.go:141] libmachine: (embed-certs-136195) Reserved static IP address: 192.168.72.199
	I0617 12:01:29.376800  165060 main.go:141] libmachine: (embed-certs-136195) DBG | skip adding static IP to network mk-embed-certs-136195 - found existing host DHCP lease matching {name: "embed-certs-136195", mac: "52:54:00:f2:27:84", ip: "192.168.72.199"}
	I0617 12:01:29.376811  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Getting to WaitForSSH function...
	I0617 12:01:29.376820  165060 main.go:141] libmachine: (embed-certs-136195) Waiting for SSH to be available...
	I0617 12:01:29.378811  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.379121  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.379151  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.379289  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Using SSH client type: external
	I0617 12:01:29.379321  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa (-rw-------)
	I0617 12:01:29.379354  165060 main.go:141] libmachine: (embed-certs-136195) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.199 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 12:01:29.379368  165060 main.go:141] libmachine: (embed-certs-136195) DBG | About to run SSH command:
	I0617 12:01:29.379381  165060 main.go:141] libmachine: (embed-certs-136195) DBG | exit 0
	I0617 12:01:29.503819  165060 main.go:141] libmachine: (embed-certs-136195) DBG | SSH cmd err, output: <nil>: 
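
The WaitForSSH step above shells out to the system ssh client with the printed option list and keeps running the trivial command `exit 0` until it returns status 0. A hedged sketch of the same idea with placeholder paths (not the actual libmachine implementation):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // probeSSH runs `exit 0` on the guest via the external ssh client until it
    // succeeds or the attempt budget runs out. The option set echoes the
    // command line printed in the log above.
    func probeSSH(addr, keyPath string, attempts int) error {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@" + addr,
            "exit 0",
        }
        for i := 0; i < attempts; i++ {
            if err := exec.Command("ssh", args...).Run(); err == nil {
                return nil // sshd answered and the trivial command exited 0
            }
            time.Sleep(3 * time.Second)
        }
        return fmt.Errorf("ssh to %s did not become available", addr)
    }

    func main() {
        // Placeholder key path; the real one lives under the machine's profile dir.
        _ = probeSSH("192.168.72.199", "/path/to/machines/embed-certs-136195/id_rsa", 10)
    }
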
	I0617 12:01:29.504207  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetConfigRaw
	I0617 12:01:29.504827  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetIP
	I0617 12:01:29.507277  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.507601  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.507635  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.507878  165060 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/config.json ...
	I0617 12:01:29.508102  165060 machine.go:94] provisionDockerMachine start ...
	I0617 12:01:29.508125  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:29.508333  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:29.510390  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.510636  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.510656  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.510761  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:29.510924  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.511082  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.511242  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:29.511404  165060 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:29.511665  165060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0617 12:01:29.511680  165060 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 12:01:29.611728  165060 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0617 12:01:29.611759  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetMachineName
	I0617 12:01:29.611996  165060 buildroot.go:166] provisioning hostname "embed-certs-136195"
	I0617 12:01:29.612025  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetMachineName
	I0617 12:01:29.612194  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:29.614719  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.615085  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.615110  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.615251  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:29.615425  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.615565  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.615685  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:29.615881  165060 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:29.616066  165060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0617 12:01:29.616084  165060 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-136195 && echo "embed-certs-136195" | sudo tee /etc/hostname
	I0617 12:01:29.729321  165060 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-136195
	
	I0617 12:01:29.729347  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:29.731968  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.732314  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.732352  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.732582  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:29.732820  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.733001  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.733157  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:29.733312  165060 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:29.733471  165060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0617 12:01:29.733487  165060 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-136195' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-136195/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-136195' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 12:01:29.840083  165060 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 12:01:29.840110  165060 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 12:01:29.840145  165060 buildroot.go:174] setting up certificates
	I0617 12:01:29.840180  165060 provision.go:84] configureAuth start
	I0617 12:01:29.840199  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetMachineName
	I0617 12:01:29.840488  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetIP
	I0617 12:01:29.843096  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.843446  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.843487  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.843687  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:29.845627  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.845914  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.845940  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.846021  165060 provision.go:143] copyHostCerts
	I0617 12:01:29.846096  165060 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 12:01:29.846106  165060 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 12:01:29.846171  165060 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 12:01:29.846267  165060 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 12:01:29.846275  165060 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 12:01:29.846298  165060 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 12:01:29.846359  165060 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 12:01:29.846366  165060 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 12:01:29.846387  165060 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
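
copyHostCerts above follows a plain replace-then-copy pattern: if ca.pem, cert.pem or key.pem already exists under .minikube, remove it, then copy the version from the certs directory into place. A small sketch of that pattern with illustrative names (the real code goes through exec_runner):

    package main

    import (
        "fmt"
        "io"
        "os"
        "path/filepath"
    )

    // replaceFile removes dst if it already exists and then copies src into
    // place, which is the "found ..., removing ..." / "cp: ..." sequence above.
    func replaceFile(src, dst string) error {
        if _, err := os.Stat(dst); err == nil {
            if err := os.Remove(dst); err != nil {
                return err
            }
        }
        if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
            return err
        }
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o644)
        if err != nil {
            return err
        }
        defer out.Close()
        _, err = io.Copy(out, in)
        return err
    }

    func main() {
        // Placeholder paths mirroring the ca.pem copy above.
        if err := replaceFile("certs/ca.pem", ".minikube/ca.pem"); err != nil {
            fmt.Println("copy failed:", err)
        }
    }
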
	I0617 12:01:29.846456  165060 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.embed-certs-136195 san=[127.0.0.1 192.168.72.199 embed-certs-136195 localhost minikube]
	I0617 12:01:30.076596  165060 provision.go:177] copyRemoteCerts
	I0617 12:01:30.076657  165060 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 12:01:30.076686  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.079269  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.079565  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.079588  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.079785  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.080016  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.080189  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.080316  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:30.161615  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 12:01:30.188790  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0617 12:01:30.215171  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0617 12:01:30.241310  165060 provision.go:87] duration metric: took 401.115469ms to configureAuth
	I0617 12:01:30.241332  165060 buildroot.go:189] setting minikube options for container-runtime
	I0617 12:01:30.241529  165060 config.go:182] Loaded profile config "embed-certs-136195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:01:30.241602  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.244123  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.244427  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.244459  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.244584  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.244793  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.244999  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.245174  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.245340  165060 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:30.245497  165060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0617 12:01:30.245512  165060 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 12:01:30.498156  165060 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 12:01:30.498189  165060 machine.go:97] duration metric: took 990.071076ms to provisionDockerMachine
	I0617 12:01:30.498201  165060 start.go:293] postStartSetup for "embed-certs-136195" (driver="kvm2")
	I0617 12:01:30.498214  165060 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 12:01:30.498238  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:30.498580  165060 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 12:01:30.498605  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.501527  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.501912  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.501941  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.502054  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.502257  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.502423  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.502578  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:30.583151  165060 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 12:01:30.587698  165060 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 12:01:30.587722  165060 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 12:01:30.587819  165060 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 12:01:30.587940  165060 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 12:01:30.588078  165060 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 12:01:30.598234  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:01:30.622580  165060 start.go:296] duration metric: took 124.363651ms for postStartSetup
	I0617 12:01:30.622621  165060 fix.go:56] duration metric: took 19.705796191s for fixHost
	I0617 12:01:30.622645  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.625226  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.625637  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.625684  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.625821  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.626040  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.626229  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.626418  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.626613  165060 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:30.626839  165060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0617 12:01:30.626862  165060 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0617 12:01:30.728365  165060 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718625690.704643527
	
	I0617 12:01:30.728389  165060 fix.go:216] guest clock: 1718625690.704643527
	I0617 12:01:30.728396  165060 fix.go:229] Guest: 2024-06-17 12:01:30.704643527 +0000 UTC Remote: 2024-06-17 12:01:30.622625631 +0000 UTC m=+287.310804086 (delta=82.017896ms)
	I0617 12:01:30.728416  165060 fix.go:200] guest clock delta is within tolerance: 82.017896ms
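
The fix.go lines above read the guest clock over SSH with `date +%s.%N`, compare it with the host clock, and accept the machine when the difference stays inside a tolerance. A minimal sketch of that comparison, assuming the guest prints a full nine-digit nanosecond field:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses `date +%s.%N` output from the guest and returns how far
    // it is ahead of (positive) or behind (negative) the supplied host time.
    func clockDelta(guestOutput string, hostNow time.Time) (time.Duration, error) {
        parts := strings.SplitN(strings.TrimSpace(guestOutput), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, err
        }
        var nsec int64
        if len(parts) == 2 { // assumes a full 9-digit nanosecond field
            nsec, _ = strconv.ParseInt(parts[1], 10, 64)
        }
        return time.Unix(sec, nsec).Sub(hostNow), nil
    }

    func main() {
        // Values taken from the log above: guest 1718625690.704643527 against
        // the host's remote timestamp 12:01:30.622625631 -> delta of roughly 82ms.
        host := time.Unix(1718625690, 622625631)
        delta, _ := clockDelta("1718625690.704643527", host)
        const tolerance = time.Second
        fmt.Printf("delta=%v, within tolerance: %v\n", delta, delta > -tolerance && delta < tolerance)
    }
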
	I0617 12:01:30.728421  165060 start.go:83] releasing machines lock for "embed-certs-136195", held for 19.811634749s
	I0617 12:01:30.728445  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:30.728763  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetIP
	I0617 12:01:30.731414  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.731784  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.731816  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.731937  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:30.732504  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:30.732704  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:30.732761  165060 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 12:01:30.732826  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.732964  165060 ssh_runner.go:195] Run: cat /version.json
	I0617 12:01:30.732991  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.735854  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.736049  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.736278  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.736310  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.736334  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.736397  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.736579  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.736653  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.736777  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.736959  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.736972  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.737131  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:30.737188  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.737356  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:30.844295  165060 ssh_runner.go:195] Run: systemctl --version
	I0617 12:01:30.851958  165060 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 12:01:31.000226  165060 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 12:01:31.008322  165060 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 12:01:31.008397  165060 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 12:01:31.029520  165060 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 12:01:31.029547  165060 start.go:494] detecting cgroup driver to use...
	I0617 12:01:31.029617  165060 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 12:01:31.045505  165060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 12:01:31.059851  165060 docker.go:217] disabling cri-docker service (if available) ...
	I0617 12:01:31.059920  165060 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 12:01:31.075011  165060 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 12:01:31.089705  165060 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 12:01:31.204300  165060 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 12:01:31.342204  165060 docker.go:233] disabling docker service ...
	I0617 12:01:31.342290  165060 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 12:01:31.356945  165060 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 12:01:31.369786  165060 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 12:01:31.505817  165060 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 12:01:31.631347  165060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 12:01:31.646048  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 12:01:31.664854  165060 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0617 12:01:31.664923  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.677595  165060 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 12:01:31.677678  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.690164  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.701482  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.712488  165060 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 12:01:31.723994  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.736805  165060 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.755001  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.767226  165060 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 12:01:31.777894  165060 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 12:01:31.777954  165060 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 12:01:31.792644  165060 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 12:01:31.803267  165060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:01:31.920107  165060 ssh_runner.go:195] Run: sudo systemctl restart crio
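
The block above is a series of one-line shell edits pushed over SSH: point crictl at the cri-o socket, pin the pause image and cgroup manager in 02-crio.conf, load br_netfilter and enable IP forwarding, then restart cri-o. A condensed sketch of that sequence over a hypothetical command runner (configureCRIO and runner are illustrative names, not minikube's code path):

    package main

    import "fmt"

    // runner stands in for an ssh_runner-style executor: it runs one shell
    // command on the guest. The concrete type here just prints the command.
    type runner interface{ Run(cmd string) error }

    type printRunner struct{}

    func (printRunner) Run(cmd string) error { fmt.Println("Run:", cmd); return nil }

    // configureCRIO mirrors the shape of the steps in the log above; it is a
    // sketch, not the actual minikube implementation.
    func configureCRIO(r runner, pauseImage, cgroupManager string) error {
        steps := []string{
            `sudo sh -c 'printf "runtime-endpoint: unix:///var/run/crio/crio.sock\n" > /etc/crictl.yaml'`,
            fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, pauseImage),
            fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, cgroupManager),
            `sudo modprobe br_netfilter`,
            `sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
            `sudo systemctl daemon-reload`,
            `sudo systemctl restart crio`,
        }
        for _, s := range steps {
            if err := r.Run(s); err != nil {
                return fmt.Errorf("%q failed: %w", s, err)
            }
        }
        return nil
    }

    func main() {
        _ = configureCRIO(printRunner{}, "registry.k8s.io/pause:3.9", "cgroupfs")
    }
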
	I0617 12:01:32.067833  165060 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 12:01:32.067904  165060 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 12:01:32.072818  165060 start.go:562] Will wait 60s for crictl version
	I0617 12:01:32.072881  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:01:32.076782  165060 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 12:01:32.116635  165060 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 12:01:32.116709  165060 ssh_runner.go:195] Run: crio --version
	I0617 12:01:32.148094  165060 ssh_runner.go:195] Run: crio --version
	I0617 12:01:32.176924  165060 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0617 12:01:30.753437  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .Start
	I0617 12:01:30.753608  165698 main.go:141] libmachine: (old-k8s-version-003661) Ensuring networks are active...
	I0617 12:01:30.754272  165698 main.go:141] libmachine: (old-k8s-version-003661) Ensuring network default is active
	I0617 12:01:30.754600  165698 main.go:141] libmachine: (old-k8s-version-003661) Ensuring network mk-old-k8s-version-003661 is active
	I0617 12:01:30.754967  165698 main.go:141] libmachine: (old-k8s-version-003661) Getting domain xml...
	I0617 12:01:30.755739  165698 main.go:141] libmachine: (old-k8s-version-003661) Creating domain...
	I0617 12:01:32.029080  165698 main.go:141] libmachine: (old-k8s-version-003661) Waiting to get IP...
	I0617 12:01:32.029902  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:32.030401  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:32.030477  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:32.030384  166594 retry.go:31] will retry after 191.846663ms: waiting for machine to come up
	I0617 12:01:32.223912  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:32.224300  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:32.224328  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:32.224276  166594 retry.go:31] will retry after 341.806498ms: waiting for machine to come up
	I0617 12:01:32.568066  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:32.568648  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:32.568682  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:32.568575  166594 retry.go:31] will retry after 359.779948ms: waiting for machine to come up
	I0617 12:01:32.930210  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:32.930652  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:32.930675  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:32.930604  166594 retry.go:31] will retry after 548.549499ms: waiting for machine to come up
	I0617 12:01:32.178076  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetIP
	I0617 12:01:32.181127  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:32.181524  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:32.181553  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:32.181778  165060 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0617 12:01:32.186998  165060 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:01:32.203033  165060 kubeadm.go:877] updating cluster {Name:embed-certs-136195 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:embed-certs-136195 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.199 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 12:01:32.203142  165060 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 12:01:32.203183  165060 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:01:32.245712  165060 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0617 12:01:32.245796  165060 ssh_runner.go:195] Run: which lz4
	I0617 12:01:32.250113  165060 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0617 12:01:32.254486  165060 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0617 12:01:32.254511  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0617 12:01:33.480493  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:33.480965  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:33.481004  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:33.480931  166594 retry.go:31] will retry after 636.044066ms: waiting for machine to come up
	I0617 12:01:34.118880  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:34.119361  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:34.119394  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:34.119299  166594 retry.go:31] will retry after 637.085777ms: waiting for machine to come up
	I0617 12:01:34.757614  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:34.758097  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:34.758126  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:34.758051  166594 retry.go:31] will retry after 921.652093ms: waiting for machine to come up
	I0617 12:01:35.681846  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:35.682324  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:35.682351  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:35.682269  166594 retry.go:31] will retry after 1.1106801s: waiting for machine to come up
	I0617 12:01:36.794411  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:36.794845  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:36.794869  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:36.794793  166594 retry.go:31] will retry after 1.323395845s: waiting for machine to come up
	I0617 12:01:33.776867  165060 crio.go:462] duration metric: took 1.526763522s to copy over tarball
	I0617 12:01:33.776955  165060 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0617 12:01:35.994216  165060 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.217222149s)
	I0617 12:01:35.994246  165060 crio.go:469] duration metric: took 2.217348025s to extract the tarball
	I0617 12:01:35.994255  165060 ssh_runner.go:146] rm: /preloaded.tar.lz4
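
The preload step above follows a simple flow: if /preloaded.tar.lz4 is not already on the guest, upload the cached tarball, unpack it into /var with tar -I lz4, then delete it. A hedged sketch of that flow; preloadImages and its callbacks are illustrative, and the real code streams the file over SSH:

    package main

    import "fmt"

    // preloadImages sketches the sequence in the log above: check for the
    // tarball, upload it if missing, extract it into /var, and clean up.
    func preloadImages(run func(cmd string) error, upload func(local, remote string) error, localTarball string) error {
        const remote = "/preloaded.tar.lz4"
        if err := run(`stat -c "%s %y" ` + remote); err != nil {
            // Not present on the guest yet; push the cached tarball up.
            if err := upload(localTarball, remote); err != nil {
                return err
            }
        }
        if err := run("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf " + remote); err != nil {
            return err
        }
        return run("sudo rm -f " + remote)
    }

    func main() {
        run := func(cmd string) error { fmt.Println("Run:", cmd); return nil }
        upload := func(local, remote string) error { fmt.Println("scp", local, "->", remote); return nil }
        _ = preloadImages(run, upload, "preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4")
    }
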
	I0617 12:01:36.034978  165060 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:01:36.087255  165060 crio.go:514] all images are preloaded for cri-o runtime.
	I0617 12:01:36.087281  165060 cache_images.go:84] Images are preloaded, skipping loading
	I0617 12:01:36.087291  165060 kubeadm.go:928] updating node { 192.168.72.199 8443 v1.30.1 crio true true} ...
	I0617 12:01:36.087447  165060 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-136195 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.199
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:embed-certs-136195 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 12:01:36.087551  165060 ssh_runner.go:195] Run: crio config
	I0617 12:01:36.130409  165060 cni.go:84] Creating CNI manager for ""
	I0617 12:01:36.130433  165060 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:01:36.130449  165060 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 12:01:36.130479  165060 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.199 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-136195 NodeName:embed-certs-136195 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.199"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.199 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0617 12:01:36.130633  165060 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.199
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-136195"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.199
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.199"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0617 12:01:36.130724  165060 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0617 12:01:36.141027  165060 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 12:01:36.141110  165060 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 12:01:36.150748  165060 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0617 12:01:36.167282  165060 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 12:01:36.183594  165060 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
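The file transferred above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) written to /var/tmp/minikube/kubeadm.yaml.new. For illustration only, a minimal Go sketch that reads such a stream and lists each document; this is not minikube code and it assumes gopkg.in/yaml.v3 plus the path shown in the log:

    package main

    import (
    	"fmt"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path taken from the log above
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	dec := yaml.NewDecoder(f)
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err != nil {
    			break // io.EOF once all four documents are read
    		}
    		fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
    	}
    }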
	I0617 12:01:36.202494  165060 ssh_runner.go:195] Run: grep 192.168.72.199	control-plane.minikube.internal$ /etc/hosts
	I0617 12:01:36.206515  165060 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.199	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:01:36.218598  165060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:01:36.344280  165060 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:01:36.361127  165060 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195 for IP: 192.168.72.199
	I0617 12:01:36.361152  165060 certs.go:194] generating shared ca certs ...
	I0617 12:01:36.361172  165060 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:01:36.361370  165060 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 12:01:36.361425  165060 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 12:01:36.361438  165060 certs.go:256] generating profile certs ...
	I0617 12:01:36.361557  165060 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/client.key
	I0617 12:01:36.361648  165060 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/apiserver.key.f7068429
	I0617 12:01:36.361696  165060 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/proxy-client.key
	I0617 12:01:36.361863  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 12:01:36.361913  165060 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 12:01:36.361925  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 12:01:36.361951  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 12:01:36.361984  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 12:01:36.362005  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 12:01:36.362041  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:01:36.362770  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 12:01:36.397257  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 12:01:36.422523  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 12:01:36.451342  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 12:01:36.485234  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0617 12:01:36.514351  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 12:01:36.544125  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 12:01:36.567574  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0617 12:01:36.590417  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 12:01:36.613174  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 12:01:36.636187  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 12:01:36.659365  165060 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 12:01:36.675981  165060 ssh_runner.go:195] Run: openssl version
	I0617 12:01:36.681694  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 12:01:36.692324  165060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 12:01:36.696871  165060 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 12:01:36.696938  165060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 12:01:36.702794  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
	I0617 12:01:36.713372  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 12:01:36.724054  165060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 12:01:36.728505  165060 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 12:01:36.728566  165060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 12:01:36.734082  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 12:01:36.744542  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 12:01:36.755445  165060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:36.759880  165060 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:36.759922  165060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:36.765367  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 12:01:36.776234  165060 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 12:01:36.780822  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 12:01:36.786895  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 12:01:36.793358  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 12:01:36.800187  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 12:01:36.806591  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 12:01:36.812681  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
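The six openssl invocations above confirm that each existing control-plane certificate remains valid for at least 86400 seconds (24 h) before the certificates are reused for the restart. For illustration only (not minikube's implementation), an equivalent check in Go via os/exec, using the paths printed in the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	certs := []string{
    		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
    		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
    		"/var/lib/minikube/certs/etcd/server.crt",
    		"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
    		"/var/lib/minikube/certs/etcd/peer.crt",
    		"/var/lib/minikube/certs/front-proxy-client.crt",
    	}
    	for _, c := range certs {
    		// exit status 0 => certificate does not expire within the next 86400 s
    		if err := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400").Run(); err != nil {
    			fmt.Printf("%s: expiring within 24h or unreadable (%v)\n", c, err)
    			continue
    		}
    		fmt.Printf("%s: valid for at least 24h\n", c)
    	}
    }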
	I0617 12:01:36.818814  165060 kubeadm.go:391] StartCluster: {Name:embed-certs-136195 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-136195 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.199 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:01:36.818903  165060 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 12:01:36.818945  165060 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:01:36.861839  165060 cri.go:89] found id: ""
	I0617 12:01:36.861920  165060 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0617 12:01:36.873500  165060 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0617 12:01:36.873529  165060 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0617 12:01:36.873551  165060 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0617 12:01:36.873602  165060 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0617 12:01:36.884767  165060 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0617 12:01:36.886013  165060 kubeconfig.go:125] found "embed-certs-136195" server: "https://192.168.72.199:8443"
	I0617 12:01:36.888144  165060 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0617 12:01:36.899204  165060 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.199
	I0617 12:01:36.899248  165060 kubeadm.go:1154] stopping kube-system containers ...
	I0617 12:01:36.899263  165060 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0617 12:01:36.899325  165060 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:01:36.941699  165060 cri.go:89] found id: ""
	I0617 12:01:36.941782  165060 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0617 12:01:36.960397  165060 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:01:36.971254  165060 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:01:36.971276  165060 kubeadm.go:156] found existing configuration files:
	
	I0617 12:01:36.971333  165060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:01:36.981367  165060 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:01:36.981448  165060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:01:36.991878  165060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:01:37.001741  165060 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:01:37.001816  165060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:01:37.012170  165060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:01:37.021914  165060 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:01:37.021979  165060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:01:37.031866  165060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:01:37.041657  165060 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:01:37.041706  165060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 12:01:37.051440  165060 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:01:37.062543  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:37.175190  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:37.872053  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:38.085732  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:38.146895  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:38.208633  165060 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:01:38.208898  165060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:01:38.119805  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:38.297858  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:38.297905  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:38.120293  166594 retry.go:31] will retry after 1.769592858s: waiting for machine to come up
	I0617 12:01:39.892495  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:39.893035  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:39.893065  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:39.892948  166594 retry.go:31] will retry after 1.954570801s: waiting for machine to come up
	I0617 12:01:41.849587  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:41.850111  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:41.850140  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:41.850067  166594 retry.go:31] will retry after 3.44879626s: waiting for machine to come up
	I0617 12:01:38.708936  165060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:01:39.209014  165060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:01:39.709765  165060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:01:39.728309  165060 api_server.go:72] duration metric: took 1.519672652s to wait for apiserver process to appear ...
	I0617 12:01:39.728342  165060 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:01:39.728369  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:01:42.756054  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:01:42.756089  165060 api_server.go:103] status: https://192.168.72.199:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:01:42.756105  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:01:42.797646  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:01:42.797689  165060 api_server.go:103] status: https://192.168.72.199:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:01:43.229201  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:01:43.233440  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:01:43.233467  165060 api_server.go:103] status: https://192.168.72.199:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:01:43.728490  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:01:43.741000  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:01:43.741037  165060 api_server.go:103] status: https://192.168.72.199:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:01:44.228634  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:01:44.232839  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 200:
	ok
	I0617 12:01:44.238582  165060 api_server.go:141] control plane version: v1.30.1
	I0617 12:01:44.238606  165060 api_server.go:131] duration metric: took 4.510256755s to wait for apiserver health ...
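The polling above tolerates 403 (anonymous user) and 500 (post-start hooks such as rbac/bootstrap-roles still pending) responses until /healthz finally returns 200 "ok". A hedged Go sketch of such a poll loop; unlike minikube's api_server.go, which authenticates with the cluster's client certificates, this sketch skips TLS verification purely to stay short:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.72.199:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("healthz: %s\n", body) // expect "ok"
    				return
    			}
    			// 403 and 500 are expected while the apiserver finishes starting
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver health")
    }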
	I0617 12:01:44.238615  165060 cni.go:84] Creating CNI manager for ""
	I0617 12:01:44.238622  165060 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:01:44.240569  165060 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0617 12:01:44.241963  165060 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0617 12:01:44.253143  165060 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0617 12:01:44.286772  165060 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:01:44.295697  165060 system_pods.go:59] 8 kube-system pods found
	I0617 12:01:44.295736  165060 system_pods.go:61] "coredns-7db6d8ff4d-9bbjg" [1ba0eee5-436e-4c83-b5ce-3c907d66b641] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0617 12:01:44.295744  165060 system_pods.go:61] "etcd-embed-certs-136195" [6dc81a80-c56b-4517-af82-c450cf9578f5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0617 12:01:44.295757  165060 system_pods.go:61] "kube-apiserver-embed-certs-136195" [bd61a715-2471-4dca-aa48-a157531ebd6b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0617 12:01:44.295763  165060 system_pods.go:61] "kube-controller-manager-embed-certs-136195" [194db4b0-75c2-4905-8e4d-813185497b51] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0617 12:01:44.295768  165060 system_pods.go:61] "kube-proxy-25d5n" [52b6d09a-899f-40c4-b1f3-7842ae755165] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0617 12:01:44.295774  165060 system_pods.go:61] "kube-scheduler-embed-certs-136195" [b04d3798-f465-4f82-9ec7-777ea62d5b94] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0617 12:01:44.295782  165060 system_pods.go:61] "metrics-server-569cc877fc-dmhfs" [31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:01:44.295788  165060 system_pods.go:61] "storage-provisioner" [4b04a38a-5006-4496-a24d-0940029193de] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0617 12:01:44.295797  165060 system_pods.go:74] duration metric: took 9.004741ms to wait for pod list to return data ...
	I0617 12:01:44.295811  165060 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:01:44.298934  165060 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:01:44.298968  165060 node_conditions.go:123] node cpu capacity is 2
	I0617 12:01:44.298989  165060 node_conditions.go:105] duration metric: took 3.172465ms to run NodePressure ...
	I0617 12:01:44.299027  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:44.565943  165060 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0617 12:01:44.570796  165060 kubeadm.go:733] kubelet initialised
	I0617 12:01:44.570825  165060 kubeadm.go:734] duration metric: took 4.851024ms waiting for restarted kubelet to initialise ...
	I0617 12:01:44.570836  165060 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:01:44.575565  165060 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:44.582180  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.582209  165060 pod_ready.go:81] duration metric: took 6.620747ms for pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:44.582221  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.582231  165060 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:44.586828  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "etcd-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.586850  165060 pod_ready.go:81] duration metric: took 4.61059ms for pod "etcd-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:44.586859  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "etcd-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.586866  165060 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:44.591162  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.591189  165060 pod_ready.go:81] duration metric: took 4.316651ms for pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:44.591197  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.591204  165060 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:44.690269  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.690301  165060 pod_ready.go:81] duration metric: took 99.088803ms for pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:44.690310  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.690317  165060 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-25d5n" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:45.089616  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "kube-proxy-25d5n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.089640  165060 pod_ready.go:81] duration metric: took 399.31511ms for pod "kube-proxy-25d5n" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:45.089649  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "kube-proxy-25d5n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.089656  165060 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:45.491031  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.491058  165060 pod_ready.go:81] duration metric: took 401.395966ms for pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:45.491068  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.491074  165060 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:45.890606  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.890633  165060 pod_ready.go:81] duration metric: took 399.550946ms for pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:45.890644  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.890650  165060 pod_ready.go:38] duration metric: took 1.319802914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
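Each pod wait above is cut short because the node itself still reports Ready "False". A hedged client-go sketch that reads that node condition directly; the kubeconfig path is hypothetical and this is not the pod_ready.go code path used in the log:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Hypothetical kubeconfig path; any kubeconfig pointing at the cluster works.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	node, err := cs.CoreV1().Nodes().Get(context.Background(), "embed-certs-136195", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, cond := range node.Status.Conditions {
    		if cond.Type == corev1.NodeReady {
    			fmt.Printf("node Ready=%s (reason %q)\n", cond.Status, cond.Reason)
    		}
    	}
    }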
	I0617 12:01:45.890669  165060 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0617 12:01:45.903900  165060 ops.go:34] apiserver oom_adj: -16
	I0617 12:01:45.903936  165060 kubeadm.go:591] duration metric: took 9.03037731s to restartPrimaryControlPlane
	I0617 12:01:45.903950  165060 kubeadm.go:393] duration metric: took 9.085142288s to StartCluster
	I0617 12:01:45.903974  165060 settings.go:142] acquiring lock: {Name:mkf6da6d5dcdf32cef469c2b75da17d11fa1e39e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:01:45.904063  165060 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 12:01:45.905636  165060 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/kubeconfig: {Name:mkf81bd1831c0194f784e5c176b265c5061bea5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:01:45.905908  165060 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.199 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 12:01:45.907817  165060 out.go:177] * Verifying Kubernetes components...
	I0617 12:01:45.905981  165060 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0617 12:01:45.907852  165060 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-136195"
	I0617 12:01:45.907880  165060 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-136195"
	W0617 12:01:45.907890  165060 addons.go:243] addon storage-provisioner should already be in state true
	I0617 12:01:45.907903  165060 addons.go:69] Setting default-storageclass=true in profile "embed-certs-136195"
	I0617 12:01:45.906085  165060 config.go:182] Loaded profile config "embed-certs-136195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:01:45.909296  165060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:01:45.907923  165060 host.go:66] Checking if "embed-certs-136195" exists ...
	I0617 12:01:45.907924  165060 addons.go:69] Setting metrics-server=true in profile "embed-certs-136195"
	I0617 12:01:45.909472  165060 addons.go:234] Setting addon metrics-server=true in "embed-certs-136195"
	W0617 12:01:45.909481  165060 addons.go:243] addon metrics-server should already be in state true
	I0617 12:01:45.909506  165060 host.go:66] Checking if "embed-certs-136195" exists ...
	I0617 12:01:45.907954  165060 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-136195"
	I0617 12:01:45.909776  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.909822  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.909836  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.909861  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.909841  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.909928  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.925250  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36545
	I0617 12:01:45.925500  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38767
	I0617 12:01:45.925708  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.925929  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.926262  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.926282  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.926420  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.926445  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.926637  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.926728  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.927142  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.927171  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.927206  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.927236  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.929198  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33863
	I0617 12:01:45.929658  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.930137  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.930159  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.930465  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.930661  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetState
	I0617 12:01:45.934085  165060 addons.go:234] Setting addon default-storageclass=true in "embed-certs-136195"
	W0617 12:01:45.934107  165060 addons.go:243] addon default-storageclass should already be in state true
	I0617 12:01:45.934139  165060 host.go:66] Checking if "embed-certs-136195" exists ...
	I0617 12:01:45.934534  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.934579  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.944472  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44051
	I0617 12:01:45.945034  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.945712  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.945741  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.946105  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.946343  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetState
	I0617 12:01:45.946673  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43225
	I0617 12:01:45.947007  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.947706  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.947725  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.948027  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.948228  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetState
	I0617 12:01:45.948359  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:45.950451  165060 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0617 12:01:45.951705  165060 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0617 12:01:45.951719  165060 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0617 12:01:45.951735  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:45.949626  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:45.951588  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43695
	I0617 12:01:45.953222  165060 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:01:45.954471  165060 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:01:45.952290  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.954494  165060 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0617 12:01:45.954514  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:45.955079  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.955098  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.955123  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.955478  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.955718  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:45.955757  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.955924  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:45.956099  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.956106  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:45.956147  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.956374  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:45.956507  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:45.957756  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.958184  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:45.958206  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.958335  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:45.958505  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:45.958680  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:45.958825  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:45.977247  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39751
	I0617 12:01:45.977663  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.978179  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.978203  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.978524  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.978711  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetState
	I0617 12:01:45.980425  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:45.980601  165060 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0617 12:01:45.980616  165060 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0617 12:01:45.980630  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:45.983633  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.984088  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:45.984105  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.984258  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:45.984377  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:45.984505  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:45.984661  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:46.093292  165060 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:01:46.112779  165060 node_ready.go:35] waiting up to 6m0s for node "embed-certs-136195" to be "Ready" ...
	I0617 12:01:46.182239  165060 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0617 12:01:46.248534  165060 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:01:46.286637  165060 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0617 12:01:46.286662  165060 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0617 12:01:46.313951  165060 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0617 12:01:46.313981  165060 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0617 12:01:46.337155  165060 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:01:46.337186  165060 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0617 12:01:46.389025  165060 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:01:46.548086  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:46.548106  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:46.548442  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:46.548461  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:46.548471  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:46.548481  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:46.548485  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:46.548727  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:46.548744  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:46.548764  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:46.554199  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:46.554218  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:46.554454  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:46.554469  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:46.554480  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:47.142290  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:47.142321  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:47.142629  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:47.142658  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:47.142671  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:47.142676  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:47.142692  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:47.142943  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:47.142971  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:47.142985  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:47.216339  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:47.216366  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:47.216658  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:47.216679  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:47.216690  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:47.216700  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:47.216709  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:47.216931  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:47.216967  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:47.216982  165060 addons.go:475] Verifying addon metrics-server=true in "embed-certs-136195"
	I0617 12:01:47.219627  165060 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
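	For reference, the addon state reported above can be spot-checked outside the test harness with standard minikube/kubectl commands. A minimal sketch, assuming the embed-certs-136195 profile/context name taken from the log and the usual kube-system placement of the metrics-server addon; only the enabled-addons list itself is confirmed by the log:
	
		# list addons for the profile named in the log
		minikube addons list -p embed-certs-136195
		# confirm the deployment the metrics-server addon installs
		kubectl --context embed-certs-136195 -n kube-system get deployment metrics-server
		# confirm the default storage class the addon set provides
		kubectl --context embed-certs-136195 get storageclass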
	I0617 12:01:45.300413  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:45.300848  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:45.300878  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:45.300794  166594 retry.go:31] will retry after 3.892148485s: waiting for machine to come up
	I0617 12:01:47.220905  165060 addons.go:510] duration metric: took 1.314925386s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0617 12:01:48.116197  165060 node_ready.go:53] node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:50.500448  166103 start.go:364] duration metric: took 2m12.970832528s to acquireMachinesLock for "default-k8s-diff-port-991309"
	I0617 12:01:50.500511  166103 start.go:96] Skipping create...Using existing machine configuration
	I0617 12:01:50.500534  166103 fix.go:54] fixHost starting: 
	I0617 12:01:50.500980  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:50.501018  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:50.517593  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43641
	I0617 12:01:50.518035  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:50.518600  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:01:50.518635  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:50.519051  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:50.519296  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:01:50.519502  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetState
	I0617 12:01:50.521095  166103 fix.go:112] recreateIfNeeded on default-k8s-diff-port-991309: state=Stopped err=<nil>
	I0617 12:01:50.521123  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	W0617 12:01:50.521307  166103 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 12:01:50.522795  166103 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-991309" ...
	I0617 12:01:49.197189  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.197671  165698 main.go:141] libmachine: (old-k8s-version-003661) Found IP for machine: 192.168.61.164
	I0617 12:01:49.197697  165698 main.go:141] libmachine: (old-k8s-version-003661) Reserving static IP address...
	I0617 12:01:49.197714  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has current primary IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.198147  165698 main.go:141] libmachine: (old-k8s-version-003661) Reserved static IP address: 192.168.61.164
	I0617 12:01:49.198175  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "old-k8s-version-003661", mac: "52:54:00:76:66:a0", ip: "192.168.61.164"} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.198185  165698 main.go:141] libmachine: (old-k8s-version-003661) Waiting for SSH to be available...
	I0617 12:01:49.198217  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | skip adding static IP to network mk-old-k8s-version-003661 - found existing host DHCP lease matching {name: "old-k8s-version-003661", mac: "52:54:00:76:66:a0", ip: "192.168.61.164"}
	I0617 12:01:49.198227  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | Getting to WaitForSSH function...
	I0617 12:01:49.200478  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.200907  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.200935  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.201088  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | Using SSH client type: external
	I0617 12:01:49.201116  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa (-rw-------)
	I0617 12:01:49.201154  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 12:01:49.201169  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | About to run SSH command:
	I0617 12:01:49.201183  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | exit 0
	I0617 12:01:49.323763  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | SSH cmd err, output: <nil>: 
	I0617 12:01:49.324127  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetConfigRaw
	I0617 12:01:49.324835  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetIP
	I0617 12:01:49.327217  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.327628  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.327660  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.327891  165698 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/config.json ...
	I0617 12:01:49.328097  165698 machine.go:94] provisionDockerMachine start ...
	I0617 12:01:49.328120  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:49.328365  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:49.330587  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.330992  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.331033  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.331160  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:49.331324  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.331490  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.331637  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:49.331824  165698 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:49.332037  165698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 12:01:49.332049  165698 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 12:01:49.432170  165698 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0617 12:01:49.432201  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetMachineName
	I0617 12:01:49.432498  165698 buildroot.go:166] provisioning hostname "old-k8s-version-003661"
	I0617 12:01:49.432524  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetMachineName
	I0617 12:01:49.432730  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:49.435845  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.436276  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.436317  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.436507  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:49.436708  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.436909  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.437074  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:49.437289  165698 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:49.437496  165698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 12:01:49.437510  165698 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-003661 && echo "old-k8s-version-003661" | sudo tee /etc/hostname
	I0617 12:01:49.550158  165698 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-003661
	
	I0617 12:01:49.550187  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:49.553141  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.553509  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.553539  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.553737  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:49.553943  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.554141  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.554298  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:49.554520  165698 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:49.554759  165698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 12:01:49.554787  165698 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-003661' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-003661/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-003661' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 12:01:49.661049  165698 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 12:01:49.661079  165698 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 12:01:49.661106  165698 buildroot.go:174] setting up certificates
	I0617 12:01:49.661115  165698 provision.go:84] configureAuth start
	I0617 12:01:49.661124  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetMachineName
	I0617 12:01:49.661452  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetIP
	I0617 12:01:49.664166  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.664561  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.664591  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.664723  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:49.666845  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.667114  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.667158  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.667287  165698 provision.go:143] copyHostCerts
	I0617 12:01:49.667377  165698 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 12:01:49.667387  165698 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 12:01:49.667440  165698 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 12:01:49.667561  165698 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 12:01:49.667571  165698 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 12:01:49.667594  165698 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 12:01:49.667649  165698 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 12:01:49.667656  165698 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 12:01:49.667674  165698 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 12:01:49.667722  165698 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-003661 san=[127.0.0.1 192.168.61.164 localhost minikube old-k8s-version-003661]
	I0617 12:01:49.853671  165698 provision.go:177] copyRemoteCerts
	I0617 12:01:49.853736  165698 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 12:01:49.853767  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:49.856171  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.856540  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.856577  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.856737  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:49.857071  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.857220  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:49.857360  165698 sshutil.go:53] new ssh client: &{IP:192.168.61.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa Username:docker}
	I0617 12:01:49.938626  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 12:01:49.964401  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0617 12:01:49.988397  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0617 12:01:50.013356  165698 provision.go:87] duration metric: took 352.227211ms to configureAuth
	I0617 12:01:50.013382  165698 buildroot.go:189] setting minikube options for container-runtime
	I0617 12:01:50.013581  165698 config.go:182] Loaded profile config "old-k8s-version-003661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0617 12:01:50.013689  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:50.016168  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.016514  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.016548  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.016657  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:50.016847  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.017025  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.017152  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:50.017300  165698 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:50.017483  165698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 12:01:50.017505  165698 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 12:01:50.280037  165698 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 12:01:50.280065  165698 machine.go:97] duration metric: took 951.954687ms to provisionDockerMachine
	I0617 12:01:50.280076  165698 start.go:293] postStartSetup for "old-k8s-version-003661" (driver="kvm2")
	I0617 12:01:50.280086  165698 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 12:01:50.280102  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:50.280467  165698 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 12:01:50.280506  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:50.283318  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.283657  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.283684  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.283874  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:50.284106  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.284279  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:50.284402  165698 sshutil.go:53] new ssh client: &{IP:192.168.61.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa Username:docker}
	I0617 12:01:50.362452  165698 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 12:01:50.366699  165698 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 12:01:50.366726  165698 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 12:01:50.366788  165698 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 12:01:50.366878  165698 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 12:01:50.367004  165698 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 12:01:50.376706  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:01:50.399521  165698 start.go:296] duration metric: took 119.43167ms for postStartSetup
	I0617 12:01:50.399558  165698 fix.go:56] duration metric: took 19.670946478s for fixHost
	I0617 12:01:50.399578  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:50.402079  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.402465  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.402500  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.402649  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:50.402835  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.402994  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.403138  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:50.403321  165698 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:50.403529  165698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 12:01:50.403541  165698 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0617 12:01:50.500267  165698 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718625710.471154465
	
	I0617 12:01:50.500294  165698 fix.go:216] guest clock: 1718625710.471154465
	I0617 12:01:50.500304  165698 fix.go:229] Guest: 2024-06-17 12:01:50.471154465 +0000 UTC Remote: 2024-06-17 12:01:50.399561534 +0000 UTC m=+212.458541959 (delta=71.592931ms)
	I0617 12:01:50.500350  165698 fix.go:200] guest clock delta is within tolerance: 71.592931ms
	I0617 12:01:50.500355  165698 start.go:83] releasing machines lock for "old-k8s-version-003661", held for 19.771784344s
	I0617 12:01:50.500380  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:50.500648  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetIP
	I0617 12:01:50.503346  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.503749  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.503776  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.503974  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:50.504536  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:50.504676  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:50.504750  165698 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 12:01:50.504801  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:50.504861  165698 ssh_runner.go:195] Run: cat /version.json
	I0617 12:01:50.504890  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:50.507577  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.507736  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.508013  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.508041  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.508176  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.508200  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.508205  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:50.508335  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:50.508419  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.508499  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.508580  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:50.508691  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:50.508717  165698 sshutil.go:53] new ssh client: &{IP:192.168.61.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa Username:docker}
	I0617 12:01:50.508830  165698 sshutil.go:53] new ssh client: &{IP:192.168.61.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa Username:docker}
	I0617 12:01:50.585030  165698 ssh_runner.go:195] Run: systemctl --version
	I0617 12:01:50.612492  165698 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 12:01:50.765842  165698 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 12:01:50.773214  165698 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 12:01:50.773288  165698 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 12:01:50.793397  165698 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 12:01:50.793424  165698 start.go:494] detecting cgroup driver to use...
	I0617 12:01:50.793499  165698 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 12:01:50.811531  165698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 12:01:50.826223  165698 docker.go:217] disabling cri-docker service (if available) ...
	I0617 12:01:50.826289  165698 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 12:01:50.840517  165698 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 12:01:50.854788  165698 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 12:01:50.970328  165698 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 12:01:51.125815  165698 docker.go:233] disabling docker service ...
	I0617 12:01:51.125893  165698 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 12:01:51.146368  165698 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 12:01:51.161459  165698 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 12:01:51.346032  165698 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 12:01:51.503395  165698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 12:01:51.521021  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 12:01:51.543851  165698 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0617 12:01:51.543905  165698 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:51.556230  165698 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 12:01:51.556309  165698 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:51.573061  165698 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:51.588663  165698 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:51.601086  165698 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 12:01:51.617347  165698 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 12:01:51.634502  165698 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 12:01:51.634635  165698 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 12:01:51.652813  165698 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 12:01:51.665145  165698 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:01:51.826713  165698 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0617 12:01:51.981094  165698 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 12:01:51.981186  165698 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 12:01:51.986026  165698 start.go:562] Will wait 60s for crictl version
	I0617 12:01:51.986091  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:51.990253  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 12:01:52.032543  165698 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 12:01:52.032631  165698 ssh_runner.go:195] Run: crio --version
	I0617 12:01:52.063904  165698 ssh_runner.go:195] Run: crio --version
	I0617 12:01:52.097158  165698 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
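	The CRI-O reconfiguration logged between 12:01:51.521 and 12:01:51.826 (crictl endpoint, pause image, cgroupfs cgroup manager, conmon cgroup) can be spot-checked on the node after the restart. A sketch, assuming shell access via minikube ssh for the old-k8s-version-003661 profile; only the three keys and the crictl endpoint shown in the log are confirmed:
	
		minikube ssh -p old-k8s-version-003661
		# values written by the sed edits above
		grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
		# crictl endpoint written to /etc/crictl.yaml
		cat /etc/crictl.yaml
		# service restarted by the harness
		sudo systemctl is-active crio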
	I0617 12:01:50.524130  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Start
	I0617 12:01:50.524321  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Ensuring networks are active...
	I0617 12:01:50.524939  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Ensuring network default is active
	I0617 12:01:50.525300  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Ensuring network mk-default-k8s-diff-port-991309 is active
	I0617 12:01:50.527342  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Getting domain xml...
	I0617 12:01:50.528126  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Creating domain...
	I0617 12:01:51.864887  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting to get IP...
	I0617 12:01:51.865835  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:51.866246  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:51.866328  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:51.866228  166802 retry.go:31] will retry after 200.163407ms: waiting for machine to come up
	I0617 12:01:52.067708  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.068164  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.068193  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:52.068119  166802 retry.go:31] will retry after 364.503903ms: waiting for machine to come up

	I0617 12:01:52.098675  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetIP
	I0617 12:01:52.102187  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:52.102572  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:52.102603  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:52.102823  165698 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0617 12:01:52.107573  165698 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:01:52.121312  165698 kubeadm.go:877] updating cluster {Name:old-k8s-version-003661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-003661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.164 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 12:01:52.121448  165698 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0617 12:01:52.121515  165698 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:01:52.181796  165698 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0617 12:01:52.181891  165698 ssh_runner.go:195] Run: which lz4
	I0617 12:01:52.186827  165698 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0617 12:01:52.191806  165698 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0617 12:01:52.191875  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0617 12:01:50.116573  165060 node_ready.go:53] node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:52.122162  165060 node_ready.go:53] node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:53.117556  165060 node_ready.go:49] node "embed-certs-136195" has status "Ready":"True"
	I0617 12:01:53.117589  165060 node_ready.go:38] duration metric: took 7.004769746s for node "embed-certs-136195" to be "Ready" ...
	I0617 12:01:53.117598  165060 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:01:53.125606  165060 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:53.131618  165060 pod_ready.go:92] pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:53.131643  165060 pod_ready.go:81] duration metric: took 6.000929ms for pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:53.131654  165060 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
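	The node_ready/pod_ready polling above corresponds to ordinary kubectl readiness checks; the commands below are a sketch of equivalent manual probes, assuming only the embed-certs-136195 context name and the 6m timeout taken from the log:
	
		# node readiness the harness waits up to 6m for
		kubectl --context embed-certs-136195 wait --for=condition=Ready node/embed-certs-136195 --timeout=6m
		# system-critical pods it then polls, e.g. coredns and etcd
		kubectl --context embed-certs-136195 -n kube-system get pods -l k8s-app=kube-dns
		kubectl --context embed-certs-136195 -n kube-system get pod etcd-embed-certs-136195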
	I0617 12:01:52.434791  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.435584  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.435740  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:52.435665  166802 retry.go:31] will retry after 486.514518ms: waiting for machine to come up
	I0617 12:01:52.924190  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.924819  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.924845  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:52.924681  166802 retry.go:31] will retry after 520.971301ms: waiting for machine to come up
	I0617 12:01:53.447437  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:53.447965  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:53.447995  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:53.447919  166802 retry.go:31] will retry after 622.761044ms: waiting for machine to come up
	I0617 12:01:54.072700  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:54.073170  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:54.073202  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:54.073112  166802 retry.go:31] will retry after 671.940079ms: waiting for machine to come up
	I0617 12:01:54.746830  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:54.747342  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:54.747372  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:54.747310  166802 retry.go:31] will retry after 734.856022ms: waiting for machine to come up
	I0617 12:01:55.484571  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:55.485127  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:55.485157  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:55.485066  166802 retry.go:31] will retry after 1.198669701s: waiting for machine to come up
	I0617 12:01:56.685201  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:56.685468  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:56.685493  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:56.685440  166802 retry.go:31] will retry after 1.562509853s: waiting for machine to come up
	I0617 12:01:54.026903  165698 crio.go:462] duration metric: took 1.840117639s to copy over tarball
	I0617 12:01:54.027003  165698 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0617 12:01:57.049870  165698 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.022814584s)
	I0617 12:01:57.049904  165698 crio.go:469] duration metric: took 3.022967677s to extract the tarball
	I0617 12:01:57.049914  165698 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0617 12:01:57.094589  165698 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:01:57.133299  165698 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0617 12:01:57.133331  165698 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0617 12:01:57.133431  165698 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:01:57.133451  165698 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0617 12:01:57.133456  165698 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0617 12:01:57.133477  165698 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0617 12:01:57.133431  165698 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0617 12:01:57.133530  165698 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0617 12:01:57.133431  165698 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 12:01:57.133626  165698 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0617 12:01:57.135979  165698 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 12:01:57.135990  165698 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0617 12:01:57.135994  165698 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0617 12:01:57.135979  165698 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0617 12:01:57.135985  165698 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:01:57.135979  165698 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0617 12:01:57.136041  165698 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0617 12:01:57.136041  165698 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0617 12:01:57.289271  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0617 12:01:57.299061  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 12:01:57.322581  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0617 12:01:57.336462  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0617 12:01:57.337619  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0617 12:01:57.350335  165698 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0617 12:01:57.350395  165698 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0617 12:01:57.350448  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.357972  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0617 12:01:57.391517  165698 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0617 12:01:57.391563  165698 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 12:01:57.391640  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.419438  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0617 12:01:57.442111  165698 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0617 12:01:57.442154  165698 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0617 12:01:57.442200  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.450145  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:01:57.485873  165698 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0617 12:01:57.485922  165698 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0617 12:01:57.485942  165698 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0617 12:01:57.485957  165698 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0617 12:01:57.485996  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.486003  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.486053  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0617 12:01:57.490584  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 12:01:57.490669  165698 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0617 12:01:57.490714  165698 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0617 12:01:57.490755  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.551564  165698 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0617 12:01:57.551597  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0617 12:01:57.551619  165698 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0617 12:01:57.551662  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.660683  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0617 12:01:57.660732  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0617 12:01:57.660799  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0617 12:01:57.660856  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0617 12:01:57.660734  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0617 12:01:57.660903  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0617 12:01:57.660930  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0617 12:01:57.753965  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0617 12:01:57.753981  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0617 12:01:57.754069  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0617 12:01:57.754069  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0617 12:01:57.754146  165698 cache_images.go:92] duration metric: took 620.797178ms to LoadCachedImages
	W0617 12:01:57.754271  165698 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0617 12:01:57.754292  165698 kubeadm.go:928] updating node { 192.168.61.164 8443 v1.20.0 crio true true} ...
	I0617 12:01:57.754415  165698 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-003661 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-003661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 12:01:57.754489  165698 ssh_runner.go:195] Run: crio config
	I0617 12:01:57.807120  165698 cni.go:84] Creating CNI manager for ""
	I0617 12:01:57.807144  165698 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:01:57.807158  165698 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 12:01:57.807182  165698 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.164 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-003661 NodeName:old-k8s-version-003661 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0617 12:01:57.807370  165698 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.164
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-003661"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.164
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.164"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0617 12:01:57.807437  165698 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0617 12:01:57.817865  165698 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 12:01:57.817940  165698 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 12:01:57.829796  165698 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0617 12:01:57.847758  165698 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 12:01:57.866182  165698 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0617 12:01:57.884500  165698 ssh_runner.go:195] Run: grep 192.168.61.164	control-plane.minikube.internal$ /etc/hosts
	I0617 12:01:57.888852  165698 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.164	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:01:57.902176  165698 ssh_runner.go:195] Run: sudo systemctl daemon-reload
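	The YAML printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) is the kubeadm.yaml that gets copied to /var/tmp/minikube/kubeadm.yaml.new before the kubeadm init phases run. Files like this are typically produced by substituting the cluster values from the "kubeadm options:" line into a template; the Go sketch below illustrates that general idea with text/template on a deliberately trimmed-down fragment. The template text and struct are illustrative assumptions, not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// clusterValues holds the handful of fields this sketch substitutes into the
// template; a real generator carries many more (CRI socket, cert SANs, ...).
type clusterValues struct {
	BindPort          int
	ClusterName       string
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

// tmpl is a deliberately trimmed-down fragment of a kubeadm ClusterConfiguration.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values mirror the log above (cluster "mk" for old-k8s-version-003661 on port 8443).
	err := t.Execute(os.Stdout, clusterValues{
		BindPort:          8443,
		ClusterName:       "mk",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.20.0",
	})
	if err != nil {
		panic(err)
	}
}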
	I0617 12:01:55.138418  165060 pod_ready.go:102] pod "etcd-embed-certs-136195" in "kube-system" namespace has status "Ready":"False"
	I0617 12:01:55.641014  165060 pod_ready.go:92] pod "etcd-embed-certs-136195" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:55.641047  165060 pod_ready.go:81] duration metric: took 2.509383461s for pod "etcd-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:55.641061  165060 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.151759  165060 pod_ready.go:92] pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:56.151788  165060 pod_ready.go:81] duration metric: took 510.718192ms for pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.152027  165060 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.157234  165060 pod_ready.go:92] pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:56.157260  165060 pod_ready.go:81] duration metric: took 5.220069ms for pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.157273  165060 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-25d5n" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.161767  165060 pod_ready.go:92] pod "kube-proxy-25d5n" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:56.161787  165060 pod_ready.go:81] duration metric: took 4.50732ms for pod "kube-proxy-25d5n" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.161796  165060 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.717763  165060 pod_ready.go:92] pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:56.717865  165060 pod_ready.go:81] duration metric: took 556.058292ms for pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.717892  165060 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace to be "Ready" ...
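	The pod_ready.go lines above poll each control-plane pod until its Ready condition turns True; the metrics-server pod that follows never reaches that state, which is what eventually fails the test. A minimal client-go sketch of that readiness check is below; the kubeconfig path is a placeholder, the pod name is taken from the log, and this is not minikube's own helper.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod has its Ready condition set to True,
// which is the state the pod_ready.go lines above are waiting for.
func isPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Kubeconfig path is a placeholder; pod name comes from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // same 6m0s budget the log shows
	for time.Now().Before(deadline) {
		ready, err := isPodReady(context.Background(), cs, "kube-system", "metrics-server-569cc877fc-dmhfs")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}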
	I0617 12:01:58.249594  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:58.250033  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:58.250069  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:58.250019  166802 retry.go:31] will retry after 2.154567648s: waiting for machine to come up
	I0617 12:02:00.406269  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:00.406668  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:02:00.406702  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:02:00.406615  166802 retry.go:31] will retry after 2.065044206s: waiting for machine to come up
	I0617 12:01:58.049361  165698 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:01:58.067893  165698 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661 for IP: 192.168.61.164
	I0617 12:01:58.067924  165698 certs.go:194] generating shared ca certs ...
	I0617 12:01:58.067945  165698 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:01:58.068162  165698 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 12:01:58.068221  165698 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 12:01:58.068236  165698 certs.go:256] generating profile certs ...
	I0617 12:01:58.068352  165698 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/client.key
	I0617 12:01:58.068438  165698 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/apiserver.key.6c1f259c
	I0617 12:01:58.068493  165698 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/proxy-client.key
	I0617 12:01:58.068647  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 12:01:58.068690  165698 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 12:01:58.068704  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 12:01:58.068743  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 12:01:58.068790  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 12:01:58.068824  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 12:01:58.068877  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:01:58.069548  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 12:01:58.109048  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 12:01:58.134825  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 12:01:58.159910  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 12:01:58.191108  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0617 12:01:58.217407  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 12:01:58.242626  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 12:01:58.267261  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0617 12:01:58.291562  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 12:01:58.321848  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 12:01:58.352361  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 12:01:58.379343  165698 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 12:01:58.399146  165698 ssh_runner.go:195] Run: openssl version
	I0617 12:01:58.405081  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 12:01:58.415471  165698 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:58.420046  165698 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:58.420099  165698 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:58.425886  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 12:01:58.436575  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 12:01:58.447166  165698 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 12:01:58.451523  165698 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 12:01:58.451582  165698 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 12:01:58.457670  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
	I0617 12:01:58.468667  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 12:01:58.479095  165698 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 12:01:58.483744  165698 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 12:01:58.483796  165698 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 12:01:58.489520  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 12:01:58.500298  165698 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 12:01:58.504859  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 12:01:58.510619  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 12:01:58.516819  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 12:01:58.522837  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 12:01:58.528736  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 12:01:58.534585  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0617 12:01:58.540464  165698 kubeadm.go:391] StartCluster: {Name:old-k8s-version-003661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-003661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.164 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:01:58.540549  165698 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 12:01:58.540624  165698 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:01:58.583638  165698 cri.go:89] found id: ""
	I0617 12:01:58.583724  165698 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0617 12:01:58.594266  165698 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0617 12:01:58.594290  165698 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0617 12:01:58.594295  165698 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0617 12:01:58.594354  165698 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0617 12:01:58.604415  165698 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0617 12:01:58.605367  165698 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-003661" does not appear in /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 12:01:58.605949  165698 kubeconfig.go:62] /home/jenkins/minikube-integration/19084-112967/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-003661" cluster setting kubeconfig missing "old-k8s-version-003661" context setting]
	I0617 12:01:58.606833  165698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/kubeconfig: {Name:mkf81bd1831c0194f784e5c176b265c5061bea5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:01:58.662621  165698 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0617 12:01:58.673813  165698 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.164
	I0617 12:01:58.673848  165698 kubeadm.go:1154] stopping kube-system containers ...
	I0617 12:01:58.673863  165698 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0617 12:01:58.673907  165698 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:01:58.712607  165698 cri.go:89] found id: ""
	I0617 12:01:58.712703  165698 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0617 12:01:58.731676  165698 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:01:58.741645  165698 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:01:58.741666  165698 kubeadm.go:156] found existing configuration files:
	
	I0617 12:01:58.741709  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:01:58.750871  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:01:58.750931  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:01:58.760545  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:01:58.769701  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:01:58.769776  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:01:58.779348  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:01:58.788507  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:01:58.788566  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:01:58.799220  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:01:58.808403  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:01:58.808468  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 12:01:58.818169  165698 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:01:58.828079  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:58.962164  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:59.679319  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:59.903216  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:00.026243  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:00.126201  165698 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:02:00.126314  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:00.627227  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:01.126836  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:01.626524  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:02.126619  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:02.626434  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:01:58.727229  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:01.226021  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:02.473035  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:02.473477  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:02:02.473505  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:02:02.473458  166802 retry.go:31] will retry after 3.132988331s: waiting for machine to come up
	I0617 12:02:05.607981  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:05.608354  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:02:05.608391  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:02:05.608310  166802 retry.go:31] will retry after 3.312972752s: waiting for machine to come up
	I0617 12:02:03.126687  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:03.626469  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:04.126347  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:04.626548  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:05.127142  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:05.626937  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:06.126479  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:06.626466  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:07.126806  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:07.626814  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
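	The block above simply re-runs "sudo pgrep -xnf kube-apiserver.*minikube.*" about twice a second until the API server process appears. A small Go sketch of that polling loop, using os/exec, follows; it illustrates the pattern only and is not the code minikube runs (minikube executes pgrep over SSH inside the VM).

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess re-runs a pgrep pattern until it exits 0 (a match was found)
// or the timeout elapses, mirroring the repeated
// "Run: sudo pgrep -xnf kube-apiserver.*minikube.*" lines above.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches the pattern.
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for process matching %q", pattern)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver process is up")
}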
	I0617 12:02:03.724216  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:06.224335  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:08.224842  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:10.217135  164809 start.go:364] duration metric: took 54.298812889s to acquireMachinesLock for "no-preload-152830"
	I0617 12:02:10.217192  164809 start.go:96] Skipping create...Using existing machine configuration
	I0617 12:02:10.217204  164809 fix.go:54] fixHost starting: 
	I0617 12:02:10.217633  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:10.217673  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:10.238636  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44149
	I0617 12:02:10.239091  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:10.239596  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:02:10.239622  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:10.239997  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:10.240214  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:10.240397  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetState
	I0617 12:02:10.242141  164809 fix.go:112] recreateIfNeeded on no-preload-152830: state=Stopped err=<nil>
	I0617 12:02:10.242162  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	W0617 12:02:10.242324  164809 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 12:02:10.244888  164809 out.go:177] * Restarting existing kvm2 VM for "no-preload-152830" ...
	I0617 12:02:08.922547  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:08.922966  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Found IP for machine: 192.168.50.125
	I0617 12:02:08.922996  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Reserving static IP address...
	I0617 12:02:08.923013  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has current primary IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:08.923437  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-991309", mac: "52:54:00:4e:6e:f5", ip: "192.168.50.125"} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:08.923484  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Reserved static IP address: 192.168.50.125
	I0617 12:02:08.923514  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | skip adding static IP to network mk-default-k8s-diff-port-991309 - found existing host DHCP lease matching {name: "default-k8s-diff-port-991309", mac: "52:54:00:4e:6e:f5", ip: "192.168.50.125"}
	I0617 12:02:08.923533  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Getting to WaitForSSH function...
	I0617 12:02:08.923550  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for SSH to be available...
	I0617 12:02:08.925667  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:08.926017  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:08.926050  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:08.926203  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Using SSH client type: external
	I0617 12:02:08.926228  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa (-rw-------)
	I0617 12:02:08.926269  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 12:02:08.926290  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | About to run SSH command:
	I0617 12:02:08.926316  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | exit 0
	I0617 12:02:09.051973  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | SSH cmd err, output: <nil>: 
	I0617 12:02:09.052329  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetConfigRaw
	I0617 12:02:09.052946  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetIP
	I0617 12:02:09.055156  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.055509  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.055541  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.055748  166103 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/config.json ...
	I0617 12:02:09.055940  166103 machine.go:94] provisionDockerMachine start ...
	I0617 12:02:09.055960  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:09.056162  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.058451  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.058826  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.058860  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.058961  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.059155  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.059289  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.059440  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.059583  166103 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:09.059796  166103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0617 12:02:09.059813  166103 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 12:02:09.163974  166103 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0617 12:02:09.164020  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetMachineName
	I0617 12:02:09.164281  166103 buildroot.go:166] provisioning hostname "default-k8s-diff-port-991309"
	I0617 12:02:09.164312  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetMachineName
	I0617 12:02:09.164499  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.167194  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.167606  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.167632  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.167856  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.168097  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.168285  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.168414  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.168571  166103 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:09.168795  166103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0617 12:02:09.168811  166103 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-991309 && echo "default-k8s-diff-port-991309" | sudo tee /etc/hostname
	I0617 12:02:09.290435  166103 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-991309
	
	I0617 12:02:09.290470  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.293538  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.293879  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.293902  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.294132  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.294361  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.294574  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.294753  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.294943  166103 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:09.295188  166103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0617 12:02:09.295209  166103 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-991309' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-991309/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-991309' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 12:02:09.408702  166103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 12:02:09.408736  166103 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 12:02:09.408777  166103 buildroot.go:174] setting up certificates
	I0617 12:02:09.408789  166103 provision.go:84] configureAuth start
	I0617 12:02:09.408798  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetMachineName
	I0617 12:02:09.409122  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetIP
	I0617 12:02:09.411936  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.412304  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.412335  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.412522  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.414598  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.414914  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.414942  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.415054  166103 provision.go:143] copyHostCerts
	I0617 12:02:09.415121  166103 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 12:02:09.415132  166103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 12:02:09.415182  166103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 12:02:09.415264  166103 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 12:02:09.415271  166103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 12:02:09.415290  166103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 12:02:09.415344  166103 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 12:02:09.415353  166103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 12:02:09.415378  166103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 12:02:09.415439  166103 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-991309 san=[127.0.0.1 192.168.50.125 default-k8s-diff-port-991309 localhost minikube]
	I0617 12:02:09.534010  166103 provision.go:177] copyRemoteCerts
	I0617 12:02:09.534082  166103 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 12:02:09.534121  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.536707  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.537143  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.537176  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.537352  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.537516  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.537687  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.537840  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:09.622292  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0617 12:02:09.652653  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0617 12:02:09.676801  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 12:02:09.700701  166103 provision.go:87] duration metric: took 291.898478ms to configureAuth
	I0617 12:02:09.700734  166103 buildroot.go:189] setting minikube options for container-runtime
	I0617 12:02:09.700931  166103 config.go:182] Loaded profile config "default-k8s-diff-port-991309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:02:09.701023  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.703710  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.704138  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.704171  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.704330  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.704537  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.704730  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.704895  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.705058  166103 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:09.705243  166103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0617 12:02:09.705262  166103 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 12:02:09.974077  166103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 12:02:09.974109  166103 machine.go:97] duration metric: took 918.156221ms to provisionDockerMachine
	I0617 12:02:09.974120  166103 start.go:293] postStartSetup for "default-k8s-diff-port-991309" (driver="kvm2")
	I0617 12:02:09.974131  166103 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 12:02:09.974155  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:09.974502  166103 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 12:02:09.974544  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.977677  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.978073  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.978097  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.978225  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.978407  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.978583  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.978734  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:10.067068  166103 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 12:02:10.071843  166103 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 12:02:10.071870  166103 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 12:02:10.071934  166103 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 12:02:10.072024  166103 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 12:02:10.072128  166103 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 12:02:10.082041  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:02:10.107855  166103 start.go:296] duration metric: took 133.717924ms for postStartSetup
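
The postStartSetup step above scans the local .minikube/addons and .minikube/files trees and syncs any hits into the guest; here the only match is files/etc/ssl/certs/1201742.pem, which is recreated under /etc/ssl/certs on the VM. Below is a minimal, purely illustrative sketch of that scan (not minikube's filesync.go): it assumes a local tree whose layout mirrors the destination paths.

    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
    )

    // listLocalAssets walks a local "files" tree (e.g. ~/.minikube/files) and
    // returns source->destination pairs, where the destination is the path
    // relative to the tree root (files/etc/ssl/certs/x.pem -> /etc/ssl/certs/x.pem).
    func listLocalAssets(root string) (map[string]string, error) {
        assets := map[string]string{}
        err := filepath.WalkDir(root, func(path string, d fs.DirEntry, walkErr error) error {
            if walkErr != nil || d.IsDir() {
                return walkErr
            }
            rel, err := filepath.Rel(root, path)
            if err != nil {
                return err
            }
            assets[path] = "/" + filepath.ToSlash(rel)
            return nil
        })
        return assets, err
    }

    func main() {
        // "./files" is a hypothetical local tree; the run above used
        // /home/jenkins/minikube-integration/19084-112967/.minikube/files.
        assets, err := listLocalAssets("./files")
        if err != nil {
            fmt.Println("scan failed:", err)
            return
        }
        for src, dst := range assets {
            fmt.Printf("would copy %s -> %s\n", src, dst)
        }
    }
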
	I0617 12:02:10.107903  166103 fix.go:56] duration metric: took 19.607369349s for fixHost
	I0617 12:02:10.107932  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:10.110742  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.111135  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:10.111169  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.111294  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:10.111527  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:10.111674  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:10.111861  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:10.111980  166103 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:10.112205  166103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0617 12:02:10.112220  166103 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0617 12:02:10.216945  166103 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718625730.186446687
	
	I0617 12:02:10.216973  166103 fix.go:216] guest clock: 1718625730.186446687
	I0617 12:02:10.216983  166103 fix.go:229] Guest: 2024-06-17 12:02:10.186446687 +0000 UTC Remote: 2024-06-17 12:02:10.107909348 +0000 UTC m=+152.716337101 (delta=78.537339ms)
	I0617 12:02:10.217033  166103 fix.go:200] guest clock delta is within tolerance: 78.537339ms
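
The fix.go lines compare the guest clock against the host's reading taken at roughly the same moment and accept the machine when the delta is small; the %!s(MISSING) placeholders above are only the logger failing to fill a format string, and the printed output shows the command effectively ran date +%s.%N. A rough sketch of the comparison, with an illustrative tolerance (not the value minikube uses):

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns "1718625730.186446687" (seconds.nanoseconds from
    // `date +%s.%N`) into a time.Time. It assumes a 9-digit fractional part.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1718625730.186446687")
        if err != nil {
            panic(err)
        }
        remote := time.Now() // host-side reading taken around the same instant
        delta := remote.Sub(guest)
        const tolerance = 2 * time.Second // illustrative threshold only
        fmt.Printf("delta=%v, within tolerance: %v\n",
            delta, math.Abs(delta.Seconds()) <= tolerance.Seconds())
    }
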
	I0617 12:02:10.217039  166103 start.go:83] releasing machines lock for "default-k8s-diff-port-991309", held for 19.716554323s
	I0617 12:02:10.217073  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:10.217363  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetIP
	I0617 12:02:10.220429  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.220897  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:10.220927  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.221083  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:10.221655  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:10.221870  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:10.221965  166103 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 12:02:10.222026  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:10.222094  166103 ssh_runner.go:195] Run: cat /version.json
	I0617 12:02:10.222122  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:10.225337  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.225673  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.225710  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:10.225730  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.226015  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:10.226172  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:10.226202  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:10.226242  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.226363  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:10.226447  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:10.226508  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:10.226591  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:10.226687  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:10.226840  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:10.334316  166103 ssh_runner.go:195] Run: systemctl --version
	I0617 12:02:10.340584  166103 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 12:02:10.489359  166103 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 12:02:10.497198  166103 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 12:02:10.497267  166103 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 12:02:10.517001  166103 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 12:02:10.517032  166103 start.go:494] detecting cgroup driver to use...
	I0617 12:02:10.517110  166103 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 12:02:10.536520  166103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 12:02:10.550478  166103 docker.go:217] disabling cri-docker service (if available) ...
	I0617 12:02:10.550542  166103 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 12:02:10.564437  166103 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 12:02:10.578554  166103 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 12:02:10.710346  166103 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 12:02:10.891637  166103 docker.go:233] disabling docker service ...
	I0617 12:02:10.891694  166103 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 12:02:10.908300  166103 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 12:02:10.921663  166103 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 12:02:11.062715  166103 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 12:02:11.201061  166103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 12:02:11.216120  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 12:02:11.237213  166103 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0617 12:02:11.237286  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.248171  166103 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 12:02:11.248238  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.259159  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.270217  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.280841  166103 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 12:02:11.291717  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.302084  166103 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.319559  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
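
The run of sed commands above adjusts /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.9, set cgroup_manager to cgroupfs, force conmon_cgroup to "pod", drop any stale net.mk CNI directory, and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. As a sketch, the pause-image rewrite alone could look like the Go below (the path and image come from the log; the helper itself is illustrative, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setPauseImage rewrites any existing pause_image assignment in a CRI-O
    // drop-in, mirroring: sed -i 's|^.*pause_image = .*$|pause_image = "IMG"|'
    func setPauseImage(path, image string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        out := re.ReplaceAll(data, []byte(fmt.Sprintf(`pause_image = %q`, image)))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.9"); err != nil {
            fmt.Println("rewrite failed:", err)
        }
    }
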
	I0617 12:02:11.331992  166103 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 12:02:11.342435  166103 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 12:02:11.342494  166103 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 12:02:11.357436  166103 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 12:02:11.367406  166103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:02:11.493416  166103 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0617 12:02:11.629980  166103 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 12:02:11.630055  166103 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 12:02:11.636456  166103 start.go:562] Will wait 60s for crictl version
	I0617 12:02:11.636540  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:02:11.642817  166103 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 12:02:11.681563  166103 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 12:02:11.681655  166103 ssh_runner.go:195] Run: crio --version
	I0617 12:02:11.712576  166103 ssh_runner.go:195] Run: crio --version
	I0617 12:02:11.753826  166103 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0617 12:02:11.755256  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetIP
	I0617 12:02:11.758628  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:11.759006  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:11.759041  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:11.759252  166103 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0617 12:02:11.763743  166103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
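
The bash one-liner above is the usual pattern for pinning host.minikube.internal: filter out any existing line for that name, append a fresh "IP<tab>name" pair, and copy the result back over /etc/hosts. The same idea as a small Go sketch (illustrative; the gateway IP and hostname are taken from the log):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pinHost returns the contents of an /etc/hosts file with any existing entry
    // for name removed and a fresh "ip\tname" line appended, mirroring
    // { grep -v $'\tNAME$' /etc/hosts; echo "IP\tNAME"; }.
    func pinHost(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(hosts, "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop the stale mapping
            }
            kept = append(kept, line)
        }
        joined := strings.TrimRight(strings.Join(kept, "\n"), "\n")
        return joined + fmt.Sprintf("\n%s\t%s\n", ip, name)
    }

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Println("read failed:", err)
            return
        }
        fmt.Print(pinHost(string(data), "192.168.50.1", "host.minikube.internal"))
    }
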
	I0617 12:02:11.780286  166103 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-991309 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-991309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 12:02:11.780455  166103 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 12:02:11.780528  166103 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:02:11.819396  166103 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0617 12:02:11.819481  166103 ssh_runner.go:195] Run: which lz4
	I0617 12:02:11.824047  166103 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0617 12:02:11.828770  166103 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0617 12:02:11.828807  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0617 12:02:08.127233  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:08.626498  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:09.126712  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:09.627284  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:10.126446  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:10.627249  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:11.126428  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:11.626638  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:12.127091  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:12.627361  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:10.226209  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:12.227824  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:10.246388  164809 main.go:141] libmachine: (no-preload-152830) Calling .Start
	I0617 12:02:10.246608  164809 main.go:141] libmachine: (no-preload-152830) Ensuring networks are active...
	I0617 12:02:10.247397  164809 main.go:141] libmachine: (no-preload-152830) Ensuring network default is active
	I0617 12:02:10.247789  164809 main.go:141] libmachine: (no-preload-152830) Ensuring network mk-no-preload-152830 is active
	I0617 12:02:10.248192  164809 main.go:141] libmachine: (no-preload-152830) Getting domain xml...
	I0617 12:02:10.248869  164809 main.go:141] libmachine: (no-preload-152830) Creating domain...
	I0617 12:02:11.500721  164809 main.go:141] libmachine: (no-preload-152830) Waiting to get IP...
	I0617 12:02:11.501614  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:11.502169  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:11.502254  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:11.502131  166976 retry.go:31] will retry after 281.343691ms: waiting for machine to come up
	I0617 12:02:11.785597  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:11.786047  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:11.786082  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:11.785983  166976 retry.go:31] will retry after 303.221815ms: waiting for machine to come up
	I0617 12:02:12.090367  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:12.090919  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:12.090945  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:12.090826  166976 retry.go:31] will retry after 422.250116ms: waiting for machine to come up
	I0617 12:02:12.514456  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:12.515026  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:12.515055  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:12.515001  166976 retry.go:31] will retry after 513.394077ms: waiting for machine to come up
	I0617 12:02:13.029811  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:13.030495  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:13.030522  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:13.030449  166976 retry.go:31] will retry after 596.775921ms: waiting for machine to come up
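
While default-k8s-diff-port-991309 is being provisioned, the no-preload-152830 VM is still booting, so libmachine polls for a DHCP lease with growing, jittered delays (281ms, 303ms, 422ms, 513ms, 596ms, ...). A generic sketch of that shape of backoff loop with a caller-supplied probe (this is not minikube's retry.go, only an illustration):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff calls probe until it succeeds or attempts run out,
    // sleeping a jittered, growing delay between tries.
    func retryWithBackoff(attempts int, base time.Duration, probe func() error) error {
        delay := base
        var err error
        for i := 0; i < attempts; i++ {
            if err = probe(); err == nil {
                return nil
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay)/2))
            fmt.Printf("will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
            delay = delay * 3 / 2 // grow roughly 1.5x per attempt
        }
        return err
    }

    func main() {
        tries := 0
        err := retryWithBackoff(10, 250*time.Millisecond, func() error {
            tries++
            if tries < 4 {
                return errors.New("unable to find current IP address of domain")
            }
            return nil
        })
        fmt.Println("done:", err)
    }
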
	I0617 12:02:13.387031  166103 crio.go:462] duration metric: took 1.563017054s to copy over tarball
	I0617 12:02:13.387108  166103 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0617 12:02:15.664139  166103 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.276994761s)
	I0617 12:02:15.664177  166103 crio.go:469] duration metric: took 2.277117031s to extract the tarball
	I0617 12:02:15.664188  166103 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0617 12:02:15.703690  166103 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:02:15.757605  166103 crio.go:514] all images are preloaded for cri-o runtime.
	I0617 12:02:15.757634  166103 cache_images.go:84] Images are preloaded, skipping loading
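
Because `crictl images` found none of the expected images, the ~395 MB preload tarball is copied to the guest and unpacked over /var with lz4, preserving security xattrs, after which the image store is complete and image loading is skipped. A sketch of the extraction step only, shown as a local exec call rather than the SSH-run command in the log (the wrapper is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // extractPreload unpacks an lz4-compressed image tarball over /var, keeping
    // security.capability xattrs, mirroring the tar invocation in the log above.
    func extractPreload(tarball string) error {
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("tar failed: %v\n%s", err, out)
        }
        return nil
    }

    func main() {
        start := time.Now()
        if err := extractPreload("/preloaded.tar.lz4"); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("took %v to extract the tarball\n", time.Since(start))
    }
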
	I0617 12:02:15.757644  166103 kubeadm.go:928] updating node { 192.168.50.125 8444 v1.30.1 crio true true} ...
	I0617 12:02:15.757784  166103 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-991309 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-991309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 12:02:15.757874  166103 ssh_runner.go:195] Run: crio config
	I0617 12:02:15.808350  166103 cni.go:84] Creating CNI manager for ""
	I0617 12:02:15.808380  166103 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:02:15.808397  166103 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 12:02:15.808434  166103 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.125 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-991309 NodeName:default-k8s-diff-port-991309 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0617 12:02:15.808633  166103 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.125
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-991309"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0617 12:02:15.808709  166103 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0617 12:02:15.818891  166103 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 12:02:15.818964  166103 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 12:02:15.828584  166103 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0617 12:02:15.846044  166103 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 12:02:15.862572  166103 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0617 12:02:15.880042  166103 ssh_runner.go:195] Run: grep 192.168.50.125	control-plane.minikube.internal$ /etc/hosts
	I0617 12:02:15.884470  166103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:02:15.897031  166103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:02:16.013826  166103 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:02:16.030366  166103 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309 for IP: 192.168.50.125
	I0617 12:02:16.030391  166103 certs.go:194] generating shared ca certs ...
	I0617 12:02:16.030408  166103 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:02:16.030590  166103 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 12:02:16.030650  166103 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 12:02:16.030668  166103 certs.go:256] generating profile certs ...
	I0617 12:02:16.030793  166103 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/client.key
	I0617 12:02:16.030876  166103 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/apiserver.key.02769a34
	I0617 12:02:16.030919  166103 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/proxy-client.key
	I0617 12:02:16.031024  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 12:02:16.031051  166103 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 12:02:16.031060  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 12:02:16.031080  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 12:02:16.031103  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 12:02:16.031122  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 12:02:16.031179  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:02:16.031991  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 12:02:16.066789  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 12:02:16.094522  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 12:02:16.119693  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 12:02:16.155810  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0617 12:02:16.186788  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 12:02:16.221221  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 12:02:16.248948  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0617 12:02:16.273404  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 12:02:16.296958  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 12:02:16.320047  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 12:02:16.349598  166103 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 12:02:16.367499  166103 ssh_runner.go:195] Run: openssl version
	I0617 12:02:16.373596  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 12:02:16.384778  166103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 12:02:16.389521  166103 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 12:02:16.389574  166103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 12:02:16.395523  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
	I0617 12:02:16.406357  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 12:02:16.417139  166103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 12:02:16.421629  166103 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 12:02:16.421679  166103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 12:02:16.427323  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 12:02:16.438649  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 12:02:16.450042  166103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:16.454587  166103 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:16.454636  166103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:16.460677  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 12:02:16.472886  166103 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 12:02:16.477630  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 12:02:16.483844  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 12:02:16.490123  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 12:02:16.497606  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 12:02:16.504066  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 12:02:16.510597  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
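
Each `openssl x509 -checkend 86400` call above asks whether a control-plane certificate expires within the next 24 hours; a non-zero exit would force regeneration before kubeadm runs. The equivalent check can be done directly against the PEM file, as in this sketch (the path is one of those probed above; the helper is illustrative):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in a PEM file expires
    // within d, i.e. the Go equivalent of `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, errors.New("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Println("check failed:", err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }
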
	I0617 12:02:16.518270  166103 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-991309 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-991309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:02:16.518371  166103 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 12:02:16.518439  166103 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:02:16.569103  166103 cri.go:89] found id: ""
	I0617 12:02:16.569179  166103 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0617 12:02:16.580328  166103 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0617 12:02:16.580353  166103 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0617 12:02:16.580360  166103 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0617 12:02:16.580409  166103 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0617 12:02:16.591277  166103 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0617 12:02:16.592450  166103 kubeconfig.go:125] found "default-k8s-diff-port-991309" server: "https://192.168.50.125:8444"
	I0617 12:02:16.594770  166103 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0617 12:02:16.605669  166103 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.125
	I0617 12:02:16.605728  166103 kubeadm.go:1154] stopping kube-system containers ...
	I0617 12:02:16.605745  166103 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0617 12:02:16.605810  166103 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:02:16.654529  166103 cri.go:89] found id: ""
	I0617 12:02:16.654620  166103 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0617 12:02:16.672923  166103 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:02:16.683485  166103 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:02:16.683514  166103 kubeadm.go:156] found existing configuration files:
	
	I0617 12:02:16.683576  166103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0617 12:02:16.693533  166103 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:02:16.693614  166103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:02:16.703670  166103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0617 12:02:16.716352  166103 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:02:16.716413  166103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:02:16.729336  166103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0617 12:02:16.739183  166103 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:02:16.739249  166103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:02:16.748978  166103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0617 12:02:16.758195  166103 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:02:16.758262  166103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 12:02:16.767945  166103 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:02:16.777773  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:16.919605  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:13.126836  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:13.626460  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:14.127261  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:14.627161  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:15.126580  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:15.627082  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:16.127163  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:16.626524  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:17.126469  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:17.626488  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:14.728717  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:17.225452  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:13.629097  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:13.629723  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:13.629826  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:13.629705  166976 retry.go:31] will retry after 588.18471ms: waiting for machine to come up
	I0617 12:02:14.219111  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:14.219672  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:14.219704  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:14.219611  166976 retry.go:31] will retry after 889.359727ms: waiting for machine to come up
	I0617 12:02:15.110916  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:15.111528  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:15.111559  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:15.111473  166976 retry.go:31] will retry after 1.139454059s: waiting for machine to come up
	I0617 12:02:16.252051  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:16.252601  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:16.252636  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:16.252534  166976 retry.go:31] will retry after 1.189357648s: waiting for machine to come up
	I0617 12:02:17.443845  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:17.444370  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:17.444403  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:17.444310  166976 retry.go:31] will retry after 1.614769478s: waiting for machine to come up
	I0617 12:02:18.068811  166103 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.149162388s)
	I0617 12:02:18.068870  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:18.301209  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:18.362153  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:18.454577  166103 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:02:18.454674  166103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:18.954929  166103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:19.454795  166103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:19.505453  166103 api_server.go:72] duration metric: took 1.050874914s to wait for apiserver process to appear ...
	I0617 12:02:19.505490  166103 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:02:19.505518  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:19.506056  166103 api_server.go:269] stopped: https://192.168.50.125:8444/healthz: Get "https://192.168.50.125:8444/healthz": dial tcp 192.168.50.125:8444: connect: connection refused
	I0617 12:02:20.005681  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:22.216162  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:02:22.216214  166103 api_server.go:103] status: https://192.168.50.125:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:02:22.216234  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:22.239561  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:02:22.239635  166103 api_server.go:103] status: https://192.168.50.125:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:02:18.126897  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:18.627145  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:19.126724  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:19.626498  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:20.126389  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:20.627190  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:21.126480  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:21.627210  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:22.127273  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:22.626691  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:19.227344  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:21.725689  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:19.061035  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:19.061555  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:19.061588  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:19.061520  166976 retry.go:31] will retry after 2.385838312s: waiting for machine to come up
	I0617 12:02:21.448745  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:21.449239  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:21.449266  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:21.449208  166976 retry.go:31] will retry after 3.308788046s: waiting for machine to come up
	I0617 12:02:22.505636  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:22.509888  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:02:22.509916  166103 api_server.go:103] status: https://192.168.50.125:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:02:23.006285  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:23.011948  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:02:23.011983  166103 api_server.go:103] status: https://192.168.50.125:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:02:23.505640  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:23.510358  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 200:
	ok
	I0617 12:02:23.516663  166103 api_server.go:141] control plane version: v1.30.1
	I0617 12:02:23.516686  166103 api_server.go:131] duration metric: took 4.011188976s to wait for apiserver health ...
	I0617 12:02:23.516694  166103 cni.go:84] Creating CNI manager for ""
	I0617 12:02:23.516700  166103 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:02:23.518498  166103 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0617 12:02:23.519722  166103 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0617 12:02:23.530145  166103 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0617 12:02:23.552805  166103 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:02:23.564825  166103 system_pods.go:59] 8 kube-system pods found
	I0617 12:02:23.564853  166103 system_pods.go:61] "coredns-7db6d8ff4d-mnw24" [1e6c4ff3-f0dc-43da-abd8-baaed7dca40c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0617 12:02:23.564863  166103 system_pods.go:61] "etcd-default-k8s-diff-port-991309" [820a4f27-cf83-4edb-a2ea-edba6673d851] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0617 12:02:23.564871  166103 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-991309" [26e6c19d-6f70-4924-83f5-563c8508c9e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0617 12:02:23.564877  166103 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-991309" [01e7c468-98a6-48f3-a158-59e97fa8279c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0617 12:02:23.564885  166103 system_pods.go:61] "kube-proxy-jn5kp" [d6935148-7ee8-4655-8327-9f1ee4c933de] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0617 12:02:23.564894  166103 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-991309" [53ecd22c-05cf-48a5-b7e5-925392085f7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0617 12:02:23.564899  166103 system_pods.go:61] "metrics-server-569cc877fc-n2svp" [5b637d97-3183-4324-98cf-dd69a2968578] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:02:23.564908  166103 system_pods.go:61] "storage-provisioner" [92b20aec-29c2-4256-86be-7f58f66585dd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0617 12:02:23.564913  166103 system_pods.go:74] duration metric: took 12.089276ms to wait for pod list to return data ...
	I0617 12:02:23.564919  166103 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:02:23.573455  166103 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:02:23.573480  166103 node_conditions.go:123] node cpu capacity is 2
	I0617 12:02:23.573492  166103 node_conditions.go:105] duration metric: took 8.568721ms to run NodePressure ...
	I0617 12:02:23.573509  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:23.918292  166103 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0617 12:02:23.922992  166103 kubeadm.go:733] kubelet initialised
	I0617 12:02:23.923019  166103 kubeadm.go:734] duration metric: took 4.69627ms waiting for restarted kubelet to initialise ...
	I0617 12:02:23.923027  166103 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:02:23.927615  166103 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:23.932203  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.932225  166103 pod_ready.go:81] duration metric: took 4.590359ms for pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:23.932233  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.932239  166103 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:23.936802  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.936825  166103 pod_ready.go:81] duration metric: took 4.579036ms for pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:23.936835  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.936840  166103 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:23.942877  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.942903  166103 pod_ready.go:81] duration metric: took 6.055748ms for pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:23.942927  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.942935  166103 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:23.955830  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.955851  166103 pod_ready.go:81] duration metric: took 12.903911ms for pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:23.955861  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.955869  166103 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jn5kp" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:24.356654  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "kube-proxy-jn5kp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:24.356682  166103 pod_ready.go:81] duration metric: took 400.805294ms for pod "kube-proxy-jn5kp" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:24.356692  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "kube-proxy-jn5kp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:24.356699  166103 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:24.765108  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:24.765133  166103 pod_ready.go:81] duration metric: took 408.42568ms for pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:24.765145  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:24.765152  166103 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:25.156898  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:25.156927  166103 pod_ready.go:81] duration metric: took 391.769275ms for pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:25.156939  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:25.156946  166103 pod_ready.go:38] duration metric: took 1.233911476s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:02:25.156968  166103 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0617 12:02:25.170925  166103 ops.go:34] apiserver oom_adj: -16
	I0617 12:02:25.170963  166103 kubeadm.go:591] duration metric: took 8.590593327s to restartPrimaryControlPlane
	I0617 12:02:25.170976  166103 kubeadm.go:393] duration metric: took 8.652716269s to StartCluster
	I0617 12:02:25.170998  166103 settings.go:142] acquiring lock: {Name:mkf6da6d5dcdf32cef469c2b75da17d11fa1e39e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:02:25.171111  166103 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 12:02:25.173919  166103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/kubeconfig: {Name:mkf81bd1831c0194f784e5c176b265c5061bea5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:02:25.174286  166103 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.125 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 12:02:25.176186  166103 out.go:177] * Verifying Kubernetes components...
	I0617 12:02:25.174347  166103 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0617 12:02:25.174528  166103 config.go:182] Loaded profile config "default-k8s-diff-port-991309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:02:25.177622  166103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:02:25.177632  166103 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-991309"
	I0617 12:02:25.177670  166103 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-991309"
	W0617 12:02:25.177684  166103 addons.go:243] addon metrics-server should already be in state true
	I0617 12:02:25.177721  166103 host.go:66] Checking if "default-k8s-diff-port-991309" exists ...
	I0617 12:02:25.177622  166103 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-991309"
	I0617 12:02:25.177789  166103 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-991309"
	W0617 12:02:25.177806  166103 addons.go:243] addon storage-provisioner should already be in state true
	I0617 12:02:25.177837  166103 host.go:66] Checking if "default-k8s-diff-port-991309" exists ...
	I0617 12:02:25.177628  166103 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-991309"
	I0617 12:02:25.177875  166103 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-991309"
	I0617 12:02:25.178173  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.178202  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.178251  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.178282  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.178299  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.178318  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.198817  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32781
	I0617 12:02:25.199064  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36763
	I0617 12:02:25.199513  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39825
	I0617 12:02:25.199902  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.199919  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.200633  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.201080  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.201110  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.201270  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.201286  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.201415  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.201427  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.201482  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.201786  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.201845  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.202268  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.202637  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.202663  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetState
	I0617 12:02:25.202989  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.203038  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.206439  166103 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-991309"
	W0617 12:02:25.206462  166103 addons.go:243] addon default-storageclass should already be in state true
	I0617 12:02:25.206492  166103 host.go:66] Checking if "default-k8s-diff-port-991309" exists ...
	I0617 12:02:25.206875  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.206921  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.218501  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37189
	I0617 12:02:25.218532  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34089
	I0617 12:02:25.218912  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.218986  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.219410  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.219429  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.219545  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.219561  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.219917  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.219920  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.220110  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetState
	I0617 12:02:25.220111  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetState
	I0617 12:02:25.221839  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:25.223920  166103 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0617 12:02:25.225213  166103 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0617 12:02:25.225232  166103 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0617 12:02:25.225260  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:25.224029  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:25.228780  166103 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:25.227545  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46073
	I0617 12:02:25.230084  166103 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:02:25.230100  166103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0617 12:02:25.230113  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:25.228465  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.229054  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:25.230179  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:25.229303  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.230215  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.230371  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:25.230542  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:25.230674  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:25.230723  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.230737  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.231150  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.231772  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.231802  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.234036  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.234476  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:25.234494  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.234755  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:25.234919  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:25.235079  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:25.235235  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:25.248352  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46349
	I0617 12:02:25.248851  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.249306  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.249330  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.249681  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.249873  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetState
	I0617 12:02:25.251282  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:25.251512  166103 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0617 12:02:25.251529  166103 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0617 12:02:25.251551  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:25.253963  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.254458  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:25.254484  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.254628  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:25.254941  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:25.255229  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:25.255385  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:25.391207  166103 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:02:25.411906  166103 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-991309" to be "Ready" ...
	I0617 12:02:25.476025  166103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0617 12:02:25.566470  166103 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0617 12:02:25.566500  166103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0617 12:02:25.593744  166103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:02:25.620336  166103 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0617 12:02:25.620371  166103 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0617 12:02:25.700009  166103 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:02:25.700048  166103 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0617 12:02:25.769841  166103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:02:25.782207  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:25.782240  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:25.782576  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Closing plugin on server side
	I0617 12:02:25.782597  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:25.782610  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:25.782623  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:25.782632  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:25.782888  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:25.782916  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:25.789639  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:25.789662  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:25.789921  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:25.789941  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:26.600819  166103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.007014283s)
	I0617 12:02:26.600883  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:26.600898  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:26.600902  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:26.600917  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:26.601253  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:26.601295  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:26.601305  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:26.601325  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:26.601342  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:26.601353  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:26.601366  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:26.601370  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:26.601571  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Closing plugin on server side
	I0617 12:02:26.601590  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:26.601600  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:26.601615  166103 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-991309"
	I0617 12:02:26.601626  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:26.601635  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Closing plugin on server side
	I0617 12:02:26.601638  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:26.604200  166103 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0617 12:02:26.605477  166103 addons.go:510] duration metric: took 1.431148263s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0617 12:02:27.415122  166103 node_ready.go:53] node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.126888  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:23.627274  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:24.127019  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:24.627337  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:25.126642  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:25.627064  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:26.126606  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:26.626803  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:27.126825  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:27.626799  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:24.223344  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:26.225129  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:24.760577  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:24.761063  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:24.761095  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:24.760999  166976 retry.go:31] will retry after 3.793168135s: waiting for machine to come up
	I0617 12:02:28.558153  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.558708  164809 main.go:141] libmachine: (no-preload-152830) Found IP for machine: 192.168.39.173
	I0617 12:02:28.558735  164809 main.go:141] libmachine: (no-preload-152830) Reserving static IP address...
	I0617 12:02:28.558751  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has current primary IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.559214  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "no-preload-152830", mac: "52:54:00:c0:1a:fb", ip: "192.168.39.173"} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.559248  164809 main.go:141] libmachine: (no-preload-152830) DBG | skip adding static IP to network mk-no-preload-152830 - found existing host DHCP lease matching {name: "no-preload-152830", mac: "52:54:00:c0:1a:fb", ip: "192.168.39.173"}
	I0617 12:02:28.559263  164809 main.go:141] libmachine: (no-preload-152830) Reserved static IP address: 192.168.39.173
	I0617 12:02:28.559278  164809 main.go:141] libmachine: (no-preload-152830) Waiting for SSH to be available...
	I0617 12:02:28.559295  164809 main.go:141] libmachine: (no-preload-152830) DBG | Getting to WaitForSSH function...
	I0617 12:02:28.562122  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.562453  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.562482  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.562678  164809 main.go:141] libmachine: (no-preload-152830) DBG | Using SSH client type: external
	I0617 12:02:28.562706  164809 main.go:141] libmachine: (no-preload-152830) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa (-rw-------)
	I0617 12:02:28.562739  164809 main.go:141] libmachine: (no-preload-152830) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.173 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 12:02:28.562753  164809 main.go:141] libmachine: (no-preload-152830) DBG | About to run SSH command:
	I0617 12:02:28.562770  164809 main.go:141] libmachine: (no-preload-152830) DBG | exit 0
	I0617 12:02:28.687683  164809 main.go:141] libmachine: (no-preload-152830) DBG | SSH cmd err, output: <nil>: 
	I0617 12:02:28.688021  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetConfigRaw
	I0617 12:02:28.688649  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetIP
	I0617 12:02:28.691248  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.691585  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.691609  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.691895  164809 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/config.json ...
	I0617 12:02:28.692109  164809 machine.go:94] provisionDockerMachine start ...
	I0617 12:02:28.692132  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:28.692371  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:28.694371  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.694738  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.694766  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.694942  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:28.695130  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.695309  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.695490  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:28.695695  164809 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:28.695858  164809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0617 12:02:28.695869  164809 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 12:02:28.803687  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0617 12:02:28.803726  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetMachineName
	I0617 12:02:28.803996  164809 buildroot.go:166] provisioning hostname "no-preload-152830"
	I0617 12:02:28.804031  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetMachineName
	I0617 12:02:28.804333  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:28.806959  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.807395  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.807424  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.807547  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:28.807725  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.807895  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.808057  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:28.808216  164809 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:28.808420  164809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0617 12:02:28.808436  164809 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-152830 && echo "no-preload-152830" | sudo tee /etc/hostname
	I0617 12:02:28.931222  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-152830
	
	I0617 12:02:28.931259  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:28.934188  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.934536  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.934564  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.934822  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:28.935048  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.935218  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.935353  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:28.935593  164809 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:28.935814  164809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0617 12:02:28.935837  164809 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-152830' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-152830/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-152830' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 12:02:29.054126  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 12:02:29.054156  164809 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 12:02:29.054173  164809 buildroot.go:174] setting up certificates
	I0617 12:02:29.054184  164809 provision.go:84] configureAuth start
	I0617 12:02:29.054195  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetMachineName
	I0617 12:02:29.054490  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetIP
	I0617 12:02:29.057394  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.057797  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.057830  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.057963  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:29.060191  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.060485  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.060514  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.060633  164809 provision.go:143] copyHostCerts
	I0617 12:02:29.060708  164809 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 12:02:29.060722  164809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 12:02:29.060796  164809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 12:02:29.060963  164809 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 12:02:29.060978  164809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 12:02:29.061003  164809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 12:02:29.061065  164809 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 12:02:29.061072  164809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 12:02:29.061090  164809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 12:02:29.061139  164809 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.no-preload-152830 san=[127.0.0.1 192.168.39.173 localhost minikube no-preload-152830]
	I0617 12:02:29.321179  164809 provision.go:177] copyRemoteCerts
	I0617 12:02:29.321232  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 12:02:29.321256  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:29.324217  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.324612  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.324642  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.324836  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:29.325043  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.325227  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:29.325386  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:02:29.410247  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 12:02:29.435763  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0617 12:02:29.462900  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
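The server certificate generated above (SANs: 127.0.0.1, 192.168.39.173, localhost, minikube, no-preload-152830) has now been copied to /etc/docker on the guest. To confirm the SANs actually made it into the certificate, a sketch using the host-side copy of the same file, assuming openssl is available on the Jenkins host:

    # Print the Subject Alternative Name extension of the provisioned server certificate.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem \
      | grep -A1 "Subject Alternative Name"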
	I0617 12:02:29.491078  164809 provision.go:87] duration metric: took 436.876068ms to configureAuth
	I0617 12:02:29.491120  164809 buildroot.go:189] setting minikube options for container-runtime
	I0617 12:02:29.491377  164809 config.go:182] Loaded profile config "no-preload-152830": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:02:29.491522  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:29.494581  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.495019  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.495052  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.495245  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:29.495555  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.495766  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.495897  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:29.496068  164809 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:29.496275  164809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0617 12:02:29.496296  164809 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 12:02:29.774692  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 12:02:29.774730  164809 machine.go:97] duration metric: took 1.082604724s to provisionDockerMachine
	I0617 12:02:29.774748  164809 start.go:293] postStartSetup for "no-preload-152830" (driver="kvm2")
	I0617 12:02:29.774765  164809 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 12:02:29.774785  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:29.775181  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 12:02:29.775220  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:29.778574  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.778959  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.778988  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.779154  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:29.779351  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.779575  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:29.779750  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:02:29.866959  164809 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 12:02:29.871319  164809 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 12:02:29.871348  164809 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 12:02:29.871425  164809 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 12:02:29.871535  164809 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 12:02:29.871648  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 12:02:29.881995  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:02:29.907614  164809 start.go:296] duration metric: took 132.84708ms for postStartSetup
	I0617 12:02:29.907669  164809 fix.go:56] duration metric: took 19.690465972s for fixHost
	I0617 12:02:29.907695  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:29.910226  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.910617  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.910644  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.910811  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:29.911162  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.911377  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.911571  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:29.911772  164809 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:29.911961  164809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0617 12:02:29.911972  164809 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0617 12:02:30.021051  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718625749.993041026
	
	I0617 12:02:30.021079  164809 fix.go:216] guest clock: 1718625749.993041026
	I0617 12:02:30.021088  164809 fix.go:229] Guest: 2024-06-17 12:02:29.993041026 +0000 UTC Remote: 2024-06-17 12:02:29.907674102 +0000 UTC m=+356.579226401 (delta=85.366924ms)
	I0617 12:02:30.021113  164809 fix.go:200] guest clock delta is within tolerance: 85.366924ms
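The reported delta is simply the difference between the two fractional-second readings, both of which fall in the same Unix second (1718625749):

    1718625749.993041026 (guest) - 1718625749.907674102 (host) = 0.085366924 s ≈ 85.37 ms

which is why the line above accepts the guest clock as being within tolerance.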
	I0617 12:02:30.021120  164809 start.go:83] releasing machines lock for "no-preload-152830", held for 19.803953246s
	I0617 12:02:30.021148  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:30.021403  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetIP
	I0617 12:02:30.024093  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.024600  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:30.024633  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.024830  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:30.025380  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:30.025552  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:30.025623  164809 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 12:02:30.025668  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:30.025767  164809 ssh_runner.go:195] Run: cat /version.json
	I0617 12:02:30.025798  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:30.028656  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.028826  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.029037  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:30.029068  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.029294  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:30.029336  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:30.029366  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.029528  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:30.029536  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:30.029764  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:30.029776  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:30.029957  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:02:30.029984  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:30.030161  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:02:30.135901  164809 ssh_runner.go:195] Run: systemctl --version
	I0617 12:02:30.142668  164809 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 12:02:30.296485  164809 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 12:02:30.302789  164809 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 12:02:30.302856  164809 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 12:02:30.319775  164809 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 12:02:30.319793  164809 start.go:494] detecting cgroup driver to use...
	I0617 12:02:30.319894  164809 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 12:02:30.335498  164809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 12:02:30.349389  164809 docker.go:217] disabling cri-docker service (if available) ...
	I0617 12:02:30.349427  164809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 12:02:30.363086  164809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 12:02:30.377383  164809 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 12:02:30.499956  164809 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 12:02:30.644098  164809 docker.go:233] disabling docker service ...
	I0617 12:02:30.644178  164809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 12:02:30.661490  164809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 12:02:30.675856  164809 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 12:02:30.819937  164809 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 12:02:30.932926  164809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 12:02:30.947638  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 12:02:30.966574  164809 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0617 12:02:30.966648  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:30.978339  164809 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 12:02:30.978416  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:30.989950  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:31.000644  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:31.011280  164809 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 12:02:31.022197  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:31.032780  164809 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:31.050053  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
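The shell invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, and lower the unprivileged-port floor to 0 via default_sysctls. Assuming the sed/grep edits applied cleanly, the resulting drop-in can be spot-checked with a sketch like:

    # Expected (roughly) after the edits above, per the commands themselves:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ... ]
    minikube -p no-preload-152830 ssh \
      "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"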
	I0617 12:02:31.062065  164809 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 12:02:31.073296  164809 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 12:02:31.073368  164809 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 12:02:31.087733  164809 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
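The sysctl probe fails because the br_netfilter module is not loaded yet; the /proc/sys/net/bridge tree only appears once it is, which is why the code falls back to modprobe. A manual equivalent of this fallback, as a sketch:

    # Load the bridge netfilter module, then re-check the sysctl and IPv4 forwarding flags.
    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables
    cat /proc/sys/net/ipv4/ip_forward   # the command above writes 1 here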
	I0617 12:02:31.098019  164809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:02:31.232495  164809 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0617 12:02:31.371236  164809 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 12:02:31.371312  164809 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 12:02:31.376442  164809 start.go:562] Will wait 60s for crictl version
	I0617 12:02:31.376522  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.380416  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 12:02:31.426664  164809 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 12:02:31.426763  164809 ssh_runner.go:195] Run: crio --version
	I0617 12:02:31.456696  164809 ssh_runner.go:195] Run: crio --version
	I0617 12:02:31.487696  164809 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0617 12:02:29.416369  166103 node_ready.go:53] node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:31.417357  166103 node_ready.go:53] node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:28.126854  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:28.627278  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:29.126577  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:29.626475  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:30.127193  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:30.627229  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:31.126478  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:31.626336  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:32.126398  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:32.627005  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
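The block of identical pgrep lines from process 165698 is a readiness poll: that profile's start path re-runs the same check roughly every 500ms until a kube-apiserver process matching the pattern shows up. In shell terms the loop is roughly the following (illustrative only; the real loop lives in minikube's Go code):

    # Poll for a running kube-apiserver whose full command line matches the pattern.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 0.5
    done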
	I0617 12:02:28.724801  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:30.726589  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:33.225707  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:31.488972  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetIP
	I0617 12:02:31.491812  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:31.492191  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:31.492220  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:31.492411  164809 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0617 12:02:31.497100  164809 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:02:31.510949  164809 kubeadm.go:877] updating cluster {Name:no-preload-152830 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:no-preload-152830 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 12:02:31.511079  164809 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 12:02:31.511114  164809 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:02:31.546350  164809 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0617 12:02:31.546377  164809 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.1 registry.k8s.io/kube-controller-manager:v1.30.1 registry.k8s.io/kube-scheduler:v1.30.1 registry.k8s.io/kube-proxy:v1.30.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0617 12:02:31.546440  164809 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:31.546452  164809 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0617 12:02:31.546478  164809 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0617 12:02:31.546485  164809 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0617 12:02:31.546513  164809 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0617 12:02:31.546513  164809 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0617 12:02:31.546458  164809 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0617 12:02:31.546569  164809 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0617 12:02:31.548101  164809 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0617 12:02:31.548123  164809 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0617 12:02:31.548123  164809 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0617 12:02:31.548137  164809 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:31.548101  164809 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0617 12:02:31.548104  164809 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0617 12:02:31.548103  164809 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0617 12:02:31.548427  164809 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0617 12:02:31.714107  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0617 12:02:31.714819  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0617 12:02:31.715764  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0617 12:02:31.721844  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.1
	I0617 12:02:31.722172  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1
	I0617 12:02:31.739873  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1
	I0617 12:02:31.746705  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1
	I0617 12:02:31.814194  164809 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0617 12:02:31.814235  164809 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0617 12:02:31.814273  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.849549  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:31.950803  164809 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0617 12:02:31.950858  164809 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0617 12:02:31.950907  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.950934  164809 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.1" does not exist at hash "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c" in container runtime
	I0617 12:02:31.950959  164809 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0617 12:02:31.950992  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.951005  164809 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.1" does not exist at hash "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035" in container runtime
	I0617 12:02:31.951030  164809 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.1" does not exist at hash "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a" in container runtime
	I0617 12:02:31.951053  164809 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.1
	I0617 12:02:31.951090  164809 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.1" does not exist at hash "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd" in container runtime
	I0617 12:02:31.951103  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.951113  164809 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.1
	I0617 12:02:31.951146  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.951053  164809 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.1
	I0617 12:02:31.951179  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.951217  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0617 12:02:31.951266  164809 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0617 12:02:31.951289  164809 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:31.951319  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.967596  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.1
	I0617 12:02:31.967802  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0617 12:02:32.018505  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:32.018542  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1
	I0617 12:02:32.018623  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.1
	I0617 12:02:32.018664  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0617 12:02:32.018738  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.1
	I0617 12:02:32.018755  164809 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0617 12:02:32.026154  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1
	I0617 12:02:32.026270  164809 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.1
	I0617 12:02:32.046161  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0617 12:02:32.046288  164809 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0617 12:02:32.126665  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0617 12:02:32.126755  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0617 12:02:32.126765  164809 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0617 12:02:32.126814  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1
	I0617 12:02:32.126829  164809 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0617 12:02:32.126867  164809 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0617 12:02:32.126898  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0617 12:02:32.126911  164809 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0617 12:02:32.126935  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0617 12:02:32.126965  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1
	I0617 12:02:32.127008  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.1 (exists)
	I0617 12:02:32.127058  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0617 12:02:32.127060  164809 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0617 12:02:32.142790  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.1 (exists)
	I0617 12:02:32.142816  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.1 (exists)
	I0617 12:02:32.143132  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0617 12:02:32.915885  166103 node_ready.go:49] node "default-k8s-diff-port-991309" has status "Ready":"True"
	I0617 12:02:32.915912  166103 node_ready.go:38] duration metric: took 7.503979113s for node "default-k8s-diff-port-991309" to be "Ready" ...
	I0617 12:02:32.915924  166103 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:02:32.921198  166103 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:34.927290  166103 pod_ready.go:102] pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace has status "Ready":"False"
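The node_ready/pod_ready lines are minikube's own wait loop for the default-k8s-diff-port-991309 profile. The same state can be checked from outside the harness; a sketch, assuming the kubeconfig context is named after the profile as minikube normally sets it:

    # Context name assumed to equal the profile name; adjust if the kubeconfig differs.
    kubectl --context default-k8s-diff-port-991309 get nodes
    kubectl --context default-k8s-diff-port-991309 -n kube-system get pods -l k8s-app=kube-dns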
	I0617 12:02:33.126753  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:33.627017  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:34.126558  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:34.626976  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:35.126410  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:35.627309  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:36.126958  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:36.626349  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:37.126815  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:37.627332  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:35.724326  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:37.725145  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:36.125679  164809 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.1: (3.998551072s)
	I0617 12:02:36.125727  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.1 (exists)
	I0617 12:02:36.125773  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.998809852s)
	I0617 12:02:36.125804  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0617 12:02:36.125838  164809 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.1
	I0617 12:02:36.125894  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1
	I0617 12:02:37.885028  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1: (1.759100554s)
	I0617 12:02:37.885054  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 from cache
	I0617 12:02:37.885073  164809 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0617 12:02:37.885122  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0617 12:02:37.429419  166103 pod_ready.go:102] pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:39.933476  166103 pod_ready.go:92] pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:39.933508  166103 pod_ready.go:81] duration metric: took 7.012285571s for pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.933521  166103 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.940139  166103 pod_ready.go:92] pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:39.940162  166103 pod_ready.go:81] duration metric: took 6.633405ms for pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.940175  166103 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.945285  166103 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:39.945305  166103 pod_ready.go:81] duration metric: took 5.12303ms for pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.945317  166103 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.950992  166103 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:39.951021  166103 pod_ready.go:81] duration metric: took 5.6962ms for pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.951034  166103 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jn5kp" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.955874  166103 pod_ready.go:92] pod "kube-proxy-jn5kp" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:39.955894  166103 pod_ready.go:81] duration metric: took 4.852842ms for pod "kube-proxy-jn5kp" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.955905  166103 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:40.327000  166103 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:40.327035  166103 pod_ready.go:81] duration metric: took 371.121545ms for pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:40.327049  166103 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:42.334620  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:38.126868  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:38.627367  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:39.127148  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:39.626571  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:40.126379  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:40.626747  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:41.126485  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:41.626372  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:42.126904  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:42.627293  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:39.727666  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:42.223700  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:39.992863  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.10770953s)
	I0617 12:02:39.992903  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0617 12:02:39.992934  164809 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0617 12:02:39.992989  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0617 12:02:41.851420  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1: (1.858400961s)
	I0617 12:02:41.851452  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 from cache
	I0617 12:02:41.851508  164809 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0617 12:02:41.851578  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0617 12:02:44.833842  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:46.834443  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:43.127137  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:43.626521  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:44.127017  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:44.626824  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:45.126475  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:45.626535  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:46.127423  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:46.626605  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:47.127029  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:47.627431  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:44.224685  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:46.225071  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:44.211669  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1: (2.360046418s)
	I0617 12:02:44.211702  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 from cache
	I0617 12:02:44.211726  164809 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0617 12:02:44.211795  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0617 12:02:45.162389  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0617 12:02:45.162456  164809 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0617 12:02:45.162542  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0617 12:02:47.414088  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1: (2.251500525s)
	I0617 12:02:47.414130  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 from cache
	I0617 12:02:47.414164  164809 cache_images.go:123] Successfully loaded all cached images
	I0617 12:02:47.414172  164809 cache_images.go:92] duration metric: took 15.867782566s to LoadCachedImages
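Since this profile has no preload tarball, each image is transferred from the local cache (the copies are skipped here because the tarballs already exist on the guest) and then loaded into the runtime's image store through podman, one image at a time. Stripped of the Go plumbing, the per-image pattern visible above is roughly:

    # Run inside the guest VM; etcd shown as one example, the same steps repeat per image.
    IMG=/var/lib/minikube/images/etcd_3.5.12-0
    stat "$IMG"                 # if the tarball is already present, the copy step is skipped
    sudo podman load -i "$IMG"  # import the image so the CRI-O runtime can use it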
	I0617 12:02:47.414195  164809 kubeadm.go:928] updating node { 192.168.39.173 8443 v1.30.1 crio true true} ...
	I0617 12:02:47.414359  164809 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-152830 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.173
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:no-preload-152830 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
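The [Unit]/[Service] fragment above becomes a systemd drop-in for kubelet; the scp of /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below is where it lands. To view the effective unit together with its drop-ins on the guest, a sketch:

    # systemctl cat prints kubelet.service plus every drop-in, including 10-kubeadm.conf.
    minikube -p no-preload-152830 ssh "systemctl cat kubelet"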
	I0617 12:02:47.414451  164809 ssh_runner.go:195] Run: crio config
	I0617 12:02:47.466472  164809 cni.go:84] Creating CNI manager for ""
	I0617 12:02:47.466493  164809 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:02:47.466503  164809 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 12:02:47.466531  164809 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.173 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-152830 NodeName:no-preload-152830 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.173"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.173 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0617 12:02:47.466716  164809 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.173
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-152830"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.173
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.173"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0617 12:02:47.466793  164809 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0617 12:02:47.478163  164809 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 12:02:47.478255  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 12:02:47.488014  164809 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0617 12:02:47.505143  164809 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 12:02:47.522481  164809 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
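The rendered kubeadm config is staged as kubeadm.yaml.new first. A plain diff against the previously applied copy shows whether this restart actually changed anything; /var/tmp/minikube/kubeadm.yaml is assumed here as the final path, based only on the .new suffix:

    # Hypothetical comparison; the non-.new path is an assumption, not taken from this log.
    minikube -p no-preload-152830 ssh \
      "sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new"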
	I0617 12:02:47.545714  164809 ssh_runner.go:195] Run: grep 192.168.39.173	control-plane.minikube.internal$ /etc/hosts
	I0617 12:02:47.551976  164809 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.173	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:02:47.565374  164809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:02:47.694699  164809 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:02:47.714017  164809 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830 for IP: 192.168.39.173
	I0617 12:02:47.714044  164809 certs.go:194] generating shared ca certs ...
	I0617 12:02:47.714064  164809 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:02:47.714260  164809 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 12:02:47.714321  164809 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 12:02:47.714335  164809 certs.go:256] generating profile certs ...
	I0617 12:02:47.714419  164809 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/client.key
	I0617 12:02:47.714504  164809 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/apiserver.key.d2d5b47b
	I0617 12:02:47.714547  164809 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/proxy-client.key
	I0617 12:02:47.714655  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 12:02:47.714684  164809 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 12:02:47.714693  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 12:02:47.714719  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 12:02:47.714745  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 12:02:47.714780  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 12:02:47.714815  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:02:47.715578  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 12:02:47.767301  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 12:02:47.804542  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 12:02:47.842670  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 12:02:47.874533  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0617 12:02:47.909752  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 12:02:47.940097  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 12:02:47.965441  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0617 12:02:47.990862  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 12:02:48.015935  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 12:02:48.041408  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 12:02:48.066557  164809 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 12:02:48.084630  164809 ssh_runner.go:195] Run: openssl version
	I0617 12:02:48.091098  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 12:02:48.102447  164809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 12:02:48.107238  164809 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 12:02:48.107299  164809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 12:02:48.113682  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
	I0617 12:02:48.124472  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 12:02:48.135897  164809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 12:02:48.140859  164809 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 12:02:48.140915  164809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 12:02:48.147113  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 12:02:48.158192  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 12:02:48.169483  164809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:48.174241  164809 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:48.174294  164809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:48.180093  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 12:02:48.191082  164809 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 12:02:48.195770  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 12:02:48.201743  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 12:02:48.207452  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 12:02:48.213492  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 12:02:48.219435  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 12:02:48.226202  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
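Each of the openssl x509 ... -checkend 86400 runs above asks whether a certificate expires within the next 24 hours (86,400 seconds). A standard-library sketch of the same check, assuming a single PEM-encoded certificate per file (the expiresWithin helper is illustrative, not minikube's implementation):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within d, which is what `openssl x509 -checkend` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```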
	I0617 12:02:48.232291  164809 kubeadm.go:391] StartCluster: {Name:no-preload-152830 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-152830 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:02:48.232409  164809 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 12:02:48.232448  164809 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:02:48.272909  164809 cri.go:89] found id: ""
	I0617 12:02:48.272972  164809 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0617 12:02:48.284185  164809 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0617 12:02:48.284212  164809 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0617 12:02:48.284221  164809 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0617 12:02:48.284266  164809 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0617 12:02:48.294653  164809 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0617 12:02:48.296091  164809 kubeconfig.go:125] found "no-preload-152830" server: "https://192.168.39.173:8443"
	I0617 12:02:48.298438  164809 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0617 12:02:48.307905  164809 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.173
	I0617 12:02:48.307932  164809 kubeadm.go:1154] stopping kube-system containers ...
	I0617 12:02:48.307945  164809 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0617 12:02:48.307990  164809 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:02:48.356179  164809 cri.go:89] found id: ""
	I0617 12:02:48.356247  164809 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0617 12:02:49.333637  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:51.333927  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:48.127215  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:48.627013  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:49.126439  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:49.626831  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:50.126521  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:50.627178  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:51.126830  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:51.627091  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:52.127343  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:52.626635  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:48.724828  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:51.225321  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:48.377824  164809 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:02:48.389213  164809 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:02:48.389236  164809 kubeadm.go:156] found existing configuration files:
	
	I0617 12:02:48.389287  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:02:48.398559  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:02:48.398605  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:02:48.408243  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:02:48.417407  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:02:48.417451  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:02:48.427333  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:02:48.436224  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:02:48.436278  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:02:48.445378  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:02:48.454119  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:02:48.454170  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
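The cleanup above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that is missing it, so the subsequent kubeadm phases can regenerate them. A sketch of that sweep in Go, assuming the process may remove those files (illustrative only, not minikube's code):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, c := range confs {
		data, err := os.ReadFile(c)
		// Missing files and files that do not point at the expected
		// endpoint are both treated as stale and removed, as in the log.
		if err != nil || !strings.Contains(string(data), endpoint) {
			if rmErr := os.Remove(c); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Fprintln(os.Stderr, rmErr)
			}
		}
	}
}
```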
	I0617 12:02:48.463097  164809 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:02:48.472479  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:48.584018  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:49.392310  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:49.599840  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:49.662845  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
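Rather than a full kubeadm init, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml. A sketch of driving those phases from Go, assuming the binary and config paths shown in the log above:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.30.1/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"

	// Phases in the same order the log runs them.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", cfg)
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", p, err)
			os.Exit(1)
		}
	}
}
```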
	I0617 12:02:49.794357  164809 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:02:49.794459  164809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:50.295507  164809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:50.794968  164809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:50.832967  164809 api_server.go:72] duration metric: took 1.038610813s to wait for apiserver process to appear ...
	I0617 12:02:50.832993  164809 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:02:50.833017  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:50.833494  164809 api_server.go:269] stopped: https://192.168.39.173:8443/healthz: Get "https://192.168.39.173:8443/healthz": dial tcp 192.168.39.173:8443: connect: connection refused
	I0617 12:02:51.333910  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:53.534213  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:02:53.534246  164809 api_server.go:103] status: https://192.168.39.173:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:02:53.534265  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:53.579857  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:02:53.579887  164809 api_server.go:103] status: https://192.168.39.173:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:02:53.833207  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:53.863430  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:02:53.863485  164809 api_server.go:103] status: https://192.168.39.173:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:02:54.333557  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:54.342474  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:02:54.342507  164809 api_server.go:103] status: https://192.168.39.173:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:02:54.834092  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:54.839578  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 200:
	ok
	I0617 12:02:54.854075  164809 api_server.go:141] control plane version: v1.30.1
	I0617 12:02:54.854113  164809 api_server.go:131] duration metric: took 4.021112065s to wait for apiserver health ...
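The healthz wait above retries roughly every 500ms and treats 403 (RBAC bootstrap roles not yet created) and 500 (post-start hooks still pending) responses as "not ready yet", succeeding only once /healthz returns 200. A minimal sketch of such a loop; for brevity it skips TLS verification, which a production client should not do:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// NOTE: verification skipped only to keep the sketch short.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the "ok" case in the log
			}
			// 403 and 500 both mean "keep waiting", as above.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.173:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```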
	I0617 12:02:54.854124  164809 cni.go:84] Creating CNI manager for ""
	I0617 12:02:54.854133  164809 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:02:54.856029  164809 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0617 12:02:53.334898  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:55.834490  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:53.126693  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:53.627110  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:54.126653  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:54.626424  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:55.127113  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:55.627373  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:56.126415  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:56.627329  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:57.126797  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:57.627313  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:53.723948  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:56.225000  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:54.857252  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0617 12:02:54.914636  164809 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0617 12:02:54.961745  164809 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:02:54.975140  164809 system_pods.go:59] 8 kube-system pods found
	I0617 12:02:54.975183  164809 system_pods.go:61] "coredns-7db6d8ff4d-7lfns" [83cf7962-1aa7-4de6-9e77-a03dee972ead] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0617 12:02:54.975192  164809 system_pods.go:61] "etcd-no-preload-152830" [27dace2b-9d7d-44e8-8f86-b20ce49c8afa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0617 12:02:54.975202  164809 system_pods.go:61] "kube-apiserver-no-preload-152830" [c102caaf-2289-4171-8b1f-89df4f6edf39] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0617 12:02:54.975213  164809 system_pods.go:61] "kube-controller-manager-no-preload-152830" [534a8f45-7886-4e12-b728-df686c2f8668] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0617 12:02:54.975220  164809 system_pods.go:61] "kube-proxy-bblgc" [70fa474e-cb6a-4e31-b978-78b47e9952a8] Running
	I0617 12:02:54.975228  164809 system_pods.go:61] "kube-scheduler-no-preload-152830" [17d696bd-55b3-4080-a63d-944216adf1d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0617 12:02:54.975240  164809 system_pods.go:61] "metrics-server-569cc877fc-97tqn" [0ce37c88-fd22-4001-96c4-d0f5239c0fd4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:02:54.975253  164809 system_pods.go:61] "storage-provisioner" [61dafb85-965b-4961-b9e1-e3202795caef] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0617 12:02:54.975268  164809 system_pods.go:74] duration metric: took 13.492652ms to wait for pod list to return data ...
	I0617 12:02:54.975279  164809 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:02:54.980820  164809 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:02:54.980842  164809 node_conditions.go:123] node cpu capacity is 2
	I0617 12:02:54.980854  164809 node_conditions.go:105] duration metric: took 5.568037ms to run NodePressure ...
	I0617 12:02:54.980873  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:55.284669  164809 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0617 12:02:55.289433  164809 kubeadm.go:733] kubelet initialised
	I0617 12:02:55.289453  164809 kubeadm.go:734] duration metric: took 4.759785ms waiting for restarted kubelet to initialise ...
	I0617 12:02:55.289461  164809 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:02:55.294149  164809 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7lfns" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:55.298081  164809 pod_ready.go:97] node "no-preload-152830" hosting pod "coredns-7db6d8ff4d-7lfns" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.298100  164809 pod_ready.go:81] duration metric: took 3.929974ms for pod "coredns-7db6d8ff4d-7lfns" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:55.298109  164809 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-152830" hosting pod "coredns-7db6d8ff4d-7lfns" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.298116  164809 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:55.302552  164809 pod_ready.go:97] node "no-preload-152830" hosting pod "etcd-no-preload-152830" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.302572  164809 pod_ready.go:81] duration metric: took 4.444579ms for pod "etcd-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:55.302580  164809 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-152830" hosting pod "etcd-no-preload-152830" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.302585  164809 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:55.306375  164809 pod_ready.go:97] node "no-preload-152830" hosting pod "kube-apiserver-no-preload-152830" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.306394  164809 pod_ready.go:81] duration metric: took 3.804134ms for pod "kube-apiserver-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:55.306402  164809 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-152830" hosting pod "kube-apiserver-no-preload-152830" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.306407  164809 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:57.313002  164809 pod_ready.go:102] pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace has status "Ready":"False"
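The pod_ready waits above poll each system pod's Ready condition for up to 4 minutes, logging "Ready":"False" until the condition flips. The shape of that wait is a generic poll-until-true loop; a small standard-library sketch with a hypothetical predicate standing in for the real pod check:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor polls check until it returns true or the timeout elapses,
// mirroring the pod_ready wait loops in the log above.
func waitFor(timeout, interval time.Duration, check func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for {
		ok, err := check()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	// Hypothetical predicate standing in for "pod has status Ready=True".
	attempts := 0
	err := waitFor(4*time.Minute, 2*time.Second, func() (bool, error) {
		attempts++
		return attempts > 3, nil
	})
	fmt.Println(err)
}
```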
	I0617 12:02:57.834719  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:00.334129  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:58.126744  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:58.627050  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:59.127300  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:59.626694  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:00.127092  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:00.127182  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:00.166116  165698 cri.go:89] found id: ""
	I0617 12:03:00.166145  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.166153  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:00.166159  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:00.166208  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:00.200990  165698 cri.go:89] found id: ""
	I0617 12:03:00.201020  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.201029  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:00.201034  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:00.201086  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:00.236394  165698 cri.go:89] found id: ""
	I0617 12:03:00.236422  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.236430  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:00.236438  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:00.236496  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:00.274257  165698 cri.go:89] found id: ""
	I0617 12:03:00.274285  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.274293  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:00.274299  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:00.274350  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:00.307425  165698 cri.go:89] found id: ""
	I0617 12:03:00.307452  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.307481  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:00.307490  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:00.307557  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:00.343420  165698 cri.go:89] found id: ""
	I0617 12:03:00.343446  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.343472  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:00.343480  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:00.343541  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:00.378301  165698 cri.go:89] found id: ""
	I0617 12:03:00.378325  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.378333  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:00.378338  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:00.378383  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:00.414985  165698 cri.go:89] found id: ""
	I0617 12:03:00.415011  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.415018  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:00.415033  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:00.415090  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:00.468230  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:00.468262  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:00.481970  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:00.481998  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:00.612881  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:00.612911  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:00.612929  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:00.676110  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:00.676145  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
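Each component probe above shells out to crictl ps -a --quiet --name=<component> and treats empty output as "no container found". A rough Go sketch of the same query, assuming crictl is on PATH and the caller already has sufficient privileges (the containerIDs helper is illustrative only):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the container IDs crictl reports for a name filter,
// the same query the log above issues per control-plane component.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%s: %d containers\n", c, len(ids))
	}
}
```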
	I0617 12:02:58.725617  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:01.225227  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:59.818063  164809 pod_ready.go:102] pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:02.312898  164809 pod_ready.go:102] pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:03.313300  164809 pod_ready.go:92] pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:03:03.313332  164809 pod_ready.go:81] duration metric: took 8.006915719s for pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:03:03.313347  164809 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bblgc" in "kube-system" namespace to be "Ready" ...
	I0617 12:03:03.319094  164809 pod_ready.go:92] pod "kube-proxy-bblgc" in "kube-system" namespace has status "Ready":"True"
	I0617 12:03:03.319116  164809 pod_ready.go:81] duration metric: took 5.762584ms for pod "kube-proxy-bblgc" in "kube-system" namespace to be "Ready" ...
	I0617 12:03:03.319137  164809 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:03:02.833031  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:04.834158  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:07.334894  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:03.216960  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:03.231208  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:03.231277  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:03.267056  165698 cri.go:89] found id: ""
	I0617 12:03:03.267088  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.267096  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:03.267103  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:03.267152  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:03.302797  165698 cri.go:89] found id: ""
	I0617 12:03:03.302832  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.302844  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:03.302852  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:03.302905  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:03.343401  165698 cri.go:89] found id: ""
	I0617 12:03:03.343435  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.343445  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:03.343465  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:03.343530  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:03.380841  165698 cri.go:89] found id: ""
	I0617 12:03:03.380871  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.380883  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:03.380890  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:03.380951  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:03.420098  165698 cri.go:89] found id: ""
	I0617 12:03:03.420130  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.420142  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:03.420150  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:03.420213  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:03.458476  165698 cri.go:89] found id: ""
	I0617 12:03:03.458506  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.458515  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:03.458521  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:03.458586  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:03.497127  165698 cri.go:89] found id: ""
	I0617 12:03:03.497156  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.497164  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:03.497170  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:03.497217  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:03.538759  165698 cri.go:89] found id: ""
	I0617 12:03:03.538794  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.538806  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:03.538825  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:03.538841  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:03.584701  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:03.584743  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:03.636981  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:03.637030  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:03.670032  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:03.670077  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:03.757012  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:03.757038  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:03.757056  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:06.327680  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:06.341998  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:06.342068  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:06.383353  165698 cri.go:89] found id: ""
	I0617 12:03:06.383385  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.383394  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:06.383400  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:06.383448  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:06.418806  165698 cri.go:89] found id: ""
	I0617 12:03:06.418850  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.418862  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:06.418870  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:06.418945  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:06.458151  165698 cri.go:89] found id: ""
	I0617 12:03:06.458192  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.458204  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:06.458219  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:06.458289  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:06.496607  165698 cri.go:89] found id: ""
	I0617 12:03:06.496637  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.496645  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:06.496651  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:06.496703  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:06.534900  165698 cri.go:89] found id: ""
	I0617 12:03:06.534938  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.534951  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:06.534959  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:06.535017  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:06.572388  165698 cri.go:89] found id: ""
	I0617 12:03:06.572413  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.572422  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:06.572428  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:06.572496  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:06.608072  165698 cri.go:89] found id: ""
	I0617 12:03:06.608104  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.608115  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:06.608121  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:06.608175  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:06.647727  165698 cri.go:89] found id: ""
	I0617 12:03:06.647760  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.647772  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:06.647784  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:06.647800  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:06.720887  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:06.720919  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:06.761128  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:06.761153  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:06.815524  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:06.815557  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:06.830275  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:06.830304  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:06.907861  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:03.725650  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:06.225601  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:05.327062  164809 pod_ready.go:102] pod "kube-scheduler-no-preload-152830" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:07.325033  164809 pod_ready.go:92] pod "kube-scheduler-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:03:07.325061  164809 pod_ready.go:81] duration metric: took 4.005914462s for pod "kube-scheduler-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:03:07.325072  164809 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace to be "Ready" ...
	I0617 12:03:09.835374  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:12.334481  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:09.408117  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:09.420916  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:09.420978  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:09.453830  165698 cri.go:89] found id: ""
	I0617 12:03:09.453860  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.453870  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:09.453878  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:09.453937  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:09.492721  165698 cri.go:89] found id: ""
	I0617 12:03:09.492756  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.492766  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:09.492775  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:09.492849  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:09.530956  165698 cri.go:89] found id: ""
	I0617 12:03:09.530984  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.530995  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:09.531001  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:09.531067  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:09.571534  165698 cri.go:89] found id: ""
	I0617 12:03:09.571564  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.571576  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:09.571584  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:09.571646  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:09.609740  165698 cri.go:89] found id: ""
	I0617 12:03:09.609776  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.609788  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:09.609797  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:09.609864  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:09.649958  165698 cri.go:89] found id: ""
	I0617 12:03:09.649998  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.650010  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:09.650020  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:09.650087  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:09.706495  165698 cri.go:89] found id: ""
	I0617 12:03:09.706532  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.706544  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:09.706553  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:09.706638  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:09.742513  165698 cri.go:89] found id: ""
	I0617 12:03:09.742541  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.742549  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:09.742559  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:09.742571  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:09.756470  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:09.756502  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:09.840878  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:09.840897  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:09.840913  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:09.922329  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:09.922370  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:09.967536  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:09.967573  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:12.521031  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:12.534507  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:12.534595  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:12.569895  165698 cri.go:89] found id: ""
	I0617 12:03:12.569930  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.569942  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:12.569950  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:12.570005  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:12.606857  165698 cri.go:89] found id: ""
	I0617 12:03:12.606888  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.606900  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:12.606922  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:12.606998  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:12.640781  165698 cri.go:89] found id: ""
	I0617 12:03:12.640807  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.640818  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:12.640826  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:12.640910  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:12.674097  165698 cri.go:89] found id: ""
	I0617 12:03:12.674124  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.674134  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:12.674142  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:12.674201  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:12.708662  165698 cri.go:89] found id: ""
	I0617 12:03:12.708689  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.708699  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:12.708707  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:12.708791  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:12.744891  165698 cri.go:89] found id: ""
	I0617 12:03:12.744927  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.744938  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:12.744947  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:12.745010  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:12.778440  165698 cri.go:89] found id: ""
	I0617 12:03:12.778466  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.778474  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:12.778480  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:12.778528  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:12.814733  165698 cri.go:89] found id: ""
	I0617 12:03:12.814762  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.814770  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:12.814780  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:12.814820  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:12.887741  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:12.887762  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:12.887775  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:12.968439  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:12.968476  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:08.725485  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:11.224357  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:09.331004  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:11.331666  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:13.332269  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:14.335086  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:16.836397  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:13.008926  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:13.008955  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:13.060432  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:13.060468  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:15.575450  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:15.589178  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:15.589244  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:15.625554  165698 cri.go:89] found id: ""
	I0617 12:03:15.625589  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.625601  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:15.625608  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:15.625668  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:15.659023  165698 cri.go:89] found id: ""
	I0617 12:03:15.659054  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.659066  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:15.659074  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:15.659138  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:15.693777  165698 cri.go:89] found id: ""
	I0617 12:03:15.693803  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.693811  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:15.693817  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:15.693875  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:15.729098  165698 cri.go:89] found id: ""
	I0617 12:03:15.729133  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.729141  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:15.729147  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:15.729194  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:15.762639  165698 cri.go:89] found id: ""
	I0617 12:03:15.762668  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.762679  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:15.762687  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:15.762744  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:15.797446  165698 cri.go:89] found id: ""
	I0617 12:03:15.797475  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.797484  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:15.797489  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:15.797537  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:15.832464  165698 cri.go:89] found id: ""
	I0617 12:03:15.832503  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.832513  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:15.832521  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:15.832579  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:15.867868  165698 cri.go:89] found id: ""
	I0617 12:03:15.867898  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.867906  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:15.867916  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:15.867928  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:15.882151  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:15.882181  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:15.946642  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:15.946666  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:15.946682  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:16.027062  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:16.027098  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:16.082704  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:16.082735  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:13.725854  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:16.225670  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:15.333470  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:17.832368  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:19.334102  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:21.334529  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:18.651554  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:18.665096  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:18.665166  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:18.703099  165698 cri.go:89] found id: ""
	I0617 12:03:18.703127  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.703138  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:18.703147  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:18.703210  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:18.737945  165698 cri.go:89] found id: ""
	I0617 12:03:18.737985  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.737997  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:18.738005  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:18.738079  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:18.777145  165698 cri.go:89] found id: ""
	I0617 12:03:18.777172  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.777181  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:18.777187  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:18.777255  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:18.813171  165698 cri.go:89] found id: ""
	I0617 12:03:18.813198  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.813207  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:18.813213  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:18.813270  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:18.854459  165698 cri.go:89] found id: ""
	I0617 12:03:18.854490  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.854501  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:18.854510  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:18.854607  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:18.893668  165698 cri.go:89] found id: ""
	I0617 12:03:18.893703  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.893712  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:18.893718  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:18.893796  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:18.928919  165698 cri.go:89] found id: ""
	I0617 12:03:18.928971  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.928983  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:18.928993  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:18.929068  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:18.965770  165698 cri.go:89] found id: ""
	I0617 12:03:18.965800  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.965808  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:18.965817  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:18.965829  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:19.020348  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:19.020392  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:19.034815  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:19.034845  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:19.109617  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:19.109643  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:19.109660  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:19.186843  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:19.186890  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:21.732720  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:21.747032  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:21.747113  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:21.789962  165698 cri.go:89] found id: ""
	I0617 12:03:21.789991  165698 logs.go:276] 0 containers: []
	W0617 12:03:21.789999  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:21.790011  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:21.790066  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:21.833865  165698 cri.go:89] found id: ""
	I0617 12:03:21.833903  165698 logs.go:276] 0 containers: []
	W0617 12:03:21.833913  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:21.833921  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:21.833985  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:21.903891  165698 cri.go:89] found id: ""
	I0617 12:03:21.903929  165698 logs.go:276] 0 containers: []
	W0617 12:03:21.903941  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:21.903950  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:21.904020  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:21.941369  165698 cri.go:89] found id: ""
	I0617 12:03:21.941396  165698 logs.go:276] 0 containers: []
	W0617 12:03:21.941407  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:21.941415  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:21.941473  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:21.977767  165698 cri.go:89] found id: ""
	I0617 12:03:21.977797  165698 logs.go:276] 0 containers: []
	W0617 12:03:21.977808  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:21.977817  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:21.977880  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:22.016422  165698 cri.go:89] found id: ""
	I0617 12:03:22.016450  165698 logs.go:276] 0 containers: []
	W0617 12:03:22.016463  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:22.016471  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:22.016536  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:22.056871  165698 cri.go:89] found id: ""
	I0617 12:03:22.056904  165698 logs.go:276] 0 containers: []
	W0617 12:03:22.056914  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:22.056922  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:22.056982  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:22.093244  165698 cri.go:89] found id: ""
	I0617 12:03:22.093288  165698 logs.go:276] 0 containers: []
	W0617 12:03:22.093300  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:22.093313  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:22.093331  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:22.144722  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:22.144756  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:22.159047  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:22.159084  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:22.232077  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:22.232100  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:22.232112  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:22.308241  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:22.308276  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:18.724648  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:21.224616  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:19.832543  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:21.838952  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:23.834640  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:26.336770  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:24.851740  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:24.866597  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:24.866659  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:24.902847  165698 cri.go:89] found id: ""
	I0617 12:03:24.902879  165698 logs.go:276] 0 containers: []
	W0617 12:03:24.902892  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:24.902900  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:24.902973  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:24.940042  165698 cri.go:89] found id: ""
	I0617 12:03:24.940079  165698 logs.go:276] 0 containers: []
	W0617 12:03:24.940088  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:24.940094  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:24.940150  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:24.975160  165698 cri.go:89] found id: ""
	I0617 12:03:24.975190  165698 logs.go:276] 0 containers: []
	W0617 12:03:24.975202  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:24.975211  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:24.975280  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:25.012618  165698 cri.go:89] found id: ""
	I0617 12:03:25.012649  165698 logs.go:276] 0 containers: []
	W0617 12:03:25.012657  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:25.012663  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:25.012712  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:25.051166  165698 cri.go:89] found id: ""
	I0617 12:03:25.051210  165698 logs.go:276] 0 containers: []
	W0617 12:03:25.051223  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:25.051230  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:25.051309  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:25.090112  165698 cri.go:89] found id: ""
	I0617 12:03:25.090144  165698 logs.go:276] 0 containers: []
	W0617 12:03:25.090156  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:25.090164  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:25.090230  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:25.133258  165698 cri.go:89] found id: ""
	I0617 12:03:25.133285  165698 logs.go:276] 0 containers: []
	W0617 12:03:25.133294  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:25.133301  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:25.133366  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:25.177445  165698 cri.go:89] found id: ""
	I0617 12:03:25.177473  165698 logs.go:276] 0 containers: []
	W0617 12:03:25.177481  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:25.177490  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:25.177505  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:25.250685  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:25.250710  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:25.250727  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:25.335554  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:25.335586  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:25.377058  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:25.377093  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:25.431425  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:25.431471  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:27.945063  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:27.959396  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:27.959469  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:23.725126  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:26.224114  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:28.224895  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:23.840550  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:26.333142  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:28.334577  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:28.337133  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:30.834142  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:27.994554  165698 cri.go:89] found id: ""
	I0617 12:03:27.994582  165698 logs.go:276] 0 containers: []
	W0617 12:03:27.994591  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:27.994598  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:27.994660  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:28.030168  165698 cri.go:89] found id: ""
	I0617 12:03:28.030200  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.030208  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:28.030215  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:28.030263  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:28.066213  165698 cri.go:89] found id: ""
	I0617 12:03:28.066244  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.066255  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:28.066261  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:28.066322  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:28.102855  165698 cri.go:89] found id: ""
	I0617 12:03:28.102880  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.102888  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:28.102894  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:28.102942  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:28.138698  165698 cri.go:89] found id: ""
	I0617 12:03:28.138734  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.138748  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:28.138755  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:28.138815  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:28.173114  165698 cri.go:89] found id: ""
	I0617 12:03:28.173140  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.173148  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:28.173154  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:28.173213  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:28.208901  165698 cri.go:89] found id: ""
	I0617 12:03:28.208936  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.208947  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:28.208955  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:28.209016  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:28.244634  165698 cri.go:89] found id: ""
	I0617 12:03:28.244667  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.244678  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:28.244687  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:28.244699  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:28.300303  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:28.300336  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:28.314227  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:28.314272  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:28.394322  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:28.394350  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:28.394367  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:28.483381  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:28.483413  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:31.026433  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:31.040820  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:31.040888  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:31.086409  165698 cri.go:89] found id: ""
	I0617 12:03:31.086440  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.086453  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:31.086461  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:31.086548  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:31.122810  165698 cri.go:89] found id: ""
	I0617 12:03:31.122836  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.122843  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:31.122849  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:31.122910  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:31.157634  165698 cri.go:89] found id: ""
	I0617 12:03:31.157669  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.157680  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:31.157687  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:31.157750  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:31.191498  165698 cri.go:89] found id: ""
	I0617 12:03:31.191529  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.191541  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:31.191549  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:31.191619  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:31.225575  165698 cri.go:89] found id: ""
	I0617 12:03:31.225599  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.225609  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:31.225616  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:31.225670  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:31.269780  165698 cri.go:89] found id: ""
	I0617 12:03:31.269810  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.269819  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:31.269825  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:31.269874  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:31.307689  165698 cri.go:89] found id: ""
	I0617 12:03:31.307717  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.307726  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:31.307733  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:31.307789  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:31.344160  165698 cri.go:89] found id: ""
	I0617 12:03:31.344190  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.344200  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:31.344210  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:31.344223  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:31.397627  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:31.397667  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:31.411316  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:31.411347  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:31.486258  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:31.486280  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:31.486297  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:31.568067  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:31.568106  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:30.725183  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:33.224294  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:30.834377  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:33.333070  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:33.335067  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:35.335626  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:37.336117  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:34.111424  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:34.127178  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:34.127255  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:34.165900  165698 cri.go:89] found id: ""
	I0617 12:03:34.165936  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.165947  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:34.165955  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:34.166042  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:34.203556  165698 cri.go:89] found id: ""
	I0617 12:03:34.203588  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.203597  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:34.203606  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:34.203659  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:34.243418  165698 cri.go:89] found id: ""
	I0617 12:03:34.243478  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.243490  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:34.243499  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:34.243661  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:34.281542  165698 cri.go:89] found id: ""
	I0617 12:03:34.281569  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.281577  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:34.281582  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:34.281635  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:34.316304  165698 cri.go:89] found id: ""
	I0617 12:03:34.316333  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.316341  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:34.316347  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:34.316403  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:34.357416  165698 cri.go:89] found id: ""
	I0617 12:03:34.357455  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.357467  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:34.357476  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:34.357547  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:34.392069  165698 cri.go:89] found id: ""
	I0617 12:03:34.392101  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.392112  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:34.392120  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:34.392185  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:34.427203  165698 cri.go:89] found id: ""
	I0617 12:03:34.427235  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.427247  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:34.427258  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:34.427317  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:34.441346  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:34.441375  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:34.519306  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:34.519331  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:34.519349  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:34.598802  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:34.598843  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:34.637521  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:34.637554  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:37.191259  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:37.205882  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:37.205947  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:37.242175  165698 cri.go:89] found id: ""
	I0617 12:03:37.242202  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.242209  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:37.242215  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:37.242278  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:37.278004  165698 cri.go:89] found id: ""
	I0617 12:03:37.278029  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.278037  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:37.278043  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:37.278091  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:37.322148  165698 cri.go:89] found id: ""
	I0617 12:03:37.322179  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.322190  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:37.322198  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:37.322259  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:37.358612  165698 cri.go:89] found id: ""
	I0617 12:03:37.358638  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.358649  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:37.358657  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:37.358718  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:37.393070  165698 cri.go:89] found id: ""
	I0617 12:03:37.393104  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.393115  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:37.393123  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:37.393187  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:37.429420  165698 cri.go:89] found id: ""
	I0617 12:03:37.429452  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.429465  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:37.429475  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:37.429541  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:37.464485  165698 cri.go:89] found id: ""
	I0617 12:03:37.464509  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.464518  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:37.464523  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:37.464584  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:37.501283  165698 cri.go:89] found id: ""
	I0617 12:03:37.501308  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.501316  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:37.501326  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:37.501338  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:37.552848  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:37.552889  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:37.566715  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:37.566746  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:37.643560  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:37.643584  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:37.643601  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:37.722895  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:37.722935  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:35.225442  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:37.225962  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:35.836693  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:38.332297  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:39.834655  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:42.333686  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:40.268199  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:40.281832  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:40.281905  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:40.317094  165698 cri.go:89] found id: ""
	I0617 12:03:40.317137  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.317150  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:40.317159  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:40.317229  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:40.355786  165698 cri.go:89] found id: ""
	I0617 12:03:40.355819  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.355829  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:40.355836  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:40.355903  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:40.394282  165698 cri.go:89] found id: ""
	I0617 12:03:40.394312  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.394323  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:40.394332  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:40.394388  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:40.433773  165698 cri.go:89] found id: ""
	I0617 12:03:40.433806  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.433817  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:40.433825  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:40.433875  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:40.469937  165698 cri.go:89] found id: ""
	I0617 12:03:40.469973  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.469985  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:40.469998  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:40.470067  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:40.503565  165698 cri.go:89] found id: ""
	I0617 12:03:40.503590  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.503599  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:40.503605  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:40.503654  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:40.538349  165698 cri.go:89] found id: ""
	I0617 12:03:40.538383  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.538394  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:40.538402  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:40.538461  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:40.576036  165698 cri.go:89] found id: ""
	I0617 12:03:40.576066  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.576075  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:40.576085  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:40.576100  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:40.617804  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:40.617833  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:40.668126  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:40.668162  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:40.682618  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:40.682655  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:40.759597  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:40.759619  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:40.759638  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:39.725534  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:42.223320  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:40.336855  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:42.832597  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:44.334430  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:46.835809  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:43.343404  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:43.357886  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:43.357953  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:43.398262  165698 cri.go:89] found id: ""
	I0617 12:03:43.398290  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.398301  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:43.398310  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:43.398370  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:43.432241  165698 cri.go:89] found id: ""
	I0617 12:03:43.432272  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.432280  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:43.432289  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:43.432348  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:43.466210  165698 cri.go:89] found id: ""
	I0617 12:03:43.466234  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.466241  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:43.466247  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:43.466294  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:43.501677  165698 cri.go:89] found id: ""
	I0617 12:03:43.501711  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.501723  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:43.501731  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:43.501793  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:43.541826  165698 cri.go:89] found id: ""
	I0617 12:03:43.541860  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.541870  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:43.541876  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:43.541941  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:43.576940  165698 cri.go:89] found id: ""
	I0617 12:03:43.576962  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.576970  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:43.576975  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:43.577022  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:43.612592  165698 cri.go:89] found id: ""
	I0617 12:03:43.612627  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.612635  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:43.612643  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:43.612694  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:43.647141  165698 cri.go:89] found id: ""
	I0617 12:03:43.647176  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.647188  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:43.647202  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:43.647220  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:43.698248  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:43.698283  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:43.711686  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:43.711714  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:43.787077  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:43.787101  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:43.787115  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:43.861417  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:43.861455  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:46.402594  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:46.417108  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:46.417185  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:46.453910  165698 cri.go:89] found id: ""
	I0617 12:03:46.453941  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.453952  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:46.453960  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:46.454020  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:46.487239  165698 cri.go:89] found id: ""
	I0617 12:03:46.487268  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.487280  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:46.487288  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:46.487353  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:46.521824  165698 cri.go:89] found id: ""
	I0617 12:03:46.521850  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.521859  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:46.521866  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:46.521929  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:46.557247  165698 cri.go:89] found id: ""
	I0617 12:03:46.557274  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.557282  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:46.557289  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:46.557350  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:46.600354  165698 cri.go:89] found id: ""
	I0617 12:03:46.600383  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.600393  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:46.600402  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:46.600477  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:46.638153  165698 cri.go:89] found id: ""
	I0617 12:03:46.638180  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.638189  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:46.638197  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:46.638255  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:46.672636  165698 cri.go:89] found id: ""
	I0617 12:03:46.672661  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.672669  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:46.672675  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:46.672721  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:46.706431  165698 cri.go:89] found id: ""
	I0617 12:03:46.706468  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.706481  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:46.706493  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:46.706509  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:46.720796  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:46.720842  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:46.801343  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:46.801365  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:46.801379  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:46.883651  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:46.883696  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:46.928594  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:46.928630  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:44.224037  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:46.224076  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:48.224472  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:45.332811  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:47.832461  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:49.334678  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:51.833994  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:49.480413  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:49.495558  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:49.495656  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:49.533281  165698 cri.go:89] found id: ""
	I0617 12:03:49.533313  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.533323  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:49.533330  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:49.533396  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:49.573430  165698 cri.go:89] found id: ""
	I0617 12:03:49.573457  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.573465  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:49.573472  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:49.573532  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:49.608669  165698 cri.go:89] found id: ""
	I0617 12:03:49.608697  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.608705  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:49.608711  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:49.608767  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:49.643411  165698 cri.go:89] found id: ""
	I0617 12:03:49.643449  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.643481  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:49.643490  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:49.643557  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:49.680039  165698 cri.go:89] found id: ""
	I0617 12:03:49.680071  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.680082  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:49.680090  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:49.680148  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:49.717169  165698 cri.go:89] found id: ""
	I0617 12:03:49.717195  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.717203  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:49.717209  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:49.717262  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:49.754585  165698 cri.go:89] found id: ""
	I0617 12:03:49.754615  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.754625  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:49.754633  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:49.754697  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:49.796040  165698 cri.go:89] found id: ""
	I0617 12:03:49.796074  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.796085  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:49.796097  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:49.796112  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:49.873496  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:49.873530  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:49.873547  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:49.961883  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:49.961925  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:50.002975  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:50.003004  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:50.054185  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:50.054224  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:52.568557  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:52.584264  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:52.584337  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:52.622474  165698 cri.go:89] found id: ""
	I0617 12:03:52.622501  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.622509  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:52.622516  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:52.622566  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:52.661012  165698 cri.go:89] found id: ""
	I0617 12:03:52.661045  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.661057  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:52.661066  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:52.661133  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:52.700950  165698 cri.go:89] found id: ""
	I0617 12:03:52.700986  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.700998  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:52.701006  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:52.701075  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:52.735663  165698 cri.go:89] found id: ""
	I0617 12:03:52.735689  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.735696  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:52.735702  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:52.735768  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:52.776540  165698 cri.go:89] found id: ""
	I0617 12:03:52.776568  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.776580  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:52.776589  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:52.776642  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:52.812439  165698 cri.go:89] found id: ""
	I0617 12:03:52.812474  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.812493  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:52.812503  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:52.812567  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:52.849233  165698 cri.go:89] found id: ""
	I0617 12:03:52.849263  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.849273  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:52.849281  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:52.849343  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:52.885365  165698 cri.go:89] found id: ""
	I0617 12:03:52.885395  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.885406  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:52.885419  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:52.885434  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:52.941521  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:52.941553  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:52.955958  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:52.955997  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:03:50.224702  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:52.724247  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:50.332871  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:52.832386  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:53.834382  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:55.834813  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	W0617 12:03:53.029254  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:53.029278  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:53.029291  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:53.104391  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:53.104425  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:55.648578  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:55.662143  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:55.662205  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:55.697623  165698 cri.go:89] found id: ""
	I0617 12:03:55.697662  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.697674  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:55.697682  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:55.697751  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:55.734132  165698 cri.go:89] found id: ""
	I0617 12:03:55.734171  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.734184  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:55.734192  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:55.734265  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:55.774178  165698 cri.go:89] found id: ""
	I0617 12:03:55.774212  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.774222  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:55.774231  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:55.774296  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:55.816427  165698 cri.go:89] found id: ""
	I0617 12:03:55.816460  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.816471  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:55.816480  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:55.816546  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:55.860413  165698 cri.go:89] found id: ""
	I0617 12:03:55.860446  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.860457  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:55.860465  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:55.860532  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:55.897577  165698 cri.go:89] found id: ""
	I0617 12:03:55.897612  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.897622  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:55.897629  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:55.897682  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:55.934163  165698 cri.go:89] found id: ""
	I0617 12:03:55.934200  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.934212  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:55.934220  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:55.934291  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:55.972781  165698 cri.go:89] found id: ""
	I0617 12:03:55.972827  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.972840  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:55.972852  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:55.972867  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:56.027292  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:56.027332  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:56.042304  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:56.042336  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:56.115129  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:56.115159  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:56.115176  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:56.194161  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:56.194200  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:54.728169  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:57.225361  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:54.837170  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:57.333566  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:58.335846  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:00.833987  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:58.734681  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:58.748467  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:58.748534  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:58.786191  165698 cri.go:89] found id: ""
	I0617 12:03:58.786221  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.786232  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:58.786239  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:58.786302  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:58.822076  165698 cri.go:89] found id: ""
	I0617 12:03:58.822103  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.822125  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:58.822134  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:58.822199  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:58.858830  165698 cri.go:89] found id: ""
	I0617 12:03:58.858859  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.858867  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:58.858873  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:58.858927  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:58.898802  165698 cri.go:89] found id: ""
	I0617 12:03:58.898830  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.898838  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:58.898844  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:58.898891  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:58.933234  165698 cri.go:89] found id: ""
	I0617 12:03:58.933269  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.933281  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:58.933289  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:58.933355  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:58.973719  165698 cri.go:89] found id: ""
	I0617 12:03:58.973753  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.973766  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:58.973773  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:58.973847  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:59.010671  165698 cri.go:89] found id: ""
	I0617 12:03:59.010722  165698 logs.go:276] 0 containers: []
	W0617 12:03:59.010734  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:59.010741  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:59.010805  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:59.047318  165698 cri.go:89] found id: ""
	I0617 12:03:59.047347  165698 logs.go:276] 0 containers: []
	W0617 12:03:59.047359  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:59.047372  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:59.047389  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:59.097778  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:59.097815  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:59.111615  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:59.111646  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:59.193172  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:59.193195  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:59.193207  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:59.268147  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:59.268182  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:01.807585  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:01.821634  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:01.821694  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:01.857610  165698 cri.go:89] found id: ""
	I0617 12:04:01.857637  165698 logs.go:276] 0 containers: []
	W0617 12:04:01.857647  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:01.857654  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:01.857710  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:01.893229  165698 cri.go:89] found id: ""
	I0617 12:04:01.893253  165698 logs.go:276] 0 containers: []
	W0617 12:04:01.893261  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:01.893267  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:01.893324  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:01.926916  165698 cri.go:89] found id: ""
	I0617 12:04:01.926940  165698 logs.go:276] 0 containers: []
	W0617 12:04:01.926950  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:01.926958  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:01.927017  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:01.961913  165698 cri.go:89] found id: ""
	I0617 12:04:01.961946  165698 logs.go:276] 0 containers: []
	W0617 12:04:01.961957  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:01.961967  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:01.962045  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:01.997084  165698 cri.go:89] found id: ""
	I0617 12:04:01.997111  165698 logs.go:276] 0 containers: []
	W0617 12:04:01.997119  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:01.997125  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:01.997173  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:02.034640  165698 cri.go:89] found id: ""
	I0617 12:04:02.034666  165698 logs.go:276] 0 containers: []
	W0617 12:04:02.034674  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:02.034680  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:02.034744  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:02.085868  165698 cri.go:89] found id: ""
	I0617 12:04:02.085910  165698 logs.go:276] 0 containers: []
	W0617 12:04:02.085920  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:02.085928  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:02.085983  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:02.152460  165698 cri.go:89] found id: ""
	I0617 12:04:02.152487  165698 logs.go:276] 0 containers: []
	W0617 12:04:02.152499  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:02.152513  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:02.152528  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:02.205297  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:02.205344  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:02.222312  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:02.222348  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:02.299934  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:02.299959  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:02.299977  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:02.384008  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:02.384056  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:59.724730  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:02.227215  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:59.833621  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:01.833799  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:02.834076  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:04.836418  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:07.335024  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:04.926889  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:04.940643  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:04.940722  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:04.976246  165698 cri.go:89] found id: ""
	I0617 12:04:04.976275  165698 logs.go:276] 0 containers: []
	W0617 12:04:04.976283  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:04.976289  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:04.976338  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:05.015864  165698 cri.go:89] found id: ""
	I0617 12:04:05.015900  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.015913  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:05.015921  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:05.015985  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:05.054051  165698 cri.go:89] found id: ""
	I0617 12:04:05.054086  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.054099  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:05.054112  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:05.054177  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:05.090320  165698 cri.go:89] found id: ""
	I0617 12:04:05.090358  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.090371  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:05.090380  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:05.090438  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:05.126963  165698 cri.go:89] found id: ""
	I0617 12:04:05.126998  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.127008  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:05.127015  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:05.127087  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:05.162565  165698 cri.go:89] found id: ""
	I0617 12:04:05.162600  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.162611  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:05.162620  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:05.162674  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:05.195706  165698 cri.go:89] found id: ""
	I0617 12:04:05.195743  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.195752  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:05.195758  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:05.195826  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:05.236961  165698 cri.go:89] found id: ""
	I0617 12:04:05.236995  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.237006  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:05.237016  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:05.237034  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:05.252754  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:05.252783  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:05.327832  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:05.327870  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:05.327886  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:05.410220  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:05.410271  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:05.451291  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:05.451324  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:04.725172  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:07.223627  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:04.332177  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:06.831712  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:09.834563  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:12.334095  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:08.003058  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:08.016611  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:08.016670  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:08.052947  165698 cri.go:89] found id: ""
	I0617 12:04:08.052984  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.052996  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:08.053004  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:08.053057  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:08.086668  165698 cri.go:89] found id: ""
	I0617 12:04:08.086695  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.086704  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:08.086711  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:08.086773  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:08.127708  165698 cri.go:89] found id: ""
	I0617 12:04:08.127738  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.127746  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:08.127752  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:08.127814  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:08.162930  165698 cri.go:89] found id: ""
	I0617 12:04:08.162959  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.162966  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:08.162973  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:08.163026  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:08.196757  165698 cri.go:89] found id: ""
	I0617 12:04:08.196782  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.196791  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:08.196797  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:08.196851  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:08.229976  165698 cri.go:89] found id: ""
	I0617 12:04:08.230006  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.230016  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:08.230022  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:08.230083  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:08.265969  165698 cri.go:89] found id: ""
	I0617 12:04:08.266000  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.266007  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:08.266013  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:08.266071  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:08.299690  165698 cri.go:89] found id: ""
	I0617 12:04:08.299717  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.299728  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:08.299741  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:08.299761  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:08.353399  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:08.353429  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:08.366713  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:08.366739  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:08.442727  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:08.442768  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:08.442786  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:08.527832  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:08.527875  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:11.073616  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:11.087085  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:11.087172  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:11.121706  165698 cri.go:89] found id: ""
	I0617 12:04:11.121745  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.121756  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:11.121765  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:11.121839  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:11.157601  165698 cri.go:89] found id: ""
	I0617 12:04:11.157637  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.157648  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:11.157657  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:11.157719  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:11.191929  165698 cri.go:89] found id: ""
	I0617 12:04:11.191963  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.191975  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:11.191983  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:11.192045  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:11.228391  165698 cri.go:89] found id: ""
	I0617 12:04:11.228416  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.228429  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:11.228437  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:11.228497  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:11.261880  165698 cri.go:89] found id: ""
	I0617 12:04:11.261911  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.261924  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:11.261932  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:11.261998  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:11.294615  165698 cri.go:89] found id: ""
	I0617 12:04:11.294663  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.294676  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:11.294684  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:11.294745  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:11.332813  165698 cri.go:89] found id: ""
	I0617 12:04:11.332840  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.332847  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:11.332854  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:11.332911  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:11.369032  165698 cri.go:89] found id: ""
	I0617 12:04:11.369060  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.369068  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:11.369078  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:11.369090  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:11.422522  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:11.422555  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:11.436961  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:11.436990  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:11.508679  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:11.508700  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:11.508713  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:11.586574  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:11.586610  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:09.224727  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:11.225763  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:09.330868  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:11.332256  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:14.335171  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:16.836514  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:14.127034  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:14.143228  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:14.143306  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:14.178368  165698 cri.go:89] found id: ""
	I0617 12:04:14.178396  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.178405  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:14.178410  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:14.178459  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:14.209971  165698 cri.go:89] found id: ""
	I0617 12:04:14.210001  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.210010  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:14.210015  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:14.210065  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:14.244888  165698 cri.go:89] found id: ""
	I0617 12:04:14.244922  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.244933  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:14.244940  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:14.244999  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:14.277875  165698 cri.go:89] found id: ""
	I0617 12:04:14.277904  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.277914  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:14.277922  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:14.277983  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:14.312698  165698 cri.go:89] found id: ""
	I0617 12:04:14.312724  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.312733  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:14.312739  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:14.312789  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:14.350952  165698 cri.go:89] found id: ""
	I0617 12:04:14.350977  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.350987  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:14.350993  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:14.351056  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:14.389211  165698 cri.go:89] found id: ""
	I0617 12:04:14.389235  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.389243  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:14.389250  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:14.389297  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:14.426171  165698 cri.go:89] found id: ""
	I0617 12:04:14.426200  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.426211  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:14.426224  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:14.426240  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:14.500403  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:14.500430  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:14.500446  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:14.588041  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:14.588078  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:14.631948  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:14.631987  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:14.681859  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:14.681895  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:17.198754  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:17.212612  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:17.212679  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:17.251011  165698 cri.go:89] found id: ""
	I0617 12:04:17.251041  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.251056  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:17.251065  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:17.251128  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:17.282964  165698 cri.go:89] found id: ""
	I0617 12:04:17.282989  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.282998  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:17.283003  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:17.283060  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:17.315570  165698 cri.go:89] found id: ""
	I0617 12:04:17.315601  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.315622  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:17.315630  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:17.315691  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:17.351186  165698 cri.go:89] found id: ""
	I0617 12:04:17.351212  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.351221  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:17.351228  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:17.351287  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:17.385609  165698 cri.go:89] found id: ""
	I0617 12:04:17.385653  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.385665  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:17.385673  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:17.385741  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:17.423890  165698 cri.go:89] found id: ""
	I0617 12:04:17.423923  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.423935  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:17.423944  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:17.424000  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:17.459543  165698 cri.go:89] found id: ""
	I0617 12:04:17.459575  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.459584  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:17.459592  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:17.459660  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:17.495554  165698 cri.go:89] found id: ""
	I0617 12:04:17.495584  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.495594  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:17.495606  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:17.495632  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:17.547835  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:17.547881  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:17.562391  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:17.562422  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:17.635335  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:17.635368  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:17.635387  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:17.708946  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:17.708988  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:13.724618  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:16.224689  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:13.832533  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:15.833210  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:17.841693  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:19.336775  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:21.835598  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:20.249833  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:20.266234  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:20.266301  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:20.307380  165698 cri.go:89] found id: ""
	I0617 12:04:20.307415  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.307424  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:20.307431  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:20.307508  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:20.347193  165698 cri.go:89] found id: ""
	I0617 12:04:20.347225  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.347235  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:20.347243  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:20.347311  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:20.382673  165698 cri.go:89] found id: ""
	I0617 12:04:20.382711  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.382724  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:20.382732  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:20.382800  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:20.419542  165698 cri.go:89] found id: ""
	I0617 12:04:20.419573  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.419582  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:20.419588  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:20.419652  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:20.454586  165698 cri.go:89] found id: ""
	I0617 12:04:20.454618  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.454629  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:20.454636  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:20.454708  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:20.501094  165698 cri.go:89] found id: ""
	I0617 12:04:20.501123  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.501131  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:20.501137  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:20.501190  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:20.537472  165698 cri.go:89] found id: ""
	I0617 12:04:20.537512  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.537524  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:20.537532  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:20.537597  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:20.571477  165698 cri.go:89] found id: ""
	I0617 12:04:20.571509  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.571519  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:20.571532  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:20.571550  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:20.611503  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:20.611540  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:20.663868  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:20.663905  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:20.677679  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:20.677704  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:20.753645  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:20.753663  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:20.753689  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:18.725428  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:21.224314  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:20.333214  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:22.333294  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:24.333835  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:26.335344  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:23.335535  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:23.349700  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:23.349766  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:23.384327  165698 cri.go:89] found id: ""
	I0617 12:04:23.384351  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.384358  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:23.384364  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:23.384417  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:23.427145  165698 cri.go:89] found id: ""
	I0617 12:04:23.427179  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.427190  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:23.427197  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:23.427254  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:23.461484  165698 cri.go:89] found id: ""
	I0617 12:04:23.461511  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.461522  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:23.461532  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:23.461600  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:23.501292  165698 cri.go:89] found id: ""
	I0617 12:04:23.501324  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.501334  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:23.501342  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:23.501407  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:23.537605  165698 cri.go:89] found id: ""
	I0617 12:04:23.537639  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.537649  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:23.537654  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:23.537727  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:23.576580  165698 cri.go:89] found id: ""
	I0617 12:04:23.576608  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.576616  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:23.576623  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:23.576685  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:23.613124  165698 cri.go:89] found id: ""
	I0617 12:04:23.613153  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.613161  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:23.613167  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:23.613216  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:23.648662  165698 cri.go:89] found id: ""
	I0617 12:04:23.648688  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.648695  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:23.648705  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:23.648717  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:23.661737  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:23.661762  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:23.732512  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:23.732531  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:23.732547  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:23.810165  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:23.810207  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:23.855099  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:23.855136  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:26.406038  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:26.422243  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:26.422323  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:26.460959  165698 cri.go:89] found id: ""
	I0617 12:04:26.460984  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.460994  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:26.461002  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:26.461078  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:26.498324  165698 cri.go:89] found id: ""
	I0617 12:04:26.498350  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.498362  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:26.498370  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:26.498435  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:26.535299  165698 cri.go:89] found id: ""
	I0617 12:04:26.535335  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.535346  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:26.535354  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:26.535417  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:26.574623  165698 cri.go:89] found id: ""
	I0617 12:04:26.574657  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.574668  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:26.574677  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:26.574738  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:26.611576  165698 cri.go:89] found id: ""
	I0617 12:04:26.611607  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.611615  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:26.611621  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:26.611672  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:26.645664  165698 cri.go:89] found id: ""
	I0617 12:04:26.645692  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.645700  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:26.645706  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:26.645755  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:26.679442  165698 cri.go:89] found id: ""
	I0617 12:04:26.679477  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.679488  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:26.679495  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:26.679544  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:26.713512  165698 cri.go:89] found id: ""
	I0617 12:04:26.713543  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.713551  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:26.713563  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:26.713584  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:26.770823  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:26.770853  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:26.784829  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:26.784858  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:26.868457  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:26.868480  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:26.868498  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:26.948522  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:26.948561  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:23.725626  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:26.224874  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:24.830639  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:26.836648  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:28.835682  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:31.335891  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:29.490891  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:29.504202  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:29.504273  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:29.544091  165698 cri.go:89] found id: ""
	I0617 12:04:29.544125  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.544137  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:29.544145  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:29.544203  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:29.581645  165698 cri.go:89] found id: ""
	I0617 12:04:29.581670  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.581679  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:29.581685  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:29.581736  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:29.621410  165698 cri.go:89] found id: ""
	I0617 12:04:29.621437  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.621447  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:29.621455  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:29.621522  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:29.659619  165698 cri.go:89] found id: ""
	I0617 12:04:29.659645  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.659654  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:29.659659  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:29.659718  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:29.698822  165698 cri.go:89] found id: ""
	I0617 12:04:29.698851  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.698859  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:29.698865  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:29.698957  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:29.741648  165698 cri.go:89] found id: ""
	I0617 12:04:29.741673  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.741680  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:29.741686  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:29.741752  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:29.777908  165698 cri.go:89] found id: ""
	I0617 12:04:29.777933  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.777941  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:29.777947  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:29.778013  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:29.812290  165698 cri.go:89] found id: ""
	I0617 12:04:29.812318  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.812328  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:29.812340  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:29.812357  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:29.857527  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:29.857552  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:29.916734  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:29.916776  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:29.930988  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:29.931013  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:30.006055  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:30.006080  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:30.006098  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:32.586549  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:32.600139  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:32.600262  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:32.641527  165698 cri.go:89] found id: ""
	I0617 12:04:32.641554  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.641570  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:32.641579  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:32.641635  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:32.687945  165698 cri.go:89] found id: ""
	I0617 12:04:32.687972  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.687981  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:32.687996  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:32.688068  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:32.725586  165698 cri.go:89] found id: ""
	I0617 12:04:32.725618  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.725629  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:32.725639  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:32.725696  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:32.764042  165698 cri.go:89] found id: ""
	I0617 12:04:32.764090  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.764107  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:32.764115  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:32.764183  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:32.800132  165698 cri.go:89] found id: ""
	I0617 12:04:32.800167  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.800180  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:32.800189  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:32.800256  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:32.840313  165698 cri.go:89] found id: ""
	I0617 12:04:32.840348  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.840359  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:32.840367  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:32.840434  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:32.878041  165698 cri.go:89] found id: ""
	I0617 12:04:32.878067  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.878076  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:32.878082  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:32.878134  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:32.913904  165698 cri.go:89] found id: ""
	I0617 12:04:32.913939  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.913950  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:32.913961  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:32.913974  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:04:28.725534  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:31.224885  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:29.330706  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:31.331989  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:33.337062  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:35.834807  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	W0617 12:04:32.987900  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:32.987929  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:32.987947  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:33.060919  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:33.060961  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:33.102602  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:33.102629  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:33.154112  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:33.154161  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:35.669336  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:35.682819  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:35.682907  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:35.717542  165698 cri.go:89] found id: ""
	I0617 12:04:35.717571  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.717579  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:35.717586  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:35.717646  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:35.754454  165698 cri.go:89] found id: ""
	I0617 12:04:35.754483  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.754495  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:35.754503  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:35.754566  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:35.791198  165698 cri.go:89] found id: ""
	I0617 12:04:35.791227  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.791237  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:35.791246  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:35.791309  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:35.826858  165698 cri.go:89] found id: ""
	I0617 12:04:35.826892  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.826903  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:35.826911  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:35.826974  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:35.866817  165698 cri.go:89] found id: ""
	I0617 12:04:35.866845  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.866853  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:35.866861  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:35.866909  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:35.918340  165698 cri.go:89] found id: ""
	I0617 12:04:35.918377  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.918388  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:35.918397  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:35.918466  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:35.960734  165698 cri.go:89] found id: ""
	I0617 12:04:35.960764  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.960774  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:35.960779  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:35.960841  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:36.002392  165698 cri.go:89] found id: ""
	I0617 12:04:36.002426  165698 logs.go:276] 0 containers: []
	W0617 12:04:36.002437  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:36.002449  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:36.002465  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:36.055130  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:36.055163  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:36.069181  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:36.069209  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:36.146078  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:36.146105  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:36.146120  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:36.223763  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:36.223797  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:33.723759  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:35.725954  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:38.225200  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:33.833990  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:36.332152  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:38.332570  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:37.836765  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:40.334594  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:42.336958  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:38.767375  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:38.781301  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:38.781357  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:38.821364  165698 cri.go:89] found id: ""
	I0617 12:04:38.821390  165698 logs.go:276] 0 containers: []
	W0617 12:04:38.821400  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:38.821409  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:38.821472  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:38.860727  165698 cri.go:89] found id: ""
	I0617 12:04:38.860784  165698 logs.go:276] 0 containers: []
	W0617 12:04:38.860796  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:38.860803  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:38.860868  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:38.902932  165698 cri.go:89] found id: ""
	I0617 12:04:38.902968  165698 logs.go:276] 0 containers: []
	W0617 12:04:38.902992  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:38.902999  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:38.903088  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:38.940531  165698 cri.go:89] found id: ""
	I0617 12:04:38.940564  165698 logs.go:276] 0 containers: []
	W0617 12:04:38.940576  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:38.940584  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:38.940649  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:38.975751  165698 cri.go:89] found id: ""
	I0617 12:04:38.975792  165698 logs.go:276] 0 containers: []
	W0617 12:04:38.975827  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:38.975835  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:38.975908  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:39.011156  165698 cri.go:89] found id: ""
	I0617 12:04:39.011196  165698 logs.go:276] 0 containers: []
	W0617 12:04:39.011206  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:39.011213  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:39.011269  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:39.049266  165698 cri.go:89] found id: ""
	I0617 12:04:39.049301  165698 logs.go:276] 0 containers: []
	W0617 12:04:39.049312  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:39.049320  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:39.049373  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:39.089392  165698 cri.go:89] found id: ""
	I0617 12:04:39.089425  165698 logs.go:276] 0 containers: []
	W0617 12:04:39.089434  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:39.089444  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:39.089459  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:39.166585  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:39.166607  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:39.166619  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:39.241910  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:39.241950  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:39.287751  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:39.287782  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:39.342226  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:39.342259  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:41.857327  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:41.871379  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:41.871446  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:41.907435  165698 cri.go:89] found id: ""
	I0617 12:04:41.907472  165698 logs.go:276] 0 containers: []
	W0617 12:04:41.907483  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:41.907492  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:41.907542  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:41.941684  165698 cri.go:89] found id: ""
	I0617 12:04:41.941725  165698 logs.go:276] 0 containers: []
	W0617 12:04:41.941737  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:41.941745  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:41.941819  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:41.977359  165698 cri.go:89] found id: ""
	I0617 12:04:41.977395  165698 logs.go:276] 0 containers: []
	W0617 12:04:41.977407  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:41.977415  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:41.977478  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:42.015689  165698 cri.go:89] found id: ""
	I0617 12:04:42.015723  165698 logs.go:276] 0 containers: []
	W0617 12:04:42.015734  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:42.015742  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:42.015803  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:42.050600  165698 cri.go:89] found id: ""
	I0617 12:04:42.050626  165698 logs.go:276] 0 containers: []
	W0617 12:04:42.050637  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:42.050645  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:42.050707  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:42.088174  165698 cri.go:89] found id: ""
	I0617 12:04:42.088201  165698 logs.go:276] 0 containers: []
	W0617 12:04:42.088212  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:42.088221  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:42.088290  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:42.127335  165698 cri.go:89] found id: ""
	I0617 12:04:42.127364  165698 logs.go:276] 0 containers: []
	W0617 12:04:42.127375  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:42.127384  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:42.127443  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:42.163435  165698 cri.go:89] found id: ""
	I0617 12:04:42.163481  165698 logs.go:276] 0 containers: []
	W0617 12:04:42.163492  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:42.163505  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:42.163527  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:42.233233  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:42.233262  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:42.233280  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:42.311695  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:42.311741  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:42.378134  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:42.378163  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:42.439614  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:42.439647  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:40.726373  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:43.225144  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:40.336291  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:42.831220  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:44.835811  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:47.335772  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:44.953738  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:44.967822  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:44.967884  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:45.004583  165698 cri.go:89] found id: ""
	I0617 12:04:45.004687  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.004732  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:45.004741  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:45.004797  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:45.038912  165698 cri.go:89] found id: ""
	I0617 12:04:45.038939  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.038949  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:45.038957  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:45.039026  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:45.073594  165698 cri.go:89] found id: ""
	I0617 12:04:45.073620  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.073628  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:45.073634  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:45.073684  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:45.108225  165698 cri.go:89] found id: ""
	I0617 12:04:45.108253  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.108261  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:45.108267  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:45.108317  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:45.139522  165698 cri.go:89] found id: ""
	I0617 12:04:45.139545  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.139553  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:45.139559  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:45.139609  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:45.173705  165698 cri.go:89] found id: ""
	I0617 12:04:45.173735  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.173745  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:45.173752  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:45.173813  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:45.206448  165698 cri.go:89] found id: ""
	I0617 12:04:45.206477  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.206486  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:45.206493  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:45.206551  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:45.242925  165698 cri.go:89] found id: ""
	I0617 12:04:45.242952  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.242962  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:45.242981  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:45.242998  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:45.294669  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:45.294700  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:45.307642  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:45.307670  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:45.381764  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:45.381788  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:45.381805  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:45.469022  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:45.469056  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:45.724236  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:48.225656  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:45.332888  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:47.832326  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:49.337260  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:51.338718  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:48.014169  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:48.029895  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:48.029984  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:48.086421  165698 cri.go:89] found id: ""
	I0617 12:04:48.086456  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.086468  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:48.086477  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:48.086554  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:48.135673  165698 cri.go:89] found id: ""
	I0617 12:04:48.135705  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.135713  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:48.135733  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:48.135808  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:48.184330  165698 cri.go:89] found id: ""
	I0617 12:04:48.184353  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.184362  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:48.184368  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:48.184418  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:48.221064  165698 cri.go:89] found id: ""
	I0617 12:04:48.221095  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.221103  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:48.221112  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:48.221175  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:48.264464  165698 cri.go:89] found id: ""
	I0617 12:04:48.264495  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.264502  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:48.264508  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:48.264561  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:48.302144  165698 cri.go:89] found id: ""
	I0617 12:04:48.302180  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.302191  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:48.302199  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:48.302263  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:48.345431  165698 cri.go:89] found id: ""
	I0617 12:04:48.345458  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.345465  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:48.345472  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:48.345539  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:48.383390  165698 cri.go:89] found id: ""
	I0617 12:04:48.383423  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.383434  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:48.383447  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:48.383478  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:48.422328  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:48.422356  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:48.473698  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:48.473735  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:48.488399  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:48.488429  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:48.566851  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:48.566871  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:48.566884  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:51.149626  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:51.162855  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:51.162926  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:51.199056  165698 cri.go:89] found id: ""
	I0617 12:04:51.199091  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.199102  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:51.199109  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:51.199172  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:51.238773  165698 cri.go:89] found id: ""
	I0617 12:04:51.238810  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.238821  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:51.238827  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:51.238883  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:51.279049  165698 cri.go:89] found id: ""
	I0617 12:04:51.279079  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.279092  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:51.279100  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:51.279166  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:51.324923  165698 cri.go:89] found id: ""
	I0617 12:04:51.324957  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.324969  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:51.324976  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:51.325028  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:51.363019  165698 cri.go:89] found id: ""
	I0617 12:04:51.363055  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.363068  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:51.363077  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:51.363142  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:51.399620  165698 cri.go:89] found id: ""
	I0617 12:04:51.399652  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.399661  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:51.399675  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:51.399758  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:51.434789  165698 cri.go:89] found id: ""
	I0617 12:04:51.434824  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.434836  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:51.434844  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:51.434910  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:51.470113  165698 cri.go:89] found id: ""
	I0617 12:04:51.470140  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.470149  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:51.470160  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:51.470176  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:51.526138  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:51.526173  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:51.539451  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:51.539491  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:51.613418  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:51.613437  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:51.613450  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:51.691971  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:51.692010  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:50.724405  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:52.725426  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:50.332363  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:52.332932  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:53.834955  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:56.334584  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:54.234514  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:54.249636  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:54.249724  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:54.283252  165698 cri.go:89] found id: ""
	I0617 12:04:54.283287  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.283300  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:54.283307  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:54.283367  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:54.319153  165698 cri.go:89] found id: ""
	I0617 12:04:54.319207  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.319218  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:54.319226  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:54.319290  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:54.361450  165698 cri.go:89] found id: ""
	I0617 12:04:54.361480  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.361491  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:54.361498  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:54.361562  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:54.397806  165698 cri.go:89] found id: ""
	I0617 12:04:54.397834  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.397843  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:54.397849  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:54.397899  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:54.447119  165698 cri.go:89] found id: ""
	I0617 12:04:54.447147  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.447155  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:54.447161  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:54.447211  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:54.489717  165698 cri.go:89] found id: ""
	I0617 12:04:54.489751  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.489760  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:54.489766  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:54.489830  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:54.532840  165698 cri.go:89] found id: ""
	I0617 12:04:54.532943  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.532975  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:54.532989  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:54.533100  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:54.568227  165698 cri.go:89] found id: ""
	I0617 12:04:54.568369  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.568391  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:54.568403  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:54.568420  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:54.583140  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:54.583174  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:54.661258  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:54.661281  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:54.661296  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:54.750472  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:54.750511  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:54.797438  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:54.797467  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:57.349800  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:57.364820  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:57.364879  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:57.405065  165698 cri.go:89] found id: ""
	I0617 12:04:57.405093  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.405101  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:57.405106  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:57.405153  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:57.445707  165698 cri.go:89] found id: ""
	I0617 12:04:57.445741  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.445752  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:57.445760  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:57.445829  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:57.486911  165698 cri.go:89] found id: ""
	I0617 12:04:57.486940  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.486948  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:57.486955  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:57.487014  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:57.521218  165698 cri.go:89] found id: ""
	I0617 12:04:57.521254  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.521266  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:57.521274  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:57.521342  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:57.555762  165698 cri.go:89] found id: ""
	I0617 12:04:57.555794  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.555803  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:57.555808  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:57.555863  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:57.591914  165698 cri.go:89] found id: ""
	I0617 12:04:57.591945  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.591956  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:57.591971  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:57.592037  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:57.626435  165698 cri.go:89] found id: ""
	I0617 12:04:57.626463  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.626471  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:57.626477  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:57.626527  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:57.665088  165698 cri.go:89] found id: ""
	I0617 12:04:57.665118  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.665126  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:57.665137  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:57.665152  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:57.716284  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:57.716316  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:57.730179  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:57.730204  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:57.808904  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:57.808933  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:57.808954  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:57.894499  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:57.894530  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:55.224507  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:57.224583  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:54.831112  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:56.832477  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:58.334640  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:00.335137  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:00.435957  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:00.450812  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:00.450890  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:00.491404  165698 cri.go:89] found id: ""
	I0617 12:05:00.491432  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.491440  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:00.491446  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:00.491523  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:00.526711  165698 cri.go:89] found id: ""
	I0617 12:05:00.526739  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.526747  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:00.526753  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:00.526817  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:00.562202  165698 cri.go:89] found id: ""
	I0617 12:05:00.562236  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.562246  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:00.562255  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:00.562323  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:00.602754  165698 cri.go:89] found id: ""
	I0617 12:05:00.602790  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.602802  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:00.602811  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:00.602877  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:00.645666  165698 cri.go:89] found id: ""
	I0617 12:05:00.645703  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.645715  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:00.645723  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:00.645788  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:00.684649  165698 cri.go:89] found id: ""
	I0617 12:05:00.684685  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.684694  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:00.684701  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:00.684784  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:00.727139  165698 cri.go:89] found id: ""
	I0617 12:05:00.727160  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.727167  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:00.727173  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:00.727238  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:00.764401  165698 cri.go:89] found id: ""
	I0617 12:05:00.764433  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.764444  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:00.764455  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:00.764474  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:00.777301  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:00.777322  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:00.849752  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:00.849778  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:00.849795  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:00.930220  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:00.930266  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:00.970076  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:00.970116  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:59.226429  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:01.725079  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:59.337081  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:01.834932  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:02.834132  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:05.334066  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:07.335366  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:03.526070  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:03.541150  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:03.541229  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:03.584416  165698 cri.go:89] found id: ""
	I0617 12:05:03.584451  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.584463  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:03.584472  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:03.584535  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:03.623509  165698 cri.go:89] found id: ""
	I0617 12:05:03.623543  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.623552  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:03.623558  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:03.623611  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:03.661729  165698 cri.go:89] found id: ""
	I0617 12:05:03.661765  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.661778  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:03.661787  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:03.661852  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:03.702952  165698 cri.go:89] found id: ""
	I0617 12:05:03.702985  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.703008  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:03.703033  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:03.703100  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:03.746534  165698 cri.go:89] found id: ""
	I0617 12:05:03.746570  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.746578  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:03.746584  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:03.746648  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:03.784472  165698 cri.go:89] found id: ""
	I0617 12:05:03.784506  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.784515  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:03.784522  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:03.784580  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:03.821033  165698 cri.go:89] found id: ""
	I0617 12:05:03.821066  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.821077  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:03.821085  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:03.821146  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:03.859438  165698 cri.go:89] found id: ""
	I0617 12:05:03.859474  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.859487  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:03.859497  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:03.859513  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:03.940723  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:03.940770  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:03.986267  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:03.986303  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:04.037999  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:04.038039  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:04.051382  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:04.051415  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:04.121593  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:06.622475  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:06.636761  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:06.636842  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:06.673954  165698 cri.go:89] found id: ""
	I0617 12:05:06.673995  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.674007  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:06.674015  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:06.674084  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:06.708006  165698 cri.go:89] found id: ""
	I0617 12:05:06.708037  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.708047  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:06.708055  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:06.708124  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:06.743819  165698 cri.go:89] found id: ""
	I0617 12:05:06.743852  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.743864  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:06.743872  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:06.743934  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:06.781429  165698 cri.go:89] found id: ""
	I0617 12:05:06.781457  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.781465  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:06.781473  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:06.781540  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:06.818404  165698 cri.go:89] found id: ""
	I0617 12:05:06.818435  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.818447  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:06.818456  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:06.818516  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:06.857880  165698 cri.go:89] found id: ""
	I0617 12:05:06.857913  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.857924  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:06.857933  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:06.857993  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:06.893010  165698 cri.go:89] found id: ""
	I0617 12:05:06.893050  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.893059  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:06.893065  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:06.893118  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:06.926302  165698 cri.go:89] found id: ""
	I0617 12:05:06.926336  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.926347  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:06.926360  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:06.926378  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:06.997173  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:06.997197  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:06.997215  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:07.082843  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:07.082885  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:07.122542  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:07.122572  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:07.177033  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:07.177070  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:03.725338  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:06.225466  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:04.331639  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:06.331988  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:08.332139  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:09.835119  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:12.333346  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:09.693217  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:09.707043  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:09.707110  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:09.742892  165698 cri.go:89] found id: ""
	I0617 12:05:09.742918  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.742927  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:09.742933  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:09.742982  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:09.776938  165698 cri.go:89] found id: ""
	I0617 12:05:09.776969  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.776976  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:09.776982  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:09.777030  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:09.813613  165698 cri.go:89] found id: ""
	I0617 12:05:09.813643  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.813651  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:09.813658  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:09.813705  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:09.855483  165698 cri.go:89] found id: ""
	I0617 12:05:09.855516  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.855525  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:09.855532  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:09.855596  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:09.890808  165698 cri.go:89] found id: ""
	I0617 12:05:09.890844  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.890854  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:09.890862  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:09.890930  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:09.927656  165698 cri.go:89] found id: ""
	I0617 12:05:09.927684  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.927693  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:09.927703  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:09.927758  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:09.968130  165698 cri.go:89] found id: ""
	I0617 12:05:09.968163  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.968174  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:09.968183  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:09.968239  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:10.010197  165698 cri.go:89] found id: ""
	I0617 12:05:10.010220  165698 logs.go:276] 0 containers: []
	W0617 12:05:10.010228  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:10.010239  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:10.010252  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:10.063999  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:10.064040  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:10.078837  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:10.078873  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:10.155932  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:10.155954  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:10.155967  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:10.232859  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:10.232901  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:12.772943  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:12.787936  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:12.788024  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:12.828457  165698 cri.go:89] found id: ""
	I0617 12:05:12.828483  165698 logs.go:276] 0 containers: []
	W0617 12:05:12.828491  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:12.828498  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:12.828562  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:12.862265  165698 cri.go:89] found id: ""
	I0617 12:05:12.862296  165698 logs.go:276] 0 containers: []
	W0617 12:05:12.862306  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:12.862313  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:12.862372  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:12.899673  165698 cri.go:89] found id: ""
	I0617 12:05:12.899698  165698 logs.go:276] 0 containers: []
	W0617 12:05:12.899706  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:12.899712  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:12.899759  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:12.943132  165698 cri.go:89] found id: ""
	I0617 12:05:12.943161  165698 logs.go:276] 0 containers: []
	W0617 12:05:12.943169  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:12.943175  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:12.943227  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:08.724369  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:10.725166  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:13.224799  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:10.333769  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:12.832493  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:14.336437  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:16.835155  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:12.985651  165698 cri.go:89] found id: ""
	I0617 12:05:12.985677  165698 logs.go:276] 0 containers: []
	W0617 12:05:12.985685  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:12.985691  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:12.985747  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:13.021484  165698 cri.go:89] found id: ""
	I0617 12:05:13.021508  165698 logs.go:276] 0 containers: []
	W0617 12:05:13.021516  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:13.021522  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:13.021569  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:13.060658  165698 cri.go:89] found id: ""
	I0617 12:05:13.060689  165698 logs.go:276] 0 containers: []
	W0617 12:05:13.060705  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:13.060713  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:13.060782  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:13.106008  165698 cri.go:89] found id: ""
	I0617 12:05:13.106041  165698 logs.go:276] 0 containers: []
	W0617 12:05:13.106052  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:13.106066  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:13.106083  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:13.160199  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:13.160231  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:13.173767  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:13.173804  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:13.245358  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:13.245383  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:13.245399  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:13.323046  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:13.323085  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:15.872024  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:15.885550  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:15.885624  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:15.920303  165698 cri.go:89] found id: ""
	I0617 12:05:15.920332  165698 logs.go:276] 0 containers: []
	W0617 12:05:15.920344  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:15.920358  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:15.920423  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:15.955132  165698 cri.go:89] found id: ""
	I0617 12:05:15.955158  165698 logs.go:276] 0 containers: []
	W0617 12:05:15.955166  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:15.955172  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:15.955220  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:15.992995  165698 cri.go:89] found id: ""
	I0617 12:05:15.993034  165698 logs.go:276] 0 containers: []
	W0617 12:05:15.993053  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:15.993060  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:15.993127  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:16.032603  165698 cri.go:89] found id: ""
	I0617 12:05:16.032638  165698 logs.go:276] 0 containers: []
	W0617 12:05:16.032650  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:16.032658  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:16.032716  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:16.071770  165698 cri.go:89] found id: ""
	I0617 12:05:16.071804  165698 logs.go:276] 0 containers: []
	W0617 12:05:16.071816  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:16.071824  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:16.071899  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:16.106172  165698 cri.go:89] found id: ""
	I0617 12:05:16.106206  165698 logs.go:276] 0 containers: []
	W0617 12:05:16.106218  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:16.106226  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:16.106292  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:16.139406  165698 cri.go:89] found id: ""
	I0617 12:05:16.139436  165698 logs.go:276] 0 containers: []
	W0617 12:05:16.139443  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:16.139449  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:16.139517  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:16.174513  165698 cri.go:89] found id: ""
	I0617 12:05:16.174554  165698 logs.go:276] 0 containers: []
	W0617 12:05:16.174565  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:16.174580  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:16.174597  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:16.240912  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:16.240940  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:16.240958  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:16.323853  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:16.323891  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:16.372632  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:16.372659  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:16.428367  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:16.428406  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:15.224918  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:17.725226  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:15.332512  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:17.833710  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:19.334324  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:21.334654  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:18.943551  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:18.957394  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:18.957490  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:18.991967  165698 cri.go:89] found id: ""
	I0617 12:05:18.992006  165698 logs.go:276] 0 containers: []
	W0617 12:05:18.992017  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:18.992027  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:18.992092  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:19.025732  165698 cri.go:89] found id: ""
	I0617 12:05:19.025763  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.025775  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:19.025783  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:19.025856  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:19.061786  165698 cri.go:89] found id: ""
	I0617 12:05:19.061820  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.061830  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:19.061838  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:19.061906  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:19.098819  165698 cri.go:89] found id: ""
	I0617 12:05:19.098856  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.098868  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:19.098876  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:19.098947  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:19.139840  165698 cri.go:89] found id: ""
	I0617 12:05:19.139877  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.139886  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:19.139894  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:19.139965  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:19.176546  165698 cri.go:89] found id: ""
	I0617 12:05:19.176578  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.176590  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:19.176598  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:19.176671  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:19.209948  165698 cri.go:89] found id: ""
	I0617 12:05:19.209985  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.209997  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:19.210005  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:19.210087  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:19.246751  165698 cri.go:89] found id: ""
	I0617 12:05:19.246788  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.246799  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:19.246812  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:19.246830  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:19.322272  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:19.322316  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:19.370147  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:19.370187  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:19.422699  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:19.422749  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:19.437255  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:19.437284  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:19.510077  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:22.010840  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:22.024791  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:22.024879  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:22.060618  165698 cri.go:89] found id: ""
	I0617 12:05:22.060658  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.060667  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:22.060674  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:22.060742  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:22.100228  165698 cri.go:89] found id: ""
	I0617 12:05:22.100259  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.100268  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:22.100274  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:22.100343  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:22.135629  165698 cri.go:89] found id: ""
	I0617 12:05:22.135657  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.135665  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:22.135671  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:22.135730  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:22.186027  165698 cri.go:89] found id: ""
	I0617 12:05:22.186064  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.186076  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:22.186085  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:22.186148  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:22.220991  165698 cri.go:89] found id: ""
	I0617 12:05:22.221019  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.221029  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:22.221037  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:22.221104  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:22.266306  165698 cri.go:89] found id: ""
	I0617 12:05:22.266337  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.266348  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:22.266357  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:22.266414  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:22.303070  165698 cri.go:89] found id: ""
	I0617 12:05:22.303104  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.303116  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:22.303124  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:22.303190  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:22.339792  165698 cri.go:89] found id: ""
	I0617 12:05:22.339819  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.339829  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:22.339840  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:22.339856  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:22.422360  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:22.422397  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:22.465744  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:22.465777  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:22.516199  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:22.516232  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:22.529961  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:22.529983  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:22.601519  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:20.225369  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:22.226699  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:19.834562  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:21.837426  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:23.336540  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:25.835706  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:25.102655  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:25.116893  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:25.116959  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:25.156370  165698 cri.go:89] found id: ""
	I0617 12:05:25.156396  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.156404  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:25.156410  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:25.156468  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:25.193123  165698 cri.go:89] found id: ""
	I0617 12:05:25.193199  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.193221  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:25.193234  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:25.193301  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:25.232182  165698 cri.go:89] found id: ""
	I0617 12:05:25.232209  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.232219  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:25.232227  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:25.232285  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:25.266599  165698 cri.go:89] found id: ""
	I0617 12:05:25.266630  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.266639  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:25.266645  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:25.266701  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:25.308732  165698 cri.go:89] found id: ""
	I0617 12:05:25.308762  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.308770  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:25.308776  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:25.308836  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:25.348817  165698 cri.go:89] found id: ""
	I0617 12:05:25.348858  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.348871  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:25.348879  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:25.348946  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:25.389343  165698 cri.go:89] found id: ""
	I0617 12:05:25.389375  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.389387  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:25.389393  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:25.389452  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:25.427014  165698 cri.go:89] found id: ""
	I0617 12:05:25.427043  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.427055  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:25.427067  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:25.427083  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:25.441361  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:25.441390  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:25.518967  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:25.518993  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:25.519006  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:25.601411  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:25.601450  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:25.651636  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:25.651674  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:24.725515  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:27.223821  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:24.333548  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:26.832428  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:27.836661  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:30.334313  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:32.336489  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:28.202148  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:28.215710  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:28.215792  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:28.254961  165698 cri.go:89] found id: ""
	I0617 12:05:28.254986  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.255000  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:28.255007  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:28.255061  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:28.292574  165698 cri.go:89] found id: ""
	I0617 12:05:28.292606  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.292614  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:28.292620  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:28.292683  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:28.329036  165698 cri.go:89] found id: ""
	I0617 12:05:28.329067  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.329077  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:28.329085  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:28.329152  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:28.366171  165698 cri.go:89] found id: ""
	I0617 12:05:28.366197  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.366206  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:28.366212  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:28.366273  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:28.401380  165698 cri.go:89] found id: ""
	I0617 12:05:28.401407  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.401417  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:28.401424  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:28.401486  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:28.438767  165698 cri.go:89] found id: ""
	I0617 12:05:28.438798  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.438810  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:28.438817  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:28.438876  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:28.472706  165698 cri.go:89] found id: ""
	I0617 12:05:28.472761  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.472772  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:28.472779  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:28.472829  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:28.509525  165698 cri.go:89] found id: ""
	I0617 12:05:28.509548  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.509556  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:28.509565  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:28.509577  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:28.606008  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:28.606059  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:28.665846  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:28.665874  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:28.721599  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:28.721627  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:28.735040  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:28.735062  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:28.811954  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:31.312554  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:31.326825  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:31.326905  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:31.364862  165698 cri.go:89] found id: ""
	I0617 12:05:31.364891  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.364902  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:31.364910  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:31.364976  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:31.396979  165698 cri.go:89] found id: ""
	I0617 12:05:31.397013  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.397027  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:31.397035  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:31.397098  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:31.430617  165698 cri.go:89] found id: ""
	I0617 12:05:31.430647  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.430657  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:31.430665  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:31.430728  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:31.462308  165698 cri.go:89] found id: ""
	I0617 12:05:31.462338  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.462345  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:31.462350  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:31.462399  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:31.495406  165698 cri.go:89] found id: ""
	I0617 12:05:31.495435  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.495444  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:31.495452  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:31.495553  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:31.538702  165698 cri.go:89] found id: ""
	I0617 12:05:31.538729  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.538739  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:31.538750  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:31.538813  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:31.572637  165698 cri.go:89] found id: ""
	I0617 12:05:31.572666  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.572677  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:31.572685  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:31.572745  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:31.609307  165698 cri.go:89] found id: ""
	I0617 12:05:31.609341  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.609352  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:31.609364  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:31.609380  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:31.622445  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:31.622471  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:31.699170  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:31.699191  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:31.699209  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:31.775115  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:31.775156  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:31.815836  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:31.815866  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:29.225028  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:31.727009  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:29.333400  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:31.834599  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:34.836093  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:37.335140  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:34.372097  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:34.393542  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:34.393607  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:34.437265  165698 cri.go:89] found id: ""
	I0617 12:05:34.437294  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.437305  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:34.437314  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:34.437382  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:34.474566  165698 cri.go:89] found id: ""
	I0617 12:05:34.474596  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.474609  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:34.474617  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:34.474680  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:34.510943  165698 cri.go:89] found id: ""
	I0617 12:05:34.510975  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.510986  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:34.511000  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:34.511072  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:34.548124  165698 cri.go:89] found id: ""
	I0617 12:05:34.548160  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.548172  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:34.548179  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:34.548241  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:34.582428  165698 cri.go:89] found id: ""
	I0617 12:05:34.582453  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.582460  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:34.582467  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:34.582514  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:34.616895  165698 cri.go:89] found id: ""
	I0617 12:05:34.616937  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.616950  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:34.616957  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:34.617019  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:34.656116  165698 cri.go:89] found id: ""
	I0617 12:05:34.656144  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.656155  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:34.656162  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:34.656226  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:34.695649  165698 cri.go:89] found id: ""
	I0617 12:05:34.695680  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.695692  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:34.695705  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:34.695722  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:34.747910  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:34.747956  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:34.762177  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:34.762206  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:34.840395  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:34.840423  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:34.840440  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:34.922962  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:34.923002  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:37.464659  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:37.480351  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:37.480416  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:37.521249  165698 cri.go:89] found id: ""
	I0617 12:05:37.521279  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.521286  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:37.521293  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:37.521340  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:37.561053  165698 cri.go:89] found id: ""
	I0617 12:05:37.561079  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.561087  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:37.561094  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:37.561151  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:37.599019  165698 cri.go:89] found id: ""
	I0617 12:05:37.599057  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.599066  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:37.599074  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:37.599134  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:37.638276  165698 cri.go:89] found id: ""
	I0617 12:05:37.638304  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.638315  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:37.638323  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:37.638389  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:37.677819  165698 cri.go:89] found id: ""
	I0617 12:05:37.677845  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.677853  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:37.677859  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:37.677910  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:37.715850  165698 cri.go:89] found id: ""
	I0617 12:05:37.715877  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.715888  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:37.715897  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:37.715962  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:37.755533  165698 cri.go:89] found id: ""
	I0617 12:05:37.755563  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.755570  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:37.755576  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:37.755636  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:37.791826  165698 cri.go:89] found id: ""
	I0617 12:05:37.791850  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.791859  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:37.791872  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:37.791888  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:37.844824  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:37.844853  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:37.860933  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:37.860963  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:37.926497  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:37.926519  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:37.926535  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:34.224078  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:36.224464  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:38.224753  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:34.333888  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:36.832374  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:39.336299  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:41.834494  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:38.003814  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:38.003853  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:40.546386  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:40.560818  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:40.560896  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:40.596737  165698 cri.go:89] found id: ""
	I0617 12:05:40.596777  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.596784  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:40.596791  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:40.596844  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:40.631518  165698 cri.go:89] found id: ""
	I0617 12:05:40.631556  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.631570  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:40.631611  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:40.631683  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:40.674962  165698 cri.go:89] found id: ""
	I0617 12:05:40.674997  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.675006  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:40.675012  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:40.675064  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:40.716181  165698 cri.go:89] found id: ""
	I0617 12:05:40.716210  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.716218  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:40.716226  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:40.716286  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:40.756312  165698 cri.go:89] found id: ""
	I0617 12:05:40.756339  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.756348  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:40.756353  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:40.756406  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:40.791678  165698 cri.go:89] found id: ""
	I0617 12:05:40.791733  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.791750  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:40.791759  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:40.791830  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:40.830717  165698 cri.go:89] found id: ""
	I0617 12:05:40.830754  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.830766  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:40.830774  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:40.830854  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:40.868139  165698 cri.go:89] found id: ""
	I0617 12:05:40.868169  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.868178  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:40.868198  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:40.868224  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:40.920319  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:40.920353  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:40.934948  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:40.934974  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:41.005349  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:41.005371  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:41.005388  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:41.086783  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:41.086842  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:40.724767  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:43.223836  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:38.834031  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:41.331190  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:43.332593  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:44.334114  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:46.334595  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:43.625515  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:43.638942  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:43.639019  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:43.673703  165698 cri.go:89] found id: ""
	I0617 12:05:43.673735  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.673747  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:43.673756  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:43.673822  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:43.709417  165698 cri.go:89] found id: ""
	I0617 12:05:43.709449  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.709460  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:43.709468  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:43.709529  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:43.742335  165698 cri.go:89] found id: ""
	I0617 12:05:43.742368  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.742379  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:43.742389  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:43.742449  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:43.779112  165698 cri.go:89] found id: ""
	I0617 12:05:43.779141  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.779150  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:43.779155  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:43.779219  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:43.813362  165698 cri.go:89] found id: ""
	I0617 12:05:43.813397  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.813406  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:43.813414  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:43.813464  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:43.850456  165698 cri.go:89] found id: ""
	I0617 12:05:43.850484  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.850493  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:43.850499  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:43.850547  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:43.884527  165698 cri.go:89] found id: ""
	I0617 12:05:43.884555  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.884564  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:43.884571  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:43.884632  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:43.921440  165698 cri.go:89] found id: ""
	I0617 12:05:43.921476  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.921488  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:43.921501  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:43.921517  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:43.973687  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:43.973727  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:43.988114  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:43.988143  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:44.055084  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:44.055119  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:44.055138  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:44.134628  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:44.134665  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:46.677852  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:46.690688  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:46.690747  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:46.724055  165698 cri.go:89] found id: ""
	I0617 12:05:46.724090  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.724101  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:46.724110  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:46.724171  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:46.759119  165698 cri.go:89] found id: ""
	I0617 12:05:46.759150  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.759161  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:46.759169  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:46.759227  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:46.796392  165698 cri.go:89] found id: ""
	I0617 12:05:46.796424  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.796435  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:46.796442  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:46.796504  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:46.831727  165698 cri.go:89] found id: ""
	I0617 12:05:46.831761  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.831770  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:46.831777  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:46.831845  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:46.866662  165698 cri.go:89] found id: ""
	I0617 12:05:46.866693  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.866702  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:46.866708  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:46.866757  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:46.905045  165698 cri.go:89] found id: ""
	I0617 12:05:46.905070  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.905078  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:46.905084  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:46.905130  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:46.940879  165698 cri.go:89] found id: ""
	I0617 12:05:46.940907  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.940915  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:46.940926  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:46.940974  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:46.977247  165698 cri.go:89] found id: ""
	I0617 12:05:46.977290  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.977301  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:46.977314  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:46.977331  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:47.046094  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:47.046116  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:47.046133  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:47.122994  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:47.123038  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:47.166273  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:47.166313  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:47.221392  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:47.221429  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:45.228807  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:47.723584  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:45.834805  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:48.333121  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:48.335758  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:50.833989  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:49.739113  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:49.752880  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:49.753004  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:49.791177  165698 cri.go:89] found id: ""
	I0617 12:05:49.791218  165698 logs.go:276] 0 containers: []
	W0617 12:05:49.791242  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:49.791251  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:49.791322  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:49.831602  165698 cri.go:89] found id: ""
	I0617 12:05:49.831633  165698 logs.go:276] 0 containers: []
	W0617 12:05:49.831644  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:49.831652  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:49.831719  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:49.870962  165698 cri.go:89] found id: ""
	I0617 12:05:49.870998  165698 logs.go:276] 0 containers: []
	W0617 12:05:49.871011  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:49.871019  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:49.871092  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:49.917197  165698 cri.go:89] found id: ""
	I0617 12:05:49.917232  165698 logs.go:276] 0 containers: []
	W0617 12:05:49.917243  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:49.917252  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:49.917320  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:49.952997  165698 cri.go:89] found id: ""
	I0617 12:05:49.953034  165698 logs.go:276] 0 containers: []
	W0617 12:05:49.953047  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:49.953056  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:49.953114  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:50.001925  165698 cri.go:89] found id: ""
	I0617 12:05:50.001965  165698 logs.go:276] 0 containers: []
	W0617 12:05:50.001977  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:50.001986  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:50.002059  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:50.043374  165698 cri.go:89] found id: ""
	I0617 12:05:50.043403  165698 logs.go:276] 0 containers: []
	W0617 12:05:50.043412  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:50.043419  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:50.043496  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:50.082974  165698 cri.go:89] found id: ""
	I0617 12:05:50.083009  165698 logs.go:276] 0 containers: []
	W0617 12:05:50.083020  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:50.083029  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:50.083043  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:50.134116  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:50.134159  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:50.148478  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:50.148511  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:50.227254  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:50.227276  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:50.227288  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:50.305920  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:50.305960  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:52.848811  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:52.862612  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:52.862669  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:52.896379  165698 cri.go:89] found id: ""
	I0617 12:05:52.896410  165698 logs.go:276] 0 containers: []
	W0617 12:05:52.896421  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:52.896429  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:52.896488  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:52.933387  165698 cri.go:89] found id: ""
	I0617 12:05:52.933422  165698 logs.go:276] 0 containers: []
	W0617 12:05:52.933432  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:52.933439  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:52.933501  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:52.971055  165698 cri.go:89] found id: ""
	I0617 12:05:52.971091  165698 logs.go:276] 0 containers: []
	W0617 12:05:52.971102  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:52.971110  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:52.971168  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:49.724816  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:52.224660  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:50.334092  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:52.831686  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:52.835473  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:55.334017  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:57.334957  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:53.003815  165698 cri.go:89] found id: ""
	I0617 12:05:53.003846  165698 logs.go:276] 0 containers: []
	W0617 12:05:53.003857  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:53.003864  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:53.003927  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:53.039133  165698 cri.go:89] found id: ""
	I0617 12:05:53.039161  165698 logs.go:276] 0 containers: []
	W0617 12:05:53.039169  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:53.039175  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:53.039229  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:53.077703  165698 cri.go:89] found id: ""
	I0617 12:05:53.077756  165698 logs.go:276] 0 containers: []
	W0617 12:05:53.077773  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:53.077780  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:53.077831  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:53.119187  165698 cri.go:89] found id: ""
	I0617 12:05:53.119216  165698 logs.go:276] 0 containers: []
	W0617 12:05:53.119223  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:53.119230  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:53.119287  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:53.154423  165698 cri.go:89] found id: ""
	I0617 12:05:53.154457  165698 logs.go:276] 0 containers: []
	W0617 12:05:53.154467  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:53.154480  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:53.154496  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:53.202745  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:53.202778  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:53.216510  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:53.216537  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:53.295687  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:53.295712  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:53.295732  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:53.375064  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:53.375095  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:55.915113  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:55.929155  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:55.929239  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:55.964589  165698 cri.go:89] found id: ""
	I0617 12:05:55.964625  165698 logs.go:276] 0 containers: []
	W0617 12:05:55.964634  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:55.964640  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:55.964702  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:56.003659  165698 cri.go:89] found id: ""
	I0617 12:05:56.003691  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.003701  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:56.003709  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:56.003778  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:56.039674  165698 cri.go:89] found id: ""
	I0617 12:05:56.039707  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.039717  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:56.039724  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:56.039786  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:56.077695  165698 cri.go:89] found id: ""
	I0617 12:05:56.077736  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.077748  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:56.077756  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:56.077826  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:56.116397  165698 cri.go:89] found id: ""
	I0617 12:05:56.116430  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.116442  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:56.116451  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:56.116512  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:56.152395  165698 cri.go:89] found id: ""
	I0617 12:05:56.152433  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.152445  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:56.152454  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:56.152513  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:56.189740  165698 cri.go:89] found id: ""
	I0617 12:05:56.189776  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.189788  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:56.189796  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:56.189866  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:56.228017  165698 cri.go:89] found id: ""
	I0617 12:05:56.228047  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.228055  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:56.228063  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:56.228076  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:56.279032  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:56.279079  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:56.294369  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:56.294394  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:56.369507  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:56.369535  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:56.369551  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:56.454797  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:56.454833  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:54.725303  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:56.726247  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:56.726280  165060 pod_ready.go:81] duration metric: took 4m0.008373114s for pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace to be "Ready" ...
	E0617 12:05:56.726291  165060 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0617 12:05:56.726298  165060 pod_ready.go:38] duration metric: took 4m3.608691328s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
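	The pod_ready.go entries above are a readiness poll that gave up after its 4m budget because metrics-server never reported Ready. A rough sketch of that kind of wait loop using client-go (illustrative only, not minikube's actual pod_ready code; the kubeconfig path, namespace, and pod name are taken from this log, the 2s poll interval is an assumption):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls the API server until the pod reports the Ready
	// condition as True, or the timeout expires (the 4m budget seen above).
	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		// In the failing run above this pod never became Ready, so the
		// equivalent call timed out after 4m.
		fmt.Println(waitPodReady(cs, "kube-system", "metrics-server-569cc877fc-dmhfs", 4*time.Minute))
	}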
	I0617 12:05:56.726315  165060 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:05:56.726352  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:56.726411  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:56.784765  165060 cri.go:89] found id: "5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:05:56.784792  165060 cri.go:89] found id: ""
	I0617 12:05:56.784803  165060 logs.go:276] 1 containers: [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3]
	I0617 12:05:56.784865  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:56.791125  165060 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:56.791189  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:56.830691  165060 cri.go:89] found id: "fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:05:56.830715  165060 cri.go:89] found id: ""
	I0617 12:05:56.830725  165060 logs.go:276] 1 containers: [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9]
	I0617 12:05:56.830785  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:56.836214  165060 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:56.836282  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:56.875812  165060 cri.go:89] found id: "c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:05:56.875830  165060 cri.go:89] found id: ""
	I0617 12:05:56.875837  165060 logs.go:276] 1 containers: [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7]
	I0617 12:05:56.875891  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:56.880190  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:56.880247  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:56.925155  165060 cri.go:89] found id: "157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:05:56.925178  165060 cri.go:89] found id: ""
	I0617 12:05:56.925186  165060 logs.go:276] 1 containers: [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d]
	I0617 12:05:56.925231  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:56.930317  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:56.930384  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:56.972479  165060 cri.go:89] found id: "c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:05:56.972503  165060 cri.go:89] found id: ""
	I0617 12:05:56.972512  165060 logs.go:276] 1 containers: [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d]
	I0617 12:05:56.972559  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:56.977635  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:56.977696  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:57.012791  165060 cri.go:89] found id: "2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
	I0617 12:05:57.012816  165060 cri.go:89] found id: ""
	I0617 12:05:57.012826  165060 logs.go:276] 1 containers: [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079]
	I0617 12:05:57.012882  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:57.016856  165060 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:57.016909  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:57.052111  165060 cri.go:89] found id: ""
	I0617 12:05:57.052146  165060 logs.go:276] 0 containers: []
	W0617 12:05:57.052156  165060 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:57.052163  165060 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:05:57.052211  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:05:57.094600  165060 cri.go:89] found id: "02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:05:57.094619  165060 cri.go:89] found id: "7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
	I0617 12:05:57.094622  165060 cri.go:89] found id: ""
	I0617 12:05:57.094630  165060 logs.go:276] 2 containers: [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36]
	I0617 12:05:57.094700  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:57.099250  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:57.104252  165060 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:57.104281  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:57.162000  165060 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:57.162027  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:05:57.285448  165060 logs.go:123] Gathering logs for etcd [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9] ...
	I0617 12:05:57.285490  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:05:57.340781  165060 logs.go:123] Gathering logs for coredns [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7] ...
	I0617 12:05:57.340820  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:05:57.383507  165060 logs.go:123] Gathering logs for kube-scheduler [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d] ...
	I0617 12:05:57.383540  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:05:57.428747  165060 logs.go:123] Gathering logs for kube-proxy [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d] ...
	I0617 12:05:57.428792  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:05:57.468739  165060 logs.go:123] Gathering logs for kube-controller-manager [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079] ...
	I0617 12:05:57.468770  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
	I0617 12:05:57.531317  165060 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:57.531355  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:58.063787  165060 logs.go:123] Gathering logs for container status ...
	I0617 12:05:58.063838  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:58.129384  165060 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:58.129416  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:58.144078  165060 logs.go:123] Gathering logs for kube-apiserver [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3] ...
	I0617 12:05:58.144152  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:05:58.189028  165060 logs.go:123] Gathering logs for storage-provisioner [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92] ...
	I0617 12:05:58.189068  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:05:58.227144  165060 logs.go:123] Gathering logs for storage-provisioner [7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36] ...
	I0617 12:05:58.227178  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
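	Once containers are actually found (the 165060 run above), log gathering switches from listing to tailing each container with `sudo /usr/bin/crictl logs --tail 400 <id>`. A small sketch of that step (illustrative only; the container ID below is the kube-apiserver ID reported earlier in this log, and the command is run locally rather than over SSH):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// tailContainerLog mirrors the `sudo /usr/bin/crictl logs --tail 400 <id>`
	// calls above: it returns the last n lines of a container's log.
	func tailContainerLog(id string, n int) (string, error) {
		out, err := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
		return string(out), err
	}

	func main() {
		id := "5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
		out, err := tailContainerLog(id, 400)
		if err != nil {
			fmt.Println("crictl logs failed:", err)
		}
		fmt.Print(out)
	}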
	I0617 12:05:54.838580  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:57.333884  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:59.836198  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:01.837155  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:58.995221  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:59.008481  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:59.008555  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:59.043854  165698 cri.go:89] found id: ""
	I0617 12:05:59.043887  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.043914  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:59.043935  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:59.044003  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:59.081488  165698 cri.go:89] found id: ""
	I0617 12:05:59.081522  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.081530  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:59.081537  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:59.081596  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:59.118193  165698 cri.go:89] found id: ""
	I0617 12:05:59.118222  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.118232  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:59.118240  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:59.118306  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:59.150286  165698 cri.go:89] found id: ""
	I0617 12:05:59.150315  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.150327  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:59.150335  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:59.150381  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:59.191426  165698 cri.go:89] found id: ""
	I0617 12:05:59.191450  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.191485  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:59.191493  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:59.191547  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:59.224933  165698 cri.go:89] found id: ""
	I0617 12:05:59.224965  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.224974  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:59.224998  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:59.225061  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:59.255929  165698 cri.go:89] found id: ""
	I0617 12:05:59.255956  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.255965  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:59.255971  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:59.256025  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:59.293072  165698 cri.go:89] found id: ""
	I0617 12:05:59.293097  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.293104  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:59.293114  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:59.293126  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:59.354240  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:59.354267  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:59.367715  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:59.367744  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:59.446352  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:59.446381  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:59.446396  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:59.528701  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:59.528738  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:02.071616  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:06:02.088050  165698 kubeadm.go:591] duration metric: took 4m3.493743262s to restartPrimaryControlPlane
	W0617 12:06:02.088159  165698 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0617 12:06:02.088194  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0617 12:06:02.552133  165698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:06:02.570136  165698 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:06:02.582299  165698 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:06:02.594775  165698 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:06:02.594809  165698 kubeadm.go:156] found existing configuration files:
	
	I0617 12:06:02.594867  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:06:02.605875  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:06:02.605954  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:06:02.617780  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:06:02.628284  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:06:02.628359  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:06:02.639128  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:06:02.650079  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:06:02.650144  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:06:02.660879  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:06:02.671170  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:06:02.671249  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
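	The grep/rm pairs above are a stale-config check: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed before `kubeadm init` regenerates it. A rough local sketch of that pattern (illustrative only; minikube runs these commands on the guest via ssh_runner rather than locally):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// cleanStaleConfigs removes any kubeconfig that does not mention the
	// expected API endpoint, mirroring the grep-then-rm sequence in the log.
	func cleanStaleConfigs(endpoint string, files []string) {
		for _, f := range files {
			// grep exits non-zero when the pattern (or the file itself) is
			// missing, which is exactly the "status 2" case logged above.
			if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
				fmt.Printf("%q not found in %s - removing\n", endpoint, f)
				_ = exec.Command("sudo", "rm", "-f", f).Run()
			}
		}
	}

	func main() {
		cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}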
	I0617 12:06:02.682071  165698 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0617 12:06:02.753750  165698 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0617 12:06:02.753913  165698 kubeadm.go:309] [preflight] Running pre-flight checks
	I0617 12:06:02.897384  165698 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0617 12:06:02.897530  165698 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0617 12:06:02.897685  165698 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0617 12:06:03.079116  165698 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0617 12:06:00.764533  165060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:06:00.781564  165060 api_server.go:72] duration metric: took 4m14.875617542s to wait for apiserver process to appear ...
	I0617 12:06:00.781593  165060 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:06:00.781642  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:06:00.781706  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:06:00.817980  165060 cri.go:89] found id: "5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:06:00.818013  165060 cri.go:89] found id: ""
	I0617 12:06:00.818024  165060 logs.go:276] 1 containers: [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3]
	I0617 12:06:00.818080  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:00.822664  165060 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:06:00.822759  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:06:00.861518  165060 cri.go:89] found id: "fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:06:00.861545  165060 cri.go:89] found id: ""
	I0617 12:06:00.861556  165060 logs.go:276] 1 containers: [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9]
	I0617 12:06:00.861614  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:00.865885  165060 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:06:00.865973  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:06:00.900844  165060 cri.go:89] found id: "c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:06:00.900864  165060 cri.go:89] found id: ""
	I0617 12:06:00.900875  165060 logs.go:276] 1 containers: [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7]
	I0617 12:06:00.900930  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:00.905253  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:06:00.905317  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:06:00.938998  165060 cri.go:89] found id: "157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:06:00.939036  165060 cri.go:89] found id: ""
	I0617 12:06:00.939046  165060 logs.go:276] 1 containers: [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d]
	I0617 12:06:00.939114  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:00.943170  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:06:00.943234  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:06:00.982923  165060 cri.go:89] found id: "c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:06:00.982953  165060 cri.go:89] found id: ""
	I0617 12:06:00.982964  165060 logs.go:276] 1 containers: [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d]
	I0617 12:06:00.983034  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:00.987696  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:06:00.987769  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:06:01.033789  165060 cri.go:89] found id: "2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
	I0617 12:06:01.033825  165060 cri.go:89] found id: ""
	I0617 12:06:01.033837  165060 logs.go:276] 1 containers: [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079]
	I0617 12:06:01.033901  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:01.038800  165060 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:06:01.038861  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:06:01.077797  165060 cri.go:89] found id: ""
	I0617 12:06:01.077834  165060 logs.go:276] 0 containers: []
	W0617 12:06:01.077846  165060 logs.go:278] No container was found matching "kindnet"
	I0617 12:06:01.077855  165060 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:06:01.077916  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:06:01.116275  165060 cri.go:89] found id: "02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:06:01.116296  165060 cri.go:89] found id: "7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
	I0617 12:06:01.116303  165060 cri.go:89] found id: ""
	I0617 12:06:01.116311  165060 logs.go:276] 2 containers: [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36]
	I0617 12:06:01.116365  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:01.121088  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:01.125393  165060 logs.go:123] Gathering logs for container status ...
	I0617 12:06:01.125417  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:01.170817  165060 logs.go:123] Gathering logs for kubelet ...
	I0617 12:06:01.170844  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:06:01.223072  165060 logs.go:123] Gathering logs for kube-apiserver [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3] ...
	I0617 12:06:01.223114  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:06:01.269212  165060 logs.go:123] Gathering logs for kube-scheduler [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d] ...
	I0617 12:06:01.269245  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:06:01.313518  165060 logs.go:123] Gathering logs for kube-proxy [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d] ...
	I0617 12:06:01.313557  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:06:01.357935  165060 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:06:01.357965  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:06:01.784493  165060 logs.go:123] Gathering logs for storage-provisioner [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92] ...
	I0617 12:06:01.784542  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:06:01.825824  165060 logs.go:123] Gathering logs for storage-provisioner [7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36] ...
	I0617 12:06:01.825851  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
	I0617 12:06:01.866216  165060 logs.go:123] Gathering logs for dmesg ...
	I0617 12:06:01.866252  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:06:01.881292  165060 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:06:01.881316  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:06:02.000026  165060 logs.go:123] Gathering logs for etcd [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9] ...
	I0617 12:06:02.000063  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:06:02.043491  165060 logs.go:123] Gathering logs for coredns [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7] ...
	I0617 12:06:02.043524  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:06:02.081957  165060 logs.go:123] Gathering logs for kube-controller-manager [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079] ...
	I0617 12:06:02.081984  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
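The burst above is minikube's log collector at work: container IDs are resolved with `crictl ps -a --quiet --name=<component>`, then each container's logs are pulled with `sudo /usr/bin/crictl logs --tail 400 <id>`, while the kubelet and CRI-O logs come from `journalctl -u <unit> -n 400`. A minimal Go sketch of that pattern, assuming `crictl` and `journalctl` are on PATH; the container ID below is a placeholder, not one from this run:

```go
package main

import (
	"fmt"
	"os/exec"
)

// gatherContainerLogs mirrors the "crictl logs --tail 400 <id>" calls seen in the log.
func gatherContainerLogs(id string) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

// gatherUnitLogs mirrors the "journalctl -u <unit> -n 400" calls used for kubelet and CRI-O.
func gatherUnitLogs(unit string) (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").CombinedOutput()
	return string(out), err
}

func main() {
	// Placeholder ID; real IDs come from "crictl ps -a --quiet --name=<component>".
	if logs, err := gatherContainerLogs("<container-id>"); err == nil {
		fmt.Println(logs)
	}
	if logs, err := gatherUnitLogs("kubelet"); err == nil {
		fmt.Println(logs)
	}
}
```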
	I0617 12:05:59.835769  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:02.332739  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:03.080903  165698 out.go:204]   - Generating certificates and keys ...
	I0617 12:06:03.081006  165698 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0617 12:06:03.081080  165698 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0617 12:06:03.081168  165698 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0617 12:06:03.081250  165698 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0617 12:06:03.081377  165698 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0617 12:06:03.081457  165698 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0617 12:06:03.082418  165698 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0617 12:06:03.083003  165698 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0617 12:06:03.083917  165698 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0617 12:06:03.084820  165698 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0617 12:06:03.085224  165698 kubeadm.go:309] [certs] Using the existing "sa" key
	I0617 12:06:03.085307  165698 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0617 12:06:03.203342  165698 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0617 12:06:03.430428  165698 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0617 12:06:03.570422  165698 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0617 12:06:03.772092  165698 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0617 12:06:03.793105  165698 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0617 12:06:03.793206  165698 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0617 12:06:03.793261  165698 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0617 12:06:03.919738  165698 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0617 12:06:04.333408  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:06.333963  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:03.921593  165698 out.go:204]   - Booting up control plane ...
	I0617 12:06:03.921708  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0617 12:06:03.928168  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0617 12:06:03.928279  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0617 12:06:03.937197  165698 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0617 12:06:03.939967  165698 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0617 12:06:04.644102  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:06:04.648733  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 200:
	ok
	I0617 12:06:04.649862  165060 api_server.go:141] control plane version: v1.30.1
	I0617 12:06:04.649894  165060 api_server.go:131] duration metric: took 3.86829173s to wait for apiserver health ...
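Here the verifier declares the apiserver healthy once `GET /healthz` over HTTPS returns 200 with body `ok`, and records how long the wait took. A minimal sketch of such a probe; note the real client trusts the cluster CA, so the `InsecureSkipVerify` below is only an illustration shortcut, and the URL is the one from this log:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// apiserverHealthy reports whether GET <url> returns 200 with body "ok".
func apiserverHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: minikube validates against the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok", nil
}

func main() {
	start := time.Now()
	healthy, err := apiserverHealthy("https://192.168.72.199:8443/healthz")
	fmt.Println(healthy, err, "took", time.Since(start))
}
```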
	I0617 12:06:04.649905  165060 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:06:04.649936  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:06:04.649997  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:06:04.688904  165060 cri.go:89] found id: "5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:06:04.688923  165060 cri.go:89] found id: ""
	I0617 12:06:04.688931  165060 logs.go:276] 1 containers: [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3]
	I0617 12:06:04.688975  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.695049  165060 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:06:04.695110  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:06:04.730292  165060 cri.go:89] found id: "fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:06:04.730314  165060 cri.go:89] found id: ""
	I0617 12:06:04.730322  165060 logs.go:276] 1 containers: [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9]
	I0617 12:06:04.730373  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.734432  165060 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:06:04.734486  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:06:04.771401  165060 cri.go:89] found id: "c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:06:04.771418  165060 cri.go:89] found id: ""
	I0617 12:06:04.771426  165060 logs.go:276] 1 containers: [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7]
	I0617 12:06:04.771496  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.775822  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:06:04.775876  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:06:04.816111  165060 cri.go:89] found id: "157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:06:04.816131  165060 cri.go:89] found id: ""
	I0617 12:06:04.816139  165060 logs.go:276] 1 containers: [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d]
	I0617 12:06:04.816185  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.820614  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:06:04.820672  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:06:04.865387  165060 cri.go:89] found id: "c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:06:04.865411  165060 cri.go:89] found id: ""
	I0617 12:06:04.865421  165060 logs.go:276] 1 containers: [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d]
	I0617 12:06:04.865479  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.870192  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:06:04.870263  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:06:04.912698  165060 cri.go:89] found id: "2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
	I0617 12:06:04.912723  165060 cri.go:89] found id: ""
	I0617 12:06:04.912734  165060 logs.go:276] 1 containers: [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079]
	I0617 12:06:04.912796  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.917484  165060 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:06:04.917563  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:06:04.954076  165060 cri.go:89] found id: ""
	I0617 12:06:04.954109  165060 logs.go:276] 0 containers: []
	W0617 12:06:04.954120  165060 logs.go:278] No container was found matching "kindnet"
	I0617 12:06:04.954129  165060 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:06:04.954196  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:06:04.995832  165060 cri.go:89] found id: "02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:06:04.995858  165060 cri.go:89] found id: "7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
	I0617 12:06:04.995862  165060 cri.go:89] found id: ""
	I0617 12:06:04.995869  165060 logs.go:276] 2 containers: [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36]
	I0617 12:06:04.995928  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:05.000741  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:05.004995  165060 logs.go:123] Gathering logs for storage-provisioner [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92] ...
	I0617 12:06:05.005026  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:06:05.040651  165060 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:06:05.040692  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:06:05.461644  165060 logs.go:123] Gathering logs for container status ...
	I0617 12:06:05.461685  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:05.508706  165060 logs.go:123] Gathering logs for kubelet ...
	I0617 12:06:05.508733  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:06:05.562418  165060 logs.go:123] Gathering logs for kube-apiserver [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3] ...
	I0617 12:06:05.562461  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:06:05.606489  165060 logs.go:123] Gathering logs for etcd [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9] ...
	I0617 12:06:05.606527  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:06:05.651719  165060 logs.go:123] Gathering logs for coredns [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7] ...
	I0617 12:06:05.651753  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:06:05.688736  165060 logs.go:123] Gathering logs for kube-proxy [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d] ...
	I0617 12:06:05.688772  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:06:05.730649  165060 logs.go:123] Gathering logs for dmesg ...
	I0617 12:06:05.730679  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:06:05.745482  165060 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:06:05.745511  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:06:05.849002  165060 logs.go:123] Gathering logs for kube-scheduler [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d] ...
	I0617 12:06:05.849025  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:06:05.890802  165060 logs.go:123] Gathering logs for kube-controller-manager [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079] ...
	I0617 12:06:05.890836  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
	I0617 12:06:05.946444  165060 logs.go:123] Gathering logs for storage-provisioner [7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36] ...
	I0617 12:06:05.946474  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
	I0617 12:06:04.332977  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:06.834683  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:08.489561  165060 system_pods.go:59] 8 kube-system pods found
	I0617 12:06:08.489593  165060 system_pods.go:61] "coredns-7db6d8ff4d-9bbjg" [1ba0eee5-436e-4c83-b5ce-3c907d66b641] Running
	I0617 12:06:08.489597  165060 system_pods.go:61] "etcd-embed-certs-136195" [6dc81a80-c56b-4517-af82-c450cf9578f5] Running
	I0617 12:06:08.489601  165060 system_pods.go:61] "kube-apiserver-embed-certs-136195" [bd61a715-2471-4dca-aa48-a157531ebd6b] Running
	I0617 12:06:08.489605  165060 system_pods.go:61] "kube-controller-manager-embed-certs-136195" [194db4b0-75c2-4905-8e4d-813185497b51] Running
	I0617 12:06:08.489607  165060 system_pods.go:61] "kube-proxy-25d5n" [52b6d09a-899f-40c4-b1f3-7842ae755165] Running
	I0617 12:06:08.489610  165060 system_pods.go:61] "kube-scheduler-embed-certs-136195" [b04d3798-f465-4f82-9ec7-777ea62d5b94] Running
	I0617 12:06:08.489616  165060 system_pods.go:61] "metrics-server-569cc877fc-dmhfs" [31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:06:08.489620  165060 system_pods.go:61] "storage-provisioner" [4b04a38a-5006-4496-a24d-0940029193de] Running
	I0617 12:06:08.489626  165060 system_pods.go:74] duration metric: took 3.839715717s to wait for pod list to return data ...
	I0617 12:06:08.489633  165060 default_sa.go:34] waiting for default service account to be created ...
	I0617 12:06:08.491984  165060 default_sa.go:45] found service account: "default"
	I0617 12:06:08.492007  165060 default_sa.go:55] duration metric: took 2.365306ms for default service account to be created ...
	I0617 12:06:08.492014  165060 system_pods.go:116] waiting for k8s-apps to be running ...
	I0617 12:06:08.497834  165060 system_pods.go:86] 8 kube-system pods found
	I0617 12:06:08.497865  165060 system_pods.go:89] "coredns-7db6d8ff4d-9bbjg" [1ba0eee5-436e-4c83-b5ce-3c907d66b641] Running
	I0617 12:06:08.497873  165060 system_pods.go:89] "etcd-embed-certs-136195" [6dc81a80-c56b-4517-af82-c450cf9578f5] Running
	I0617 12:06:08.497880  165060 system_pods.go:89] "kube-apiserver-embed-certs-136195" [bd61a715-2471-4dca-aa48-a157531ebd6b] Running
	I0617 12:06:08.497887  165060 system_pods.go:89] "kube-controller-manager-embed-certs-136195" [194db4b0-75c2-4905-8e4d-813185497b51] Running
	I0617 12:06:08.497891  165060 system_pods.go:89] "kube-proxy-25d5n" [52b6d09a-899f-40c4-b1f3-7842ae755165] Running
	I0617 12:06:08.497899  165060 system_pods.go:89] "kube-scheduler-embed-certs-136195" [b04d3798-f465-4f82-9ec7-777ea62d5b94] Running
	I0617 12:06:08.497905  165060 system_pods.go:89] "metrics-server-569cc877fc-dmhfs" [31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:06:08.497914  165060 system_pods.go:89] "storage-provisioner" [4b04a38a-5006-4496-a24d-0940029193de] Running
	I0617 12:06:08.497921  165060 system_pods.go:126] duration metric: took 5.901391ms to wait for k8s-apps to be running ...
	I0617 12:06:08.497927  165060 system_svc.go:44] waiting for kubelet service to be running ....
	I0617 12:06:08.497970  165060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:06:08.520136  165060 system_svc.go:56] duration metric: took 22.203601ms WaitForService to wait for kubelet
	I0617 12:06:08.520159  165060 kubeadm.go:576] duration metric: took 4m22.614222011s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 12:06:08.520178  165060 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:06:08.522704  165060 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:06:08.522741  165060 node_conditions.go:123] node cpu capacity is 2
	I0617 12:06:08.522758  165060 node_conditions.go:105] duration metric: took 2.57391ms to run NodePressure ...
	I0617 12:06:08.522773  165060 start.go:240] waiting for startup goroutines ...
	I0617 12:06:08.522787  165060 start.go:245] waiting for cluster config update ...
	I0617 12:06:08.522803  165060 start.go:254] writing updated cluster config ...
	I0617 12:06:08.523139  165060 ssh_runner.go:195] Run: rm -f paused
	I0617 12:06:08.577942  165060 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0617 12:06:08.579946  165060 out.go:177] * Done! kubectl is now configured to use "embed-certs-136195" cluster and "default" namespace by default
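The closing lines report the kubectl/cluster version pair and their "minor skew" (1.30.2 vs 1.30.1, skew 0); minikube only warns when the client's minor version drifts too far from the cluster's. A rough sketch of computing that skew from version strings, using a hypothetical helper rather than minikube's actual implementation:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference of the minor components of two
// "major.minor.patch" version strings, e.g. minorSkew("1.30.2", "1.30.1") == 0.
func minorSkew(client, cluster string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("unexpected version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	c, err := minor(client)
	if err != nil {
		return 0, err
	}
	s, err := minor(cluster)
	if err != nil {
		return 0, err
	}
	if c > s {
		return c - s, nil
	}
	return s - c, nil
}

func main() {
	skew, _ := minorSkew("1.30.2", "1.30.1")
	fmt.Println("minor skew:", skew) // 0
}
```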
	I0617 12:06:08.334463  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:10.335642  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:09.331628  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:11.332586  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:13.332703  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:12.834827  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:15.334721  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:15.333004  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:17.834357  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:17.833756  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:19.835364  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:22.333742  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:20.332127  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:22.832111  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:24.333945  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:26.335021  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:25.332366  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:27.835364  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:28.833758  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:31.334155  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:29.835500  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:32.332236  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:33.833599  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:35.834190  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:34.831122  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:36.833202  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:38.334352  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:40.335399  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:40.335423  166103 pod_ready.go:81] duration metric: took 4m0.008367222s for pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace to be "Ready" ...
	E0617 12:06:40.335433  166103 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0617 12:06:40.335441  166103 pod_ready.go:38] duration metric: took 4m7.419505963s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
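The `pod_ready` wait above gives the metrics-server pod 4m0s to reach Ready, then gives up with `context deadline exceeded`. The underlying pattern is a poll-until-deadline loop; a minimal stdlib-only sketch, with `podIsReady` left as a hypothetical stand-in for the real API status lookup:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// podIsReady is a hypothetical stand-in for checking the pod's Ready condition
// through the Kubernetes API; it is not minikube's actual implementation.
func podIsReady(ctx context.Context) (bool, error) {
	return false, nil // pretend the pod never becomes Ready, as in the log above
}

// waitForPodReady polls podIsReady every interval until it succeeds or timeout elapses.
func waitForPodReady(timeout, interval time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		ready, err := podIsReady(ctx)
		if err != nil {
			return err
		}
		if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // "context deadline exceeded", as recorded in the log
		case <-ticker.C:
		}
	}
}

func main() {
	// Short timeout for demonstration; the test above waited 4m0s.
	fmt.Println(waitForPodReady(3*time.Second, 500*time.Millisecond))
}
```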
	I0617 12:06:40.335475  166103 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:06:40.335505  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:06:40.335556  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:06:40.400354  166103 cri.go:89] found id: "5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:40.400384  166103 cri.go:89] found id: ""
	I0617 12:06:40.400394  166103 logs.go:276] 1 containers: [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b]
	I0617 12:06:40.400453  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.405124  166103 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:06:40.405186  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:06:40.440583  166103 cri.go:89] found id: "8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:40.440610  166103 cri.go:89] found id: ""
	I0617 12:06:40.440619  166103 logs.go:276] 1 containers: [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862]
	I0617 12:06:40.440665  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.445086  166103 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:06:40.445141  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:06:40.489676  166103 cri.go:89] found id: "26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:40.489698  166103 cri.go:89] found id: ""
	I0617 12:06:40.489706  166103 logs.go:276] 1 containers: [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323]
	I0617 12:06:40.489752  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.494402  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:06:40.494514  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:06:40.535486  166103 cri.go:89] found id: "2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
	I0617 12:06:40.535517  166103 cri.go:89] found id: ""
	I0617 12:06:40.535527  166103 logs.go:276] 1 containers: [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b]
	I0617 12:06:40.535589  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.543265  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:06:40.543330  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:06:40.579564  166103 cri.go:89] found id: "63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:40.579588  166103 cri.go:89] found id: ""
	I0617 12:06:40.579598  166103 logs.go:276] 1 containers: [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da]
	I0617 12:06:40.579658  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.583865  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:06:40.583928  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:06:40.642408  166103 cri.go:89] found id: "36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:40.642435  166103 cri.go:89] found id: ""
	I0617 12:06:40.642445  166103 logs.go:276] 1 containers: [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685]
	I0617 12:06:40.642509  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.647892  166103 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:06:40.647959  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:06:40.698654  166103 cri.go:89] found id: ""
	I0617 12:06:40.698686  166103 logs.go:276] 0 containers: []
	W0617 12:06:40.698696  166103 logs.go:278] No container was found matching "kindnet"
	I0617 12:06:40.698704  166103 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:06:40.698768  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:06:40.749641  166103 cri.go:89] found id: "adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:40.749663  166103 cri.go:89] found id: "e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:40.749668  166103 cri.go:89] found id: ""
	I0617 12:06:40.749678  166103 logs.go:276] 2 containers: [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc]
	I0617 12:06:40.749742  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.754926  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.760126  166103 logs.go:123] Gathering logs for container status ...
	I0617 12:06:40.760152  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:40.804119  166103 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:06:40.804159  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:06:40.942459  166103 logs.go:123] Gathering logs for etcd [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862] ...
	I0617 12:06:40.942495  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:40.994721  166103 logs.go:123] Gathering logs for coredns [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323] ...
	I0617 12:06:40.994761  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:41.037005  166103 logs.go:123] Gathering logs for kube-scheduler [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b] ...
	I0617 12:06:41.037040  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
	I0617 12:06:41.080715  166103 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:06:41.080751  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:06:41.606478  166103 logs.go:123] Gathering logs for storage-provisioner [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195] ...
	I0617 12:06:41.606516  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:41.643963  166103 logs.go:123] Gathering logs for storage-provisioner [e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc] ...
	I0617 12:06:41.644003  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:41.683405  166103 logs.go:123] Gathering logs for kubelet ...
	I0617 12:06:41.683443  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:06:41.737365  166103 logs.go:123] Gathering logs for dmesg ...
	I0617 12:06:41.737400  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:06:41.752552  166103 logs.go:123] Gathering logs for kube-apiserver [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b] ...
	I0617 12:06:41.752582  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:41.804447  166103 logs.go:123] Gathering logs for kube-proxy [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da] ...
	I0617 12:06:41.804480  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:41.847266  166103 logs.go:123] Gathering logs for kube-controller-manager [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685] ...
	I0617 12:06:41.847302  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:39.333111  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:41.836327  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:44.408776  166103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:06:44.427500  166103 api_server.go:72] duration metric: took 4m19.25316479s to wait for apiserver process to appear ...
	I0617 12:06:44.427531  166103 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:06:44.427577  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:06:44.427634  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:06:44.466379  166103 cri.go:89] found id: "5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:44.466408  166103 cri.go:89] found id: ""
	I0617 12:06:44.466418  166103 logs.go:276] 1 containers: [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b]
	I0617 12:06:44.466481  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.470832  166103 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:06:44.470901  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:06:44.511689  166103 cri.go:89] found id: "8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:44.511713  166103 cri.go:89] found id: ""
	I0617 12:06:44.511722  166103 logs.go:276] 1 containers: [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862]
	I0617 12:06:44.511769  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.516221  166103 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:06:44.516303  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:06:44.560612  166103 cri.go:89] found id: "26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:44.560634  166103 cri.go:89] found id: ""
	I0617 12:06:44.560642  166103 logs.go:276] 1 containers: [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323]
	I0617 12:06:44.560695  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.564998  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:06:44.565068  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:06:44.600133  166103 cri.go:89] found id: "2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
	I0617 12:06:44.600155  166103 cri.go:89] found id: ""
	I0617 12:06:44.600164  166103 logs.go:276] 1 containers: [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b]
	I0617 12:06:44.600220  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.605431  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:06:44.605494  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:06:44.648647  166103 cri.go:89] found id: "63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:44.648678  166103 cri.go:89] found id: ""
	I0617 12:06:44.648688  166103 logs.go:276] 1 containers: [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da]
	I0617 12:06:44.648758  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.653226  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:06:44.653307  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:06:44.701484  166103 cri.go:89] found id: "36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:44.701508  166103 cri.go:89] found id: ""
	I0617 12:06:44.701516  166103 logs.go:276] 1 containers: [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685]
	I0617 12:06:44.701572  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.707827  166103 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:06:44.707890  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:06:44.752362  166103 cri.go:89] found id: ""
	I0617 12:06:44.752391  166103 logs.go:276] 0 containers: []
	W0617 12:06:44.752402  166103 logs.go:278] No container was found matching "kindnet"
	I0617 12:06:44.752410  166103 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:06:44.752473  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:06:44.798926  166103 cri.go:89] found id: "adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:44.798955  166103 cri.go:89] found id: "e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:44.798961  166103 cri.go:89] found id: ""
	I0617 12:06:44.798970  166103 logs.go:276] 2 containers: [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc]
	I0617 12:06:44.799038  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.804702  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.810673  166103 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:06:44.810702  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:06:44.939596  166103 logs.go:123] Gathering logs for etcd [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862] ...
	I0617 12:06:44.939627  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:44.987902  166103 logs.go:123] Gathering logs for coredns [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323] ...
	I0617 12:06:44.987936  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:45.023931  166103 logs.go:123] Gathering logs for kube-proxy [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da] ...
	I0617 12:06:45.023962  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:45.060432  166103 logs.go:123] Gathering logs for storage-provisioner [e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc] ...
	I0617 12:06:45.060468  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:45.095643  166103 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:06:45.095679  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:06:45.553973  166103 logs.go:123] Gathering logs for kubelet ...
	I0617 12:06:45.554018  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:06:45.611997  166103 logs.go:123] Gathering logs for dmesg ...
	I0617 12:06:45.612036  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:06:45.626973  166103 logs.go:123] Gathering logs for container status ...
	I0617 12:06:45.627002  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:45.671119  166103 logs.go:123] Gathering logs for kube-controller-manager [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685] ...
	I0617 12:06:45.671151  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:45.728097  166103 logs.go:123] Gathering logs for storage-provisioner [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195] ...
	I0617 12:06:45.728133  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:45.765586  166103 logs.go:123] Gathering logs for kube-apiserver [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b] ...
	I0617 12:06:45.765615  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:45.818347  166103 logs.go:123] Gathering logs for kube-scheduler [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b] ...
	I0617 12:06:45.818387  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
	I0617 12:06:43.941225  165698 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0617 12:06:43.941341  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:06:43.941612  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
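The `[kubelet-check]` messages come from kubeadm probing the kubelet's local health endpoint, exactly the `curl -sSL http://localhost:10248/healthz` quoted above; `connection refused` means nothing is listening on port 10248 yet. A tiny sketch of reproducing that probe by hand on the node:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		// A "connection refused" here matches the kubelet-check failure in the log.
		fmt.Println("kubelet healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("kubelet healthz status:", resp.Status)
}
```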
	I0617 12:06:44.331481  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:46.831820  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:48.362826  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:06:48.366936  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 200:
	ok
	I0617 12:06:48.367973  166103 api_server.go:141] control plane version: v1.30.1
	I0617 12:06:48.367992  166103 api_server.go:131] duration metric: took 3.940452539s to wait for apiserver health ...
	I0617 12:06:48.367999  166103 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:06:48.368021  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:06:48.368066  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:06:48.404797  166103 cri.go:89] found id: "5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:48.404819  166103 cri.go:89] found id: ""
	I0617 12:06:48.404828  166103 logs.go:276] 1 containers: [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b]
	I0617 12:06:48.404887  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.409105  166103 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:06:48.409162  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:06:48.456233  166103 cri.go:89] found id: "8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:48.456266  166103 cri.go:89] found id: ""
	I0617 12:06:48.456277  166103 logs.go:276] 1 containers: [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862]
	I0617 12:06:48.456336  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.460550  166103 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:06:48.460625  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:06:48.498447  166103 cri.go:89] found id: "26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:48.498472  166103 cri.go:89] found id: ""
	I0617 12:06:48.498481  166103 logs.go:276] 1 containers: [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323]
	I0617 12:06:48.498564  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.503826  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:06:48.503906  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:06:48.554405  166103 cri.go:89] found id: "2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
	I0617 12:06:48.554435  166103 cri.go:89] found id: ""
	I0617 12:06:48.554446  166103 logs.go:276] 1 containers: [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b]
	I0617 12:06:48.554504  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.559175  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:06:48.559240  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:06:48.596764  166103 cri.go:89] found id: "63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:48.596791  166103 cri.go:89] found id: ""
	I0617 12:06:48.596801  166103 logs.go:276] 1 containers: [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da]
	I0617 12:06:48.596863  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.601197  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:06:48.601260  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:06:48.654027  166103 cri.go:89] found id: "36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:48.654053  166103 cri.go:89] found id: ""
	I0617 12:06:48.654061  166103 logs.go:276] 1 containers: [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685]
	I0617 12:06:48.654113  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.659492  166103 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:06:48.659557  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:06:48.706749  166103 cri.go:89] found id: ""
	I0617 12:06:48.706777  166103 logs.go:276] 0 containers: []
	W0617 12:06:48.706786  166103 logs.go:278] No container was found matching "kindnet"
	I0617 12:06:48.706794  166103 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:06:48.706859  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:06:48.750556  166103 cri.go:89] found id: "adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:48.750588  166103 cri.go:89] found id: "e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:48.750594  166103 cri.go:89] found id: ""
	I0617 12:06:48.750607  166103 logs.go:276] 2 containers: [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc]
	I0617 12:06:48.750671  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.755368  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.760128  166103 logs.go:123] Gathering logs for kube-apiserver [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b] ...
	I0617 12:06:48.760154  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:48.802187  166103 logs.go:123] Gathering logs for etcd [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862] ...
	I0617 12:06:48.802224  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:48.861041  166103 logs.go:123] Gathering logs for kube-controller-manager [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685] ...
	I0617 12:06:48.861076  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:48.917864  166103 logs.go:123] Gathering logs for storage-provisioner [e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc] ...
	I0617 12:06:48.917902  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:48.963069  166103 logs.go:123] Gathering logs for container status ...
	I0617 12:06:48.963099  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:49.012109  166103 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:06:49.012149  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:06:49.119880  166103 logs.go:123] Gathering logs for dmesg ...
	I0617 12:06:49.119915  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:06:49.136461  166103 logs.go:123] Gathering logs for coredns [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323] ...
	I0617 12:06:49.136497  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:49.177339  166103 logs.go:123] Gathering logs for kube-scheduler [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b] ...
	I0617 12:06:49.177377  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
	I0617 12:06:49.219101  166103 logs.go:123] Gathering logs for kube-proxy [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da] ...
	I0617 12:06:49.219135  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:49.256646  166103 logs.go:123] Gathering logs for storage-provisioner [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195] ...
	I0617 12:06:49.256687  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:49.302208  166103 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:06:49.302243  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:06:49.653713  166103 logs.go:123] Gathering logs for kubelet ...
	I0617 12:06:49.653758  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:06:52.217069  166103 system_pods.go:59] 8 kube-system pods found
	I0617 12:06:52.217102  166103 system_pods.go:61] "coredns-7db6d8ff4d-mnw24" [1e6c4ff3-f0dc-43da-abd8-baaed7dca40c] Running
	I0617 12:06:52.217107  166103 system_pods.go:61] "etcd-default-k8s-diff-port-991309" [820a4f27-cf83-4edb-a2ea-edba6673d851] Running
	I0617 12:06:52.217111  166103 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-991309" [26e6c19d-6f70-4924-83f5-563c8508c9e3] Running
	I0617 12:06:52.217115  166103 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-991309" [01e7c468-98a6-48f3-a158-59e97fa8279c] Running
	I0617 12:06:52.217119  166103 system_pods.go:61] "kube-proxy-jn5kp" [d6935148-7ee8-4655-8327-9f1ee4c933de] Running
	I0617 12:06:52.217122  166103 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-991309" [53ecd22c-05cf-48a5-b7e5-925392085f7a] Running
	I0617 12:06:52.217128  166103 system_pods.go:61] "metrics-server-569cc877fc-n2svp" [5b637d97-3183-4324-98cf-dd69a2968578] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:06:52.217134  166103 system_pods.go:61] "storage-provisioner" [92b20aec-29c2-4256-86be-7f58f66585dd] Running
	I0617 12:06:52.217145  166103 system_pods.go:74] duration metric: took 3.849140024s to wait for pod list to return data ...
	I0617 12:06:52.217152  166103 default_sa.go:34] waiting for default service account to be created ...
	I0617 12:06:52.219308  166103 default_sa.go:45] found service account: "default"
	I0617 12:06:52.219330  166103 default_sa.go:55] duration metric: took 2.172323ms for default service account to be created ...
	I0617 12:06:52.219339  166103 system_pods.go:116] waiting for k8s-apps to be running ...
	I0617 12:06:52.224239  166103 system_pods.go:86] 8 kube-system pods found
	I0617 12:06:52.224265  166103 system_pods.go:89] "coredns-7db6d8ff4d-mnw24" [1e6c4ff3-f0dc-43da-abd8-baaed7dca40c] Running
	I0617 12:06:52.224270  166103 system_pods.go:89] "etcd-default-k8s-diff-port-991309" [820a4f27-cf83-4edb-a2ea-edba6673d851] Running
	I0617 12:06:52.224276  166103 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-991309" [26e6c19d-6f70-4924-83f5-563c8508c9e3] Running
	I0617 12:06:52.224280  166103 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-991309" [01e7c468-98a6-48f3-a158-59e97fa8279c] Running
	I0617 12:06:52.224284  166103 system_pods.go:89] "kube-proxy-jn5kp" [d6935148-7ee8-4655-8327-9f1ee4c933de] Running
	I0617 12:06:52.224288  166103 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-991309" [53ecd22c-05cf-48a5-b7e5-925392085f7a] Running
	I0617 12:06:52.224299  166103 system_pods.go:89] "metrics-server-569cc877fc-n2svp" [5b637d97-3183-4324-98cf-dd69a2968578] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:06:52.224305  166103 system_pods.go:89] "storage-provisioner" [92b20aec-29c2-4256-86be-7f58f66585dd] Running
	I0617 12:06:52.224319  166103 system_pods.go:126] duration metric: took 4.973603ms to wait for k8s-apps to be running ...
	I0617 12:06:52.224332  166103 system_svc.go:44] waiting for kubelet service to be running ....
	I0617 12:06:52.224380  166103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:06:52.241121  166103 system_svc.go:56] duration metric: took 16.776061ms WaitForService to wait for kubelet
	I0617 12:06:52.241156  166103 kubeadm.go:576] duration metric: took 4m27.066827271s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 12:06:52.241181  166103 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:06:52.245359  166103 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:06:52.245407  166103 node_conditions.go:123] node cpu capacity is 2
	I0617 12:06:52.245423  166103 node_conditions.go:105] duration metric: took 4.235898ms to run NodePressure ...
	I0617 12:06:52.245440  166103 start.go:240] waiting for startup goroutines ...
	I0617 12:06:52.245449  166103 start.go:245] waiting for cluster config update ...
	I0617 12:06:52.245462  166103 start.go:254] writing updated cluster config ...
	I0617 12:06:52.245969  166103 ssh_runner.go:195] Run: rm -f paused
	I0617 12:06:52.299326  166103 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0617 12:06:52.301413  166103 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-991309" cluster and "default" namespace by default
	I0617 12:06:48.942159  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:06:48.942434  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:06:48.835113  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:51.331395  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:53.331551  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:55.332455  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:57.835143  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:58.942977  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:06:58.943290  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:07:00.331823  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:07:02.332214  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:07:04.831284  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:07:06.832082  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:07:07.325414  164809 pod_ready.go:81] duration metric: took 4m0.000322555s for pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace to be "Ready" ...
	E0617 12:07:07.325446  164809 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0617 12:07:07.325464  164809 pod_ready.go:38] duration metric: took 4m12.035995337s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:07:07.325494  164809 kubeadm.go:591] duration metric: took 4m19.041266463s to restartPrimaryControlPlane
	W0617 12:07:07.325556  164809 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0617 12:07:07.325587  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0617 12:07:18.944149  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:07:18.944368  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:07:38.980378  164809 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.654762508s)
	I0617 12:07:38.980451  164809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:07:38.997845  164809 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:07:39.009456  164809 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:07:39.020407  164809 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:07:39.020430  164809 kubeadm.go:156] found existing configuration files:
	
	I0617 12:07:39.020472  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:07:39.030323  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:07:39.030376  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:07:39.040298  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:07:39.049715  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:07:39.049757  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:07:39.060493  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:07:39.069921  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:07:39.069973  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:07:39.080049  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:07:39.089524  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:07:39.089569  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 12:07:39.099082  164809 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0617 12:07:39.154963  164809 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0617 12:07:39.155083  164809 kubeadm.go:309] [preflight] Running pre-flight checks
	I0617 12:07:39.286616  164809 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0617 12:07:39.286809  164809 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0617 12:07:39.286977  164809 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0617 12:07:39.487542  164809 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0617 12:07:39.489554  164809 out.go:204]   - Generating certificates and keys ...
	I0617 12:07:39.489665  164809 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0617 12:07:39.489732  164809 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0617 12:07:39.489855  164809 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0617 12:07:39.489969  164809 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0617 12:07:39.490088  164809 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0617 12:07:39.490187  164809 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0617 12:07:39.490274  164809 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0617 12:07:39.490386  164809 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0617 12:07:39.490508  164809 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0617 12:07:39.490643  164809 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0617 12:07:39.490750  164809 kubeadm.go:309] [certs] Using the existing "sa" key
	I0617 12:07:39.490849  164809 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0617 12:07:39.565788  164809 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0617 12:07:39.643443  164809 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0617 12:07:39.765615  164809 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0617 12:07:39.851182  164809 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0617 12:07:40.041938  164809 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0617 12:07:40.042576  164809 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0617 12:07:40.045112  164809 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0617 12:07:40.047144  164809 out.go:204]   - Booting up control plane ...
	I0617 12:07:40.047265  164809 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0617 12:07:40.047374  164809 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0617 12:07:40.047995  164809 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0617 12:07:40.070163  164809 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0617 12:07:40.071308  164809 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0617 12:07:40.071415  164809 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0617 12:07:40.204578  164809 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0617 12:07:40.204698  164809 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0617 12:07:41.210782  164809 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.0065421s
	I0617 12:07:41.210902  164809 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0617 12:07:45.713194  164809 kubeadm.go:309] [api-check] The API server is healthy after 4.501871798s
	I0617 12:07:45.735311  164809 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0617 12:07:45.760405  164809 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0617 12:07:45.795429  164809 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0617 12:07:45.795770  164809 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-152830 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0617 12:07:45.816446  164809 kubeadm.go:309] [bootstrap-token] Using token: ryfqxd.olkegn8a1unpvnbq
	I0617 12:07:45.817715  164809 out.go:204]   - Configuring RBAC rules ...
	I0617 12:07:45.817890  164809 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0617 12:07:45.826422  164809 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0617 12:07:45.852291  164809 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0617 12:07:45.867538  164809 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0617 12:07:45.880697  164809 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0617 12:07:45.887707  164809 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0617 12:07:46.120211  164809 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0617 12:07:46.593168  164809 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0617 12:07:47.119377  164809 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0617 12:07:47.120840  164809 kubeadm.go:309] 
	I0617 12:07:47.120933  164809 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0617 12:07:47.120947  164809 kubeadm.go:309] 
	I0617 12:07:47.121057  164809 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0617 12:07:47.121069  164809 kubeadm.go:309] 
	I0617 12:07:47.121123  164809 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0617 12:07:47.124361  164809 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0617 12:07:47.124443  164809 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0617 12:07:47.124464  164809 kubeadm.go:309] 
	I0617 12:07:47.124538  164809 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0617 12:07:47.124550  164809 kubeadm.go:309] 
	I0617 12:07:47.124607  164809 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0617 12:07:47.124617  164809 kubeadm.go:309] 
	I0617 12:07:47.124724  164809 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0617 12:07:47.124838  164809 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0617 12:07:47.124938  164809 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0617 12:07:47.124949  164809 kubeadm.go:309] 
	I0617 12:07:47.125085  164809 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0617 12:07:47.125191  164809 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0617 12:07:47.125203  164809 kubeadm.go:309] 
	I0617 12:07:47.125343  164809 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ryfqxd.olkegn8a1unpvnbq \
	I0617 12:07:47.125479  164809 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a750c130b3df91ed6d57229f5a5d5a2ee0acd56a757f499599f368bc07dbf207 \
	I0617 12:07:47.125510  164809 kubeadm.go:309] 	--control-plane 
	I0617 12:07:47.125518  164809 kubeadm.go:309] 
	I0617 12:07:47.125616  164809 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0617 12:07:47.125627  164809 kubeadm.go:309] 
	I0617 12:07:47.125724  164809 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ryfqxd.olkegn8a1unpvnbq \
	I0617 12:07:47.125852  164809 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a750c130b3df91ed6d57229f5a5d5a2ee0acd56a757f499599f368bc07dbf207 
	I0617 12:07:47.126915  164809 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0617 12:07:47.126966  164809 cni.go:84] Creating CNI manager for ""
	I0617 12:07:47.126983  164809 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:07:47.128899  164809 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0617 12:07:47.130229  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0617 12:07:47.142301  164809 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0617 12:07:47.163380  164809 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0617 12:07:47.163500  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:47.163503  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-152830 minikube.k8s.io/updated_at=2024_06_17T12_07_47_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6 minikube.k8s.io/name=no-preload-152830 minikube.k8s.io/primary=true
	I0617 12:07:47.375089  164809 ops.go:34] apiserver oom_adj: -16
	I0617 12:07:47.375266  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:47.875477  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:48.375626  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:48.876185  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:49.375621  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:49.875597  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:50.376188  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:50.875983  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:51.375537  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:51.876321  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:52.375920  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:52.876348  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:53.375623  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:53.875369  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:54.375747  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:54.875581  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:55.376244  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:55.875866  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:56.376285  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:56.876228  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:57.375990  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:57.875392  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:58.946943  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:07:58.947220  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:07:58.947233  165698 kubeadm.go:309] 
	I0617 12:07:58.947316  165698 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0617 12:07:58.947393  165698 kubeadm.go:309] 		timed out waiting for the condition
	I0617 12:07:58.947406  165698 kubeadm.go:309] 
	I0617 12:07:58.947449  165698 kubeadm.go:309] 	This error is likely caused by:
	I0617 12:07:58.947528  165698 kubeadm.go:309] 		- The kubelet is not running
	I0617 12:07:58.947690  165698 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0617 12:07:58.947699  165698 kubeadm.go:309] 
	I0617 12:07:58.947860  165698 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0617 12:07:58.947924  165698 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0617 12:07:58.947976  165698 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0617 12:07:58.947991  165698 kubeadm.go:309] 
	I0617 12:07:58.948132  165698 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0617 12:07:58.948247  165698 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0617 12:07:58.948260  165698 kubeadm.go:309] 
	I0617 12:07:58.948406  165698 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0617 12:07:58.948539  165698 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0617 12:07:58.948639  165698 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0617 12:07:58.948740  165698 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0617 12:07:58.948750  165698 kubeadm.go:309] 
	I0617 12:07:58.949270  165698 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0617 12:07:58.949403  165698 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0617 12:07:58.949508  165698 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0617 12:07:58.949630  165698 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0617 12:07:58.949694  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0617 12:07:59.418622  165698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:07:59.435367  165698 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:07:59.449365  165698 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:07:59.449384  165698 kubeadm.go:156] found existing configuration files:
	
	I0617 12:07:59.449430  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:07:59.461411  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:07:59.461478  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:07:59.471262  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:07:59.480591  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:07:59.480640  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:07:59.490152  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:07:59.499248  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:07:59.499300  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:07:59.508891  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:07:59.518114  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:07:59.518152  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 12:07:59.528190  165698 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0617 12:07:59.592831  165698 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0617 12:07:59.592949  165698 kubeadm.go:309] [preflight] Running pre-flight checks
	I0617 12:07:59.752802  165698 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0617 12:07:59.752947  165698 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0617 12:07:59.753079  165698 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0617 12:07:59.984221  165698 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0617 12:07:58.375522  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:58.876221  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:59.375941  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:59.875924  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:08:00.063788  164809 kubeadm.go:1107] duration metric: took 12.900376954s to wait for elevateKubeSystemPrivileges
	W0617 12:08:00.063860  164809 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0617 12:08:00.063871  164809 kubeadm.go:393] duration metric: took 5m11.831587226s to StartCluster
	I0617 12:08:00.063895  164809 settings.go:142] acquiring lock: {Name:mkf6da6d5dcdf32cef469c2b75da17d11fa1e39e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:08:00.063996  164809 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 12:08:00.066593  164809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/kubeconfig: {Name:mkf81bd1831c0194f784e5c176b265c5061bea5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:08:00.066922  164809 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 12:08:00.068556  164809 out.go:177] * Verifying Kubernetes components...
	I0617 12:08:00.067029  164809 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0617 12:08:00.067131  164809 config.go:182] Loaded profile config "no-preload-152830": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:08:00.069969  164809 addons.go:69] Setting storage-provisioner=true in profile "no-preload-152830"
	I0617 12:08:00.069983  164809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:08:00.069992  164809 addons.go:69] Setting metrics-server=true in profile "no-preload-152830"
	I0617 12:08:00.070015  164809 addons.go:234] Setting addon metrics-server=true in "no-preload-152830"
	I0617 12:08:00.070014  164809 addons.go:234] Setting addon storage-provisioner=true in "no-preload-152830"
	W0617 12:08:00.070021  164809 addons.go:243] addon metrics-server should already be in state true
	W0617 12:08:00.070024  164809 addons.go:243] addon storage-provisioner should already be in state true
	I0617 12:08:00.070055  164809 host.go:66] Checking if "no-preload-152830" exists ...
	I0617 12:08:00.070057  164809 host.go:66] Checking if "no-preload-152830" exists ...
	I0617 12:08:00.069984  164809 addons.go:69] Setting default-storageclass=true in profile "no-preload-152830"
	I0617 12:08:00.070116  164809 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-152830"
	I0617 12:08:00.070426  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.070428  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.070443  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.070451  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.070475  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.070494  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.088451  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36453
	I0617 12:08:00.089105  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.089673  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.089700  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.090074  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.090673  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.090723  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.091118  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33445
	I0617 12:08:00.091150  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44157
	I0617 12:08:00.091756  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.091880  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.092306  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.092327  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.092470  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.092487  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.093006  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.093081  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.093169  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetState
	I0617 12:08:00.093683  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.093722  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.096819  164809 addons.go:234] Setting addon default-storageclass=true in "no-preload-152830"
	W0617 12:08:00.096839  164809 addons.go:243] addon default-storageclass should already be in state true
	I0617 12:08:00.096868  164809 host.go:66] Checking if "no-preload-152830" exists ...
	I0617 12:08:00.097223  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.097252  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.110063  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33623
	I0617 12:08:00.110843  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.111489  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.111509  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.112419  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.112633  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetState
	I0617 12:08:00.112859  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39555
	I0617 12:08:00.113245  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.113927  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.113946  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.114470  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.114758  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:08:00.116377  164809 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0617 12:08:00.115146  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.117266  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37965
	I0617 12:08:00.117647  164809 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0617 12:08:00.117663  164809 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0617 12:08:00.117674  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.117681  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:08:00.118504  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.119076  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.119091  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.119440  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.119755  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetState
	I0617 12:08:00.121396  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.121620  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:08:00.123146  164809 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:07:59.986165  165698 out.go:204]   - Generating certificates and keys ...
	I0617 12:07:59.986270  165698 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0617 12:07:59.986391  165698 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0617 12:07:59.986522  165698 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0617 12:07:59.986606  165698 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0617 12:07:59.986717  165698 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0617 12:07:59.986795  165698 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0617 12:07:59.986887  165698 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0617 12:07:59.986972  165698 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0617 12:07:59.987081  165698 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0617 12:07:59.987191  165698 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0617 12:07:59.987250  165698 kubeadm.go:309] [certs] Using the existing "sa" key
	I0617 12:07:59.987331  165698 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0617 12:08:00.155668  165698 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0617 12:08:00.303780  165698 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0617 12:08:00.369907  165698 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0617 12:08:00.506550  165698 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0617 12:08:00.529943  165698 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0617 12:08:00.531684  165698 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0617 12:08:00.531756  165698 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0617 12:08:00.667972  165698 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0617 12:08:00.122003  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:08:00.122146  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:08:00.124748  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.124895  164809 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:08:00.124914  164809 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0617 12:08:00.124934  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:08:00.124957  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:08:00.125142  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:08:00.125446  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:08:00.128559  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.128991  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:08:00.129011  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.129239  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:08:00.129434  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:08:00.129537  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:08:00.129640  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:08:00.142435  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39073
	I0617 12:08:00.142915  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.143550  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.143583  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.143946  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.144168  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetState
	I0617 12:08:00.145972  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:08:00.146165  164809 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0617 12:08:00.146178  164809 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0617 12:08:00.146196  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:08:00.149316  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.149720  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:08:00.149743  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.149926  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:08:00.150106  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:08:00.150273  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:08:00.150434  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:08:00.294731  164809 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:08:00.317727  164809 node_ready.go:35] waiting up to 6m0s for node "no-preload-152830" to be "Ready" ...
	I0617 12:08:00.346507  164809 node_ready.go:49] node "no-preload-152830" has status "Ready":"True"
	I0617 12:08:00.346533  164809 node_ready.go:38] duration metric: took 28.776898ms for node "no-preload-152830" to be "Ready" ...
	I0617 12:08:00.346544  164809 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:08:00.404097  164809 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gjt84" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:00.412303  164809 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0617 12:08:00.412325  164809 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0617 12:08:00.415269  164809 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:08:00.438024  164809 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0617 12:08:00.514528  164809 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0617 12:08:00.514561  164809 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0617 12:08:00.629109  164809 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:08:00.629141  164809 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0617 12:08:00.677084  164809 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:08:01.113979  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.114007  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.114432  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.114445  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.114507  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.114526  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.114536  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.114846  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.114866  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.117124  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.117141  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.117437  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.117457  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.117478  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.117496  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.117508  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.117821  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.117858  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.117882  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.125648  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.125668  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.125998  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.126020  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.126030  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.325217  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.325242  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.325579  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.325633  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.325669  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.325669  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.325682  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.325960  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.325977  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.326007  164809 addons.go:475] Verifying addon metrics-server=true in "no-preload-152830"
	I0617 12:08:01.326037  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.327744  164809 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0617 12:08:00.671036  165698 out.go:204]   - Booting up control plane ...
	I0617 12:08:00.671171  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0617 12:08:00.677241  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0617 12:08:00.678999  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0617 12:08:00.681119  165698 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0617 12:08:00.684535  165698 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0617 12:08:01.329155  164809 addons.go:510] duration metric: took 1.262127108s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0617 12:08:02.425731  164809 pod_ready.go:102] pod "coredns-7db6d8ff4d-gjt84" in "kube-system" namespace has status "Ready":"False"
	I0617 12:08:03.910467  164809 pod_ready.go:92] pod "coredns-7db6d8ff4d-gjt84" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:03.910494  164809 pod_ready.go:81] duration metric: took 3.506370946s for pod "coredns-7db6d8ff4d-gjt84" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.910508  164809 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vz7dg" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.916309  164809 pod_ready.go:92] pod "coredns-7db6d8ff4d-vz7dg" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:03.916331  164809 pod_ready.go:81] duration metric: took 5.814812ms for pod "coredns-7db6d8ff4d-vz7dg" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.916340  164809 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.920834  164809 pod_ready.go:92] pod "etcd-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:03.920862  164809 pod_ready.go:81] duration metric: took 4.51438ms for pod "etcd-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.920874  164809 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.924955  164809 pod_ready.go:92] pod "kube-apiserver-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:03.924973  164809 pod_ready.go:81] duration metric: took 4.09301ms for pod "kube-apiserver-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.924982  164809 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.929301  164809 pod_ready.go:92] pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:03.929318  164809 pod_ready.go:81] duration metric: took 4.33061ms for pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.929326  164809 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:04.308546  164809 pod_ready.go:92] pod "kube-scheduler-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:04.308570  164809 pod_ready.go:81] duration metric: took 379.237147ms for pod "kube-scheduler-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:04.308578  164809 pod_ready.go:38] duration metric: took 3.962022714s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:08:04.308594  164809 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:08:04.308644  164809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:08:04.327383  164809 api_server.go:72] duration metric: took 4.260420928s to wait for apiserver process to appear ...
	I0617 12:08:04.327408  164809 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:08:04.327426  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:08:04.332321  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 200:
	ok
	I0617 12:08:04.333390  164809 api_server.go:141] control plane version: v1.30.1
	I0617 12:08:04.333412  164809 api_server.go:131] duration metric: took 5.998312ms to wait for apiserver health ...
	I0617 12:08:04.333420  164809 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:08:04.512267  164809 system_pods.go:59] 9 kube-system pods found
	I0617 12:08:04.512298  164809 system_pods.go:61] "coredns-7db6d8ff4d-gjt84" [979c7339-3a4c-4bc8-8586-4d9da42339ae] Running
	I0617 12:08:04.512302  164809 system_pods.go:61] "coredns-7db6d8ff4d-vz7dg" [53c5188e-bc44-4aed-a989-ef3e2379c27b] Running
	I0617 12:08:04.512306  164809 system_pods.go:61] "etcd-no-preload-152830" [2b82d709-0776-470a-a538-f132b84be2e0] Running
	I0617 12:08:04.512310  164809 system_pods.go:61] "kube-apiserver-no-preload-152830" [e40c7c7b-b029-4f65-ac36-f4ff95eabc23] Running
	I0617 12:08:04.512313  164809 system_pods.go:61] "kube-controller-manager-no-preload-152830" [c2adec58-05a4-4993-b9a3-28f9ef519a63] Running
	I0617 12:08:04.512317  164809 system_pods.go:61] "kube-proxy-6c4hm" [a9830236-af96-437f-ad07-494b25f1a90e] Running
	I0617 12:08:04.512319  164809 system_pods.go:61] "kube-scheduler-no-preload-152830" [876671da-097b-43c1-9055-95c2ed7620aa] Running
	I0617 12:08:04.512325  164809 system_pods.go:61] "metrics-server-569cc877fc-zllzk" [e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:08:04.512329  164809 system_pods.go:61] "storage-provisioner" [b6cc7cdc-43f4-40c4-a202-5674fcdcedd0] Running
	I0617 12:08:04.512340  164809 system_pods.go:74] duration metric: took 178.914377ms to wait for pod list to return data ...
	I0617 12:08:04.512347  164809 default_sa.go:34] waiting for default service account to be created ...
	I0617 12:08:04.707834  164809 default_sa.go:45] found service account: "default"
	I0617 12:08:04.707874  164809 default_sa.go:55] duration metric: took 195.518331ms for default service account to be created ...
	I0617 12:08:04.707886  164809 system_pods.go:116] waiting for k8s-apps to be running ...
	I0617 12:08:04.916143  164809 system_pods.go:86] 9 kube-system pods found
	I0617 12:08:04.916173  164809 system_pods.go:89] "coredns-7db6d8ff4d-gjt84" [979c7339-3a4c-4bc8-8586-4d9da42339ae] Running
	I0617 12:08:04.916178  164809 system_pods.go:89] "coredns-7db6d8ff4d-vz7dg" [53c5188e-bc44-4aed-a989-ef3e2379c27b] Running
	I0617 12:08:04.916183  164809 system_pods.go:89] "etcd-no-preload-152830" [2b82d709-0776-470a-a538-f132b84be2e0] Running
	I0617 12:08:04.916187  164809 system_pods.go:89] "kube-apiserver-no-preload-152830" [e40c7c7b-b029-4f65-ac36-f4ff95eabc23] Running
	I0617 12:08:04.916191  164809 system_pods.go:89] "kube-controller-manager-no-preload-152830" [c2adec58-05a4-4993-b9a3-28f9ef519a63] Running
	I0617 12:08:04.916195  164809 system_pods.go:89] "kube-proxy-6c4hm" [a9830236-af96-437f-ad07-494b25f1a90e] Running
	I0617 12:08:04.916199  164809 system_pods.go:89] "kube-scheduler-no-preload-152830" [876671da-097b-43c1-9055-95c2ed7620aa] Running
	I0617 12:08:04.916211  164809 system_pods.go:89] "metrics-server-569cc877fc-zllzk" [e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:08:04.916219  164809 system_pods.go:89] "storage-provisioner" [b6cc7cdc-43f4-40c4-a202-5674fcdcedd0] Running
	I0617 12:08:04.916231  164809 system_pods.go:126] duration metric: took 208.336851ms to wait for k8s-apps to be running ...
	I0617 12:08:04.916245  164809 system_svc.go:44] waiting for kubelet service to be running ....
	I0617 12:08:04.916306  164809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:08:04.933106  164809 system_svc.go:56] duration metric: took 16.850122ms WaitForService to wait for kubelet
	I0617 12:08:04.933135  164809 kubeadm.go:576] duration metric: took 4.866178671s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 12:08:04.933159  164809 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:08:05.108094  164809 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:08:05.108120  164809 node_conditions.go:123] node cpu capacity is 2
	I0617 12:08:05.108133  164809 node_conditions.go:105] duration metric: took 174.968414ms to run NodePressure ...
	I0617 12:08:05.108148  164809 start.go:240] waiting for startup goroutines ...
	I0617 12:08:05.108160  164809 start.go:245] waiting for cluster config update ...
	I0617 12:08:05.108173  164809 start.go:254] writing updated cluster config ...
	I0617 12:08:05.108496  164809 ssh_runner.go:195] Run: rm -f paused
	I0617 12:08:05.160610  164809 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0617 12:08:05.162777  164809 out.go:177] * Done! kubectl is now configured to use "no-preload-152830" cluster and "default" namespace by default
	I0617 12:08:40.686610  165698 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0617 12:08:40.686950  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:08:40.687194  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:08:45.687594  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:08:45.687820  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:08:55.688285  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:08:55.688516  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:09:15.689306  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:09:15.689556  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:09:55.688872  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:09:55.689162  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:09:55.689206  165698 kubeadm.go:309] 
	I0617 12:09:55.689284  165698 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0617 12:09:55.689342  165698 kubeadm.go:309] 		timed out waiting for the condition
	I0617 12:09:55.689354  165698 kubeadm.go:309] 
	I0617 12:09:55.689418  165698 kubeadm.go:309] 	This error is likely caused by:
	I0617 12:09:55.689480  165698 kubeadm.go:309] 		- The kubelet is not running
	I0617 12:09:55.689632  165698 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0617 12:09:55.689657  165698 kubeadm.go:309] 
	I0617 12:09:55.689791  165698 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0617 12:09:55.689844  165698 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0617 12:09:55.689916  165698 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0617 12:09:55.689926  165698 kubeadm.go:309] 
	I0617 12:09:55.690059  165698 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0617 12:09:55.690140  165698 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0617 12:09:55.690159  165698 kubeadm.go:309] 
	I0617 12:09:55.690258  165698 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0617 12:09:55.690343  165698 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0617 12:09:55.690434  165698 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0617 12:09:55.690530  165698 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0617 12:09:55.690546  165698 kubeadm.go:309] 
	I0617 12:09:55.691495  165698 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0617 12:09:55.691595  165698 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0617 12:09:55.691708  165698 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0617 12:09:55.691787  165698 kubeadm.go:393] duration metric: took 7m57.151326537s to StartCluster
	I0617 12:09:55.691844  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:09:55.691904  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:09:55.746514  165698 cri.go:89] found id: ""
	I0617 12:09:55.746550  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.746563  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:09:55.746572  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:09:55.746636  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:09:55.789045  165698 cri.go:89] found id: ""
	I0617 12:09:55.789083  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.789095  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:09:55.789103  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:09:55.789169  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:09:55.829492  165698 cri.go:89] found id: ""
	I0617 12:09:55.829533  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.829542  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:09:55.829547  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:09:55.829614  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:09:55.865213  165698 cri.go:89] found id: ""
	I0617 12:09:55.865246  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.865262  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:09:55.865267  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:09:55.865318  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:09:55.904067  165698 cri.go:89] found id: ""
	I0617 12:09:55.904102  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.904113  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:09:55.904122  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:09:55.904187  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:09:55.938441  165698 cri.go:89] found id: ""
	I0617 12:09:55.938471  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.938478  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:09:55.938487  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:09:55.938538  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:09:55.975669  165698 cri.go:89] found id: ""
	I0617 12:09:55.975710  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.975723  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:09:55.975731  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:09:55.975804  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:09:56.015794  165698 cri.go:89] found id: ""
	I0617 12:09:56.015826  165698 logs.go:276] 0 containers: []
	W0617 12:09:56.015837  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:09:56.015851  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:09:56.015868  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:09:56.095533  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:09:56.095557  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:09:56.095573  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:09:56.220817  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:09:56.220857  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:09:56.261470  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:09:56.261507  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:09:56.325626  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:09:56.325673  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0617 12:09:56.345438  165698 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0617 12:09:56.345491  165698 out.go:239] * 
	W0617 12:09:56.345606  165698 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0617 12:09:56.345635  165698 out.go:239] * 
	W0617 12:09:56.346583  165698 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 12:09:56.349928  165698 out.go:177] 
	W0617 12:09:56.351067  165698 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0617 12:09:56.351127  165698 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0617 12:09:56.351157  165698 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0617 12:09:56.352487  165698 out.go:177] 
	
	
	==> CRI-O <==
	Jun 17 12:15:10 embed-certs-136195 crio[729]: time="2024-06-17 12:15:10.625740524Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718626510625694135,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8a9ac0ee-46af-407c-82e3-93c041a4f082 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:15:10 embed-certs-136195 crio[729]: time="2024-06-17 12:15:10.626270345Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=baaa6c58-3fba-4da8-af2b-4461e870e21c name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:15:10 embed-certs-136195 crio[729]: time="2024-06-17 12:15:10.626323693Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=baaa6c58-3fba-4da8-af2b-4461e870e21c name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:15:10 embed-certs-136195 crio[729]: time="2024-06-17 12:15:10.626508409Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:06fc8c454b52ae190c5e04968df2f4b778b273df8fd868edece76e82e1aa618e,PodSandboxId:0a2d4ee66975d8028039fe41452e1f2a3fb6571100f902ae428772608308b49d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1718625712494553608,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05a900e3-7714-4af1-ace9-eb03535da64a,},Annotations:map[string]string{io.kubernetes.container.hash: 95ceef43,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7,PodSandboxId:d8cdc6ff01f9171f3ad315ea48c690b50791c26874d90e0420b89d4f6c80d6d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718625711516797941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9bbjg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ba0eee5-436e-4c83-b5ce-3c907d66b641,},Annotations:map[string]string{io.kubernetes.container.hash: 9e5353ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92,PodSandboxId:da11ecedffb5492af81e1296b913c7844da92a6a33a7d5a0471890adac6ae58f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718625704434508824,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 4b04a38a-5006-4496-a24d-0940029193de,},Annotations:map[string]string{io.kubernetes.container.hash: bbb7a6ad,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36,PodSandboxId:da11ecedffb5492af81e1296b913c7844da92a6a33a7d5a0471890adac6ae58f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718625703747285662,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4b04a38a-5006-4496-a24d-0940029193de,},Annotations:map[string]string{io.kubernetes.container.hash: bbb7a6ad,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d,PodSandboxId:00f5ac611dd3173bd63432f2166f9b1c1515e0164ca44a072d3500c52b9ac720,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718625703706073514,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25d5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52b6d09a-899f-40c4-b1f3-7842ae755
165,},Annotations:map[string]string{io.kubernetes.container.hash: 23086a39,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d,PodSandboxId:2379b3f0e4841a43b541f5c15e5a70b752ffd5c366eed4c8b63518687ad29e5b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718625699292863859,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c01d6f22a5109112fd47d72421c8a716,},Annota
tions:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9,PodSandboxId:6c616f25aff9be709d7133636307a067a952b328aab78ddf130784fdc9d42883,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718625699295147383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6212321f2ec0f29eea9399e7bace28fb,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: b38de5c1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3,PodSandboxId:e946fe67c58448b571b7b99a84f90edf971ba4599fa70e58a8abcdff5d97d4ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718625699299463271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ffc4724b55482bd6618c26321a6ec7a,},Annotations:map[string]string{io.kubernetes.container.hash:
7db5fa0c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079,PodSandboxId:1243fcd3dd29fe226f3b2c1f3b185d07e05e8284a3a283c3adacfbb73c41a86c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718625699286383405,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd5b41313a2a936cb8a7ac0d4d722ccb,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=baaa6c58-3fba-4da8-af2b-4461e870e21c name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:15:10 embed-certs-136195 crio[729]: time="2024-06-17 12:15:10.664483036Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f4601bb7-b7da-4834-a1fd-453eb5bd0c10 name=/runtime.v1.RuntimeService/Version
	Jun 17 12:15:10 embed-certs-136195 crio[729]: time="2024-06-17 12:15:10.664591975Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f4601bb7-b7da-4834-a1fd-453eb5bd0c10 name=/runtime.v1.RuntimeService/Version
	Jun 17 12:15:10 embed-certs-136195 crio[729]: time="2024-06-17 12:15:10.665532936Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9c4c63aa-6c49-46ad-a446-d7900d11bb6c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:15:10 embed-certs-136195 crio[729]: time="2024-06-17 12:15:10.666133995Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718626510665888518,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9c4c63aa-6c49-46ad-a446-d7900d11bb6c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:15:10 embed-certs-136195 crio[729]: time="2024-06-17 12:15:10.666547028Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5f48ed86-e2ea-4fab-8518-c5023df92981 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:15:10 embed-certs-136195 crio[729]: time="2024-06-17 12:15:10.666593548Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5f48ed86-e2ea-4fab-8518-c5023df92981 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:15:10 embed-certs-136195 crio[729]: time="2024-06-17 12:15:10.666784766Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:06fc8c454b52ae190c5e04968df2f4b778b273df8fd868edece76e82e1aa618e,PodSandboxId:0a2d4ee66975d8028039fe41452e1f2a3fb6571100f902ae428772608308b49d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1718625712494553608,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05a900e3-7714-4af1-ace9-eb03535da64a,},Annotations:map[string]string{io.kubernetes.container.hash: 95ceef43,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7,PodSandboxId:d8cdc6ff01f9171f3ad315ea48c690b50791c26874d90e0420b89d4f6c80d6d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718625711516797941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9bbjg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ba0eee5-436e-4c83-b5ce-3c907d66b641,},Annotations:map[string]string{io.kubernetes.container.hash: 9e5353ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92,PodSandboxId:da11ecedffb5492af81e1296b913c7844da92a6a33a7d5a0471890adac6ae58f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718625704434508824,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 4b04a38a-5006-4496-a24d-0940029193de,},Annotations:map[string]string{io.kubernetes.container.hash: bbb7a6ad,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36,PodSandboxId:da11ecedffb5492af81e1296b913c7844da92a6a33a7d5a0471890adac6ae58f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718625703747285662,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4b04a38a-5006-4496-a24d-0940029193de,},Annotations:map[string]string{io.kubernetes.container.hash: bbb7a6ad,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d,PodSandboxId:00f5ac611dd3173bd63432f2166f9b1c1515e0164ca44a072d3500c52b9ac720,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718625703706073514,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25d5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52b6d09a-899f-40c4-b1f3-7842ae755
165,},Annotations:map[string]string{io.kubernetes.container.hash: 23086a39,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d,PodSandboxId:2379b3f0e4841a43b541f5c15e5a70b752ffd5c366eed4c8b63518687ad29e5b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718625699292863859,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c01d6f22a5109112fd47d72421c8a716,},Annota
tions:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9,PodSandboxId:6c616f25aff9be709d7133636307a067a952b328aab78ddf130784fdc9d42883,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718625699295147383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6212321f2ec0f29eea9399e7bace28fb,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: b38de5c1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3,PodSandboxId:e946fe67c58448b571b7b99a84f90edf971ba4599fa70e58a8abcdff5d97d4ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718625699299463271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ffc4724b55482bd6618c26321a6ec7a,},Annotations:map[string]string{io.kubernetes.container.hash:
7db5fa0c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079,PodSandboxId:1243fcd3dd29fe226f3b2c1f3b185d07e05e8284a3a283c3adacfbb73c41a86c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718625699286383405,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd5b41313a2a936cb8a7ac0d4d722ccb,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5f48ed86-e2ea-4fab-8518-c5023df92981 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:15:10 embed-certs-136195 crio[729]: time="2024-06-17 12:15:10.703096560Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d57be7cb-e8bf-494c-9571-8e780bee0473 name=/runtime.v1.RuntimeService/Version
	Jun 17 12:15:10 embed-certs-136195 crio[729]: time="2024-06-17 12:15:10.703180420Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d57be7cb-e8bf-494c-9571-8e780bee0473 name=/runtime.v1.RuntimeService/Version
	Jun 17 12:15:10 embed-certs-136195 crio[729]: time="2024-06-17 12:15:10.704265053Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=61ce2435-191a-4aed-bfe1-1533d1958317 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:15:10 embed-certs-136195 crio[729]: time="2024-06-17 12:15:10.704678340Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718626510704657069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=61ce2435-191a-4aed-bfe1-1533d1958317 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:15:10 embed-certs-136195 crio[729]: time="2024-06-17 12:15:10.705323460Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6de71686-6c6b-4a90-964f-720238882168 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:15:10 embed-certs-136195 crio[729]: time="2024-06-17 12:15:10.705375640Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6de71686-6c6b-4a90-964f-720238882168 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:15:10 embed-certs-136195 crio[729]: time="2024-06-17 12:15:10.705549180Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:06fc8c454b52ae190c5e04968df2f4b778b273df8fd868edece76e82e1aa618e,PodSandboxId:0a2d4ee66975d8028039fe41452e1f2a3fb6571100f902ae428772608308b49d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1718625712494553608,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05a900e3-7714-4af1-ace9-eb03535da64a,},Annotations:map[string]string{io.kubernetes.container.hash: 95ceef43,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7,PodSandboxId:d8cdc6ff01f9171f3ad315ea48c690b50791c26874d90e0420b89d4f6c80d6d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718625711516797941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9bbjg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ba0eee5-436e-4c83-b5ce-3c907d66b641,},Annotations:map[string]string{io.kubernetes.container.hash: 9e5353ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92,PodSandboxId:da11ecedffb5492af81e1296b913c7844da92a6a33a7d5a0471890adac6ae58f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718625704434508824,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 4b04a38a-5006-4496-a24d-0940029193de,},Annotations:map[string]string{io.kubernetes.container.hash: bbb7a6ad,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36,PodSandboxId:da11ecedffb5492af81e1296b913c7844da92a6a33a7d5a0471890adac6ae58f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718625703747285662,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4b04a38a-5006-4496-a24d-0940029193de,},Annotations:map[string]string{io.kubernetes.container.hash: bbb7a6ad,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d,PodSandboxId:00f5ac611dd3173bd63432f2166f9b1c1515e0164ca44a072d3500c52b9ac720,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718625703706073514,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25d5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52b6d09a-899f-40c4-b1f3-7842ae755
165,},Annotations:map[string]string{io.kubernetes.container.hash: 23086a39,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d,PodSandboxId:2379b3f0e4841a43b541f5c15e5a70b752ffd5c366eed4c8b63518687ad29e5b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718625699292863859,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c01d6f22a5109112fd47d72421c8a716,},Annota
tions:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9,PodSandboxId:6c616f25aff9be709d7133636307a067a952b328aab78ddf130784fdc9d42883,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718625699295147383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6212321f2ec0f29eea9399e7bace28fb,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: b38de5c1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3,PodSandboxId:e946fe67c58448b571b7b99a84f90edf971ba4599fa70e58a8abcdff5d97d4ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718625699299463271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ffc4724b55482bd6618c26321a6ec7a,},Annotations:map[string]string{io.kubernetes.container.hash:
7db5fa0c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079,PodSandboxId:1243fcd3dd29fe226f3b2c1f3b185d07e05e8284a3a283c3adacfbb73c41a86c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718625699286383405,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd5b41313a2a936cb8a7ac0d4d722ccb,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6de71686-6c6b-4a90-964f-720238882168 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:15:10 embed-certs-136195 crio[729]: time="2024-06-17 12:15:10.736771580Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bf34020f-3185-4ba1-a922-7801219b8f0f name=/runtime.v1.RuntimeService/Version
	Jun 17 12:15:10 embed-certs-136195 crio[729]: time="2024-06-17 12:15:10.736858756Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bf34020f-3185-4ba1-a922-7801219b8f0f name=/runtime.v1.RuntimeService/Version
	Jun 17 12:15:10 embed-certs-136195 crio[729]: time="2024-06-17 12:15:10.737893885Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a85c8417-34b9-4ef4-8c30-db33a06fe76e name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:15:10 embed-certs-136195 crio[729]: time="2024-06-17 12:15:10.738450543Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718626510738371221,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a85c8417-34b9-4ef4-8c30-db33a06fe76e name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:15:10 embed-certs-136195 crio[729]: time="2024-06-17 12:15:10.738895584Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eddd15a8-9542-4de9-aa09-1d8436f60bc0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:15:10 embed-certs-136195 crio[729]: time="2024-06-17 12:15:10.738945894Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eddd15a8-9542-4de9-aa09-1d8436f60bc0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:15:10 embed-certs-136195 crio[729]: time="2024-06-17 12:15:10.739185365Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:06fc8c454b52ae190c5e04968df2f4b778b273df8fd868edece76e82e1aa618e,PodSandboxId:0a2d4ee66975d8028039fe41452e1f2a3fb6571100f902ae428772608308b49d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1718625712494553608,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05a900e3-7714-4af1-ace9-eb03535da64a,},Annotations:map[string]string{io.kubernetes.container.hash: 95ceef43,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7,PodSandboxId:d8cdc6ff01f9171f3ad315ea48c690b50791c26874d90e0420b89d4f6c80d6d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718625711516797941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9bbjg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ba0eee5-436e-4c83-b5ce-3c907d66b641,},Annotations:map[string]string{io.kubernetes.container.hash: 9e5353ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92,PodSandboxId:da11ecedffb5492af81e1296b913c7844da92a6a33a7d5a0471890adac6ae58f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718625704434508824,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 4b04a38a-5006-4496-a24d-0940029193de,},Annotations:map[string]string{io.kubernetes.container.hash: bbb7a6ad,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36,PodSandboxId:da11ecedffb5492af81e1296b913c7844da92a6a33a7d5a0471890adac6ae58f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718625703747285662,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4b04a38a-5006-4496-a24d-0940029193de,},Annotations:map[string]string{io.kubernetes.container.hash: bbb7a6ad,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d,PodSandboxId:00f5ac611dd3173bd63432f2166f9b1c1515e0164ca44a072d3500c52b9ac720,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718625703706073514,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25d5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52b6d09a-899f-40c4-b1f3-7842ae755
165,},Annotations:map[string]string{io.kubernetes.container.hash: 23086a39,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d,PodSandboxId:2379b3f0e4841a43b541f5c15e5a70b752ffd5c366eed4c8b63518687ad29e5b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718625699292863859,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c01d6f22a5109112fd47d72421c8a716,},Annota
tions:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9,PodSandboxId:6c616f25aff9be709d7133636307a067a952b328aab78ddf130784fdc9d42883,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718625699295147383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6212321f2ec0f29eea9399e7bace28fb,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: b38de5c1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3,PodSandboxId:e946fe67c58448b571b7b99a84f90edf971ba4599fa70e58a8abcdff5d97d4ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718625699299463271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ffc4724b55482bd6618c26321a6ec7a,},Annotations:map[string]string{io.kubernetes.container.hash:
7db5fa0c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079,PodSandboxId:1243fcd3dd29fe226f3b2c1f3b185d07e05e8284a3a283c3adacfbb73c41a86c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718625699286383405,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd5b41313a2a936cb8a7ac0d4d722ccb,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eddd15a8-9542-4de9-aa09-1d8436f60bc0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	06fc8c454b52a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   0a2d4ee66975d       busybox
	c610c7cafac56       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   d8cdc6ff01f91       coredns-7db6d8ff4d-9bbjg
	02e13a25f376f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       2                   da11ecedffb54       storage-provisioner
	7a03f8aca2ce9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   da11ecedffb54       storage-provisioner
	c2c534f434b08       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      13 minutes ago      Running             kube-proxy                1                   00f5ac611dd31       kube-proxy-25d5n
	5e7549e074802       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      13 minutes ago      Running             kube-apiserver            1                   e946fe67c5844       kube-apiserver-embed-certs-136195
	fb99e2cd3471d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   6c616f25aff9b       etcd-embed-certs-136195
	157a0a3401555       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      13 minutes ago      Running             kube-scheduler            1                   2379b3f0e4841       kube-scheduler-embed-certs-136195
	2436d81981855       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      13 minutes ago      Running             kube-controller-manager   1                   1243fcd3dd29f       kube-controller-manager-embed-certs-136195
	
	
	==> coredns [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:55932 - 50197 "HINFO IN 4346171118022230615.3943262594765871989. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028664505s
	
	
	==> describe nodes <==
	Name:               embed-certs-136195
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-136195
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6
	                    minikube.k8s.io/name=embed-certs-136195
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_17T11_53_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jun 2024 11:53:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-136195
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jun 2024 12:15:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jun 2024 12:12:26 +0000   Mon, 17 Jun 2024 11:53:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jun 2024 12:12:26 +0000   Mon, 17 Jun 2024 11:53:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jun 2024 12:12:26 +0000   Mon, 17 Jun 2024 11:53:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jun 2024 12:12:26 +0000   Mon, 17 Jun 2024 12:01:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.199
	  Hostname:    embed-certs-136195
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f1899a7a26ff4dfea374ed2fa1ef0511
	  System UUID:                f1899a7a-26ff-4dfe-a374-ed2fa1ef0511
	  Boot ID:                    6cf9c77d-8415-4e84-a4b7-6d0c2ee58ca7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-7db6d8ff4d-9bbjg                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-embed-certs-136195                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-embed-certs-136195             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-embed-certs-136195    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-25d5n                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-136195             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-569cc877fc-dmhfs               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node embed-certs-136195 status is now: NodeHasSufficientMemory
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node embed-certs-136195 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node embed-certs-136195 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node embed-certs-136195 status is now: NodeHasSufficientPID
	  Normal  NodeReady                21m                kubelet          Node embed-certs-136195 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-136195 event: Registered Node embed-certs-136195 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-136195 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-136195 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-136195 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-136195 event: Registered Node embed-certs-136195 in Controller
	
	
	==> dmesg <==
	[Jun17 12:01] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051624] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040263] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.519435] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.417283] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.586844] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.346490] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.060823] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058122] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.162800] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +0.140484] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[  +0.293567] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[  +4.430464] systemd-fstab-generator[809]: Ignoring "noauto" option for root device
	[  +0.055705] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.666552] systemd-fstab-generator[933]: Ignoring "noauto" option for root device
	[  +5.648221] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.337366] systemd-fstab-generator[1609]: Ignoring "noauto" option for root device
	[  +3.377392] kauditd_printk_skb: 67 callbacks suppressed
	[  +6.800863] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9] <==
	{"level":"info","ts":"2024-06-17T12:01:39.817883Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-17T12:01:39.818886Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"3eb84dda48bf146c","initial-advertise-peer-urls":["https://192.168.72.199:2380"],"listen-peer-urls":["https://192.168.72.199:2380"],"advertise-client-urls":["https://192.168.72.199:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.199:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-17T12:01:39.818932Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-17T12:01:39.821101Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.199:2380"}
	{"level":"info","ts":"2024-06-17T12:01:39.821134Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.199:2380"}
	{"level":"info","ts":"2024-06-17T12:01:41.432939Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3eb84dda48bf146c is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-17T12:01:41.433056Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3eb84dda48bf146c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-17T12:01:41.433083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3eb84dda48bf146c received MsgPreVoteResp from 3eb84dda48bf146c at term 2"}
	{"level":"info","ts":"2024-06-17T12:01:41.433095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3eb84dda48bf146c became candidate at term 3"}
	{"level":"info","ts":"2024-06-17T12:01:41.433101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3eb84dda48bf146c received MsgVoteResp from 3eb84dda48bf146c at term 3"}
	{"level":"info","ts":"2024-06-17T12:01:41.433109Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3eb84dda48bf146c became leader at term 3"}
	{"level":"info","ts":"2024-06-17T12:01:41.433116Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3eb84dda48bf146c elected leader 3eb84dda48bf146c at term 3"}
	{"level":"info","ts":"2024-06-17T12:01:41.43563Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"3eb84dda48bf146c","local-member-attributes":"{Name:embed-certs-136195 ClientURLs:[https://192.168.72.199:2379]}","request-path":"/0/members/3eb84dda48bf146c/attributes","cluster-id":"6cdfe813ec7866a5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-17T12:01:41.435647Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-17T12:01:41.435785Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-17T12:01:41.435922Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-17T12:01:41.435931Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-17T12:01:41.437666Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.199:2379"}
	{"level":"info","ts":"2024-06-17T12:01:41.437677Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-06-17T12:01:57.982739Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.141929ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1471709671445730966 > lease_revoke:<id:146c90260bb053df>","response":"size:27"}
	{"level":"warn","ts":"2024-06-17T12:02:18.430382Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"219.010997ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-dmhfs\" ","response":"range_response_count:1 size:4283"}
	{"level":"info","ts":"2024-06-17T12:02:18.43045Z","caller":"traceutil/trace.go:171","msg":"trace[1824285193] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-dmhfs; range_end:; response_count:1; response_revision:588; }","duration":"219.126134ms","start":"2024-06-17T12:02:18.2113Z","end":"2024-06-17T12:02:18.430426Z","steps":["trace[1824285193] 'range keys from in-memory index tree'  (duration: 218.865891ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T12:11:41.464956Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":817}
	{"level":"info","ts":"2024-06-17T12:11:41.474746Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":817,"took":"9.367947ms","hash":1255471678,"current-db-size-bytes":2711552,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2711552,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-06-17T12:11:41.474815Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1255471678,"revision":817,"compact-revision":-1}
	
	
	==> kernel <==
	 12:15:11 up 13 min,  0 users,  load average: 0.14, 0.21, 0.16
	Linux embed-certs-136195 5.10.207 #1 SMP Tue Jun 11 00:16:05 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3] <==
	I0617 12:09:43.805566       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0617 12:11:42.806312       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:11:42.806437       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0617 12:11:43.806691       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:11:43.806839       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0617 12:11:43.806867       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0617 12:11:43.806968       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:11:43.807061       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0617 12:11:43.808222       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0617 12:12:43.807104       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:12:43.807237       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0617 12:12:43.807264       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0617 12:12:43.808455       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:12:43.808507       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0617 12:12:43.808533       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0617 12:14:43.808363       1 handler_proxy.go:93] no RequestInfo found in the context
	W0617 12:14:43.808713       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:14:43.808769       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0617 12:14:43.808795       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0617 12:14:43.808943       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0617 12:14:43.810641       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079] <==
	I0617 12:09:26.812516       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:09:56.341452       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:09:56.821786       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:10:26.346872       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:10:26.829511       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:10:56.351342       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:10:56.840118       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:11:26.357358       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:11:26.847956       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:11:56.362662       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:11:56.858084       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:12:26.367549       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:12:26.866625       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0617 12:12:47.331275       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="248.478µs"
	E0617 12:12:56.372686       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:12:56.874687       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0617 12:13:01.328765       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="362.251µs"
	E0617 12:13:26.378612       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:13:26.884448       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:13:56.384660       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:13:56.893307       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:14:26.390884       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:14:26.901382       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:14:56.396145       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:14:56.908649       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d] <==
	I0617 12:01:43.995657       1 server_linux.go:69] "Using iptables proxy"
	I0617 12:01:44.005891       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.199"]
	I0617 12:01:44.060503       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0617 12:01:44.062664       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0617 12:01:44.062800       1 server_linux.go:165] "Using iptables Proxier"
	I0617 12:01:44.066148       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0617 12:01:44.066391       1 server.go:872] "Version info" version="v1.30.1"
	I0617 12:01:44.066423       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0617 12:01:44.067727       1 config.go:192] "Starting service config controller"
	I0617 12:01:44.067817       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0617 12:01:44.067855       1 config.go:101] "Starting endpoint slice config controller"
	I0617 12:01:44.067860       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0617 12:01:44.068581       1 config.go:319] "Starting node config controller"
	I0617 12:01:44.068609       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0617 12:01:44.168913       1 shared_informer.go:320] Caches are synced for node config
	I0617 12:01:44.168955       1 shared_informer.go:320] Caches are synced for service config
	I0617 12:01:44.169026       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d] <==
	I0617 12:01:40.220617       1 serving.go:380] Generated self-signed cert in-memory
	W0617 12:01:42.752225       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0617 12:01:42.752321       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0617 12:01:42.752333       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0617 12:01:42.752339       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0617 12:01:42.798687       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0617 12:01:42.798727       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0617 12:01:42.800748       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0617 12:01:42.800862       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0617 12:01:42.800891       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0617 12:01:42.802067       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0617 12:01:42.901441       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 17 12:12:38 embed-certs-136195 kubelet[940]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 17 12:12:38 embed-certs-136195 kubelet[940]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 12:12:38 embed-certs-136195 kubelet[940]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 17 12:12:47 embed-certs-136195 kubelet[940]: E0617 12:12:47.316872     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dmhfs" podUID="31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0"
	Jun 17 12:13:01 embed-certs-136195 kubelet[940]: E0617 12:13:01.315652     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dmhfs" podUID="31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0"
	Jun 17 12:13:12 embed-certs-136195 kubelet[940]: E0617 12:13:12.317393     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dmhfs" podUID="31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0"
	Jun 17 12:13:23 embed-certs-136195 kubelet[940]: E0617 12:13:23.317329     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dmhfs" podUID="31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0"
	Jun 17 12:13:34 embed-certs-136195 kubelet[940]: E0617 12:13:34.317643     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dmhfs" podUID="31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0"
	Jun 17 12:13:38 embed-certs-136195 kubelet[940]: E0617 12:13:38.338821     940 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 17 12:13:38 embed-certs-136195 kubelet[940]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 17 12:13:38 embed-certs-136195 kubelet[940]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 17 12:13:38 embed-certs-136195 kubelet[940]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 12:13:38 embed-certs-136195 kubelet[940]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 17 12:13:49 embed-certs-136195 kubelet[940]: E0617 12:13:49.316693     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dmhfs" podUID="31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0"
	Jun 17 12:14:03 embed-certs-136195 kubelet[940]: E0617 12:14:03.315812     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dmhfs" podUID="31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0"
	Jun 17 12:14:18 embed-certs-136195 kubelet[940]: E0617 12:14:18.316047     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dmhfs" podUID="31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0"
	Jun 17 12:14:30 embed-certs-136195 kubelet[940]: E0617 12:14:30.316201     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dmhfs" podUID="31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0"
	Jun 17 12:14:38 embed-certs-136195 kubelet[940]: E0617 12:14:38.341642     940 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 17 12:14:38 embed-certs-136195 kubelet[940]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 17 12:14:38 embed-certs-136195 kubelet[940]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 17 12:14:38 embed-certs-136195 kubelet[940]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 12:14:38 embed-certs-136195 kubelet[940]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 17 12:14:45 embed-certs-136195 kubelet[940]: E0617 12:14:45.315638     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dmhfs" podUID="31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0"
	Jun 17 12:14:56 embed-certs-136195 kubelet[940]: E0617 12:14:56.316374     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dmhfs" podUID="31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0"
	Jun 17 12:15:07 embed-certs-136195 kubelet[940]: E0617 12:15:07.315644     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dmhfs" podUID="31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0"
	
	
	==> storage-provisioner [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92] <==
	I0617 12:01:44.533035       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0617 12:01:44.544756       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0617 12:01:44.544965       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0617 12:02:01.950535       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0617 12:02:01.950777       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-136195_206e0fc6-44a6-4e2b-90d8-19619e77516b!
	I0617 12:02:01.952621       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"eaa2d4c6-0454-437c-9a6d-480f4e6de3d9", APIVersion:"v1", ResourceVersion:"565", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-136195_206e0fc6-44a6-4e2b-90d8-19619e77516b became leader
	I0617 12:02:02.051343       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-136195_206e0fc6-44a6-4e2b-90d8-19619e77516b!
	
	
	==> storage-provisioner [7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36] <==
	I0617 12:01:43.917807       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0617 12:01:43.921755       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-136195 -n embed-certs-136195
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-136195 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-dmhfs
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-136195 describe pod metrics-server-569cc877fc-dmhfs
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-136195 describe pod metrics-server-569cc877fc-dmhfs: exit status 1 (62.222293ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-dmhfs" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-136195 describe pod metrics-server-569cc877fc-dmhfs: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.10s)
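The post-mortem above reduces to three commands that can be rerun by hand against the same profile when the state needs to be re-inspected after the harness has finished. This is a sketch only; the profile, context, and pod name are copied from the log above, and note that the pod was present when listed but already gone by the time it was described, so a manual rerun may hit the same race:

	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p embed-certs-136195 -n embed-certs-136195
	kubectl --context embed-certs-136195 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'
	kubectl --context embed-certs-136195 describe pod metrics-server-569cc877fc-dmhfs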

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-991309 -n default-k8s-diff-port-991309
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-06-17 12:15:52.857055905 +0000 UTC m=+5498.700614050
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
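Before the post-mortem below, the readiness check the harness timed out on can be reproduced directly with kubectl. This is a sketch under the same assumptions as the failure message above (context, namespace, and label taken from it; the 540s timeout mirrors the test's 9m0s budget):

	kubectl --context default-k8s-diff-port-991309 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=540s

An immediate error about no matching resources would suggest the dashboard deployment was never created, while a timeout would indicate pods exist but never became ready.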
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-991309 -n default-k8s-diff-port-991309
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-991309 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-991309 logs -n 25: (2.029915012s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-514753                              | cert-expiration-514753       | jenkins | v1.33.1 | 17 Jun 24 11:52 UTC | 17 Jun 24 11:52 UTC |
	| start   | -p embed-certs-136195                                  | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:52 UTC | 17 Jun 24 11:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-152830             | no-preload-152830            | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC | 17 Jun 24 11:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-152830                                   | no-preload-152830            | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-136195            | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC | 17 Jun 24 11:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-136195                                  | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC | 17 Jun 24 11:55 UTC |
	| start   | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:55 UTC |
	| delete  | -p                                                     | disable-driver-mounts-960277 | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:55 UTC |
	|         | disable-driver-mounts-960277                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:56 UTC |
	|         | default-k8s-diff-port-991309                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-152830                  | no-preload-152830            | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-152830                                   | no-preload-152830            | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC | 17 Jun 24 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-136195                 | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-003661        | old-k8s-version-003661       | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-136195                                  | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC | 17 Jun 24 12:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-991309  | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:57 UTC | 17 Jun 24 11:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:57 UTC |                     |
	|         | default-k8s-diff-port-991309                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-003661                              | old-k8s-version-003661       | jenkins | v1.33.1 | 17 Jun 24 11:58 UTC | 17 Jun 24 11:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-003661             | old-k8s-version-003661       | jenkins | v1.33.1 | 17 Jun 24 11:58 UTC | 17 Jun 24 11:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-003661                              | old-k8s-version-003661       | jenkins | v1.33.1 | 17 Jun 24 11:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-991309       | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:59 UTC | 17 Jun 24 12:06 UTC |
	|         | default-k8s-diff-port-991309                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/17 11:59:37
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0617 11:59:37.428028  166103 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:59:37.428266  166103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:59:37.428274  166103 out.go:304] Setting ErrFile to fd 2...
	I0617 11:59:37.428279  166103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:59:37.428472  166103 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:59:37.429026  166103 out.go:298] Setting JSON to false
	I0617 11:59:37.429968  166103 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6124,"bootTime":1718619453,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0617 11:59:37.430026  166103 start.go:139] virtualization: kvm guest
	I0617 11:59:37.432171  166103 out.go:177] * [default-k8s-diff-port-991309] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0617 11:59:37.433521  166103 out.go:177]   - MINIKUBE_LOCATION=19084
	I0617 11:59:37.433548  166103 notify.go:220] Checking for updates...
	I0617 11:59:37.434850  166103 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 11:59:37.436099  166103 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 11:59:37.437362  166103 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 11:59:37.438535  166103 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0617 11:59:37.439644  166103 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 11:59:37.441113  166103 config.go:182] Loaded profile config "default-k8s-diff-port-991309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:59:37.441563  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:59:37.441645  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:59:37.456875  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45565
	I0617 11:59:37.457306  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:59:37.457839  166103 main.go:141] libmachine: Using API Version  1
	I0617 11:59:37.457861  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:59:37.458188  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:59:37.458381  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 11:59:37.458626  166103 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 11:59:37.458927  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:59:37.458971  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:59:37.474024  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45165
	I0617 11:59:37.474411  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:59:37.474873  166103 main.go:141] libmachine: Using API Version  1
	I0617 11:59:37.474899  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:59:37.475199  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:59:37.475383  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 11:59:37.507955  166103 out.go:177] * Using the kvm2 driver based on existing profile
	I0617 11:59:37.509134  166103 start.go:297] selected driver: kvm2
	I0617 11:59:37.509148  166103 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-991309 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-991309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:59:37.509249  166103 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 11:59:37.509927  166103 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:59:37.510004  166103 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19084-112967/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0617 11:59:37.525340  166103 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0617 11:59:37.525701  166103 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 11:59:37.525761  166103 cni.go:84] Creating CNI manager for ""
	I0617 11:59:37.525779  166103 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 11:59:37.525812  166103 start.go:340] cluster config:
	{Name:default-k8s-diff-port-991309 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-991309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:59:37.525910  166103 iso.go:125] acquiring lock: {Name:mk4a199ad46ed9ee04de7b54caf7cc64218fe80c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:59:37.527756  166103 out.go:177] * Starting "default-k8s-diff-port-991309" primary control-plane node in "default-k8s-diff-port-991309" cluster
	I0617 11:59:36.391800  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 11:59:37.529104  166103 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 11:59:37.529159  166103 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0617 11:59:37.529171  166103 cache.go:56] Caching tarball of preloaded images
	I0617 11:59:37.529246  166103 preload.go:173] Found /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0617 11:59:37.529256  166103 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0617 11:59:37.529368  166103 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/config.json ...
	I0617 11:59:37.529565  166103 start.go:360] acquireMachinesLock for default-k8s-diff-port-991309: {Name:mk519b8956d160a9d2b042f25b899a5ee0efa72e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 11:59:42.471684  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 11:59:45.543735  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 11:59:51.623725  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 11:59:54.695811  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:00.775775  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:03.847736  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:09.927768  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:12.999728  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:19.079809  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:22.151737  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:28.231763  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:31.303775  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:37.383783  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:40.455809  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:46.535757  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:49.607769  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:55.687772  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:58.759722  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:01:04.839736  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:01:07.911780  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:01:10.916735  165060 start.go:364] duration metric: took 4m27.471308215s to acquireMachinesLock for "embed-certs-136195"
	I0617 12:01:10.916814  165060 start.go:96] Skipping create...Using existing machine configuration
	I0617 12:01:10.916827  165060 fix.go:54] fixHost starting: 
	I0617 12:01:10.917166  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:10.917203  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:10.932217  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43235
	I0617 12:01:10.932742  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:10.933241  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:10.933261  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:10.933561  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:10.933766  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:10.933939  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetState
	I0617 12:01:10.935452  165060 fix.go:112] recreateIfNeeded on embed-certs-136195: state=Stopped err=<nil>
	I0617 12:01:10.935660  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	W0617 12:01:10.935831  165060 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 12:01:10.937510  165060 out.go:177] * Restarting existing kvm2 VM for "embed-certs-136195" ...
	I0617 12:01:10.938708  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Start
	I0617 12:01:10.938873  165060 main.go:141] libmachine: (embed-certs-136195) Ensuring networks are active...
	I0617 12:01:10.939602  165060 main.go:141] libmachine: (embed-certs-136195) Ensuring network default is active
	I0617 12:01:10.939896  165060 main.go:141] libmachine: (embed-certs-136195) Ensuring network mk-embed-certs-136195 is active
	I0617 12:01:10.940260  165060 main.go:141] libmachine: (embed-certs-136195) Getting domain xml...
	I0617 12:01:10.940881  165060 main.go:141] libmachine: (embed-certs-136195) Creating domain...
	I0617 12:01:12.136267  165060 main.go:141] libmachine: (embed-certs-136195) Waiting to get IP...
	I0617 12:01:12.137303  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:12.137692  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:12.137777  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:12.137684  166451 retry.go:31] will retry after 261.567272ms: waiting for machine to come up
	I0617 12:01:12.401390  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:12.401845  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:12.401873  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:12.401816  166451 retry.go:31] will retry after 332.256849ms: waiting for machine to come up
	I0617 12:01:12.735421  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:12.735842  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:12.735872  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:12.735783  166451 retry.go:31] will retry after 457.313241ms: waiting for machine to come up
	I0617 12:01:13.194621  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:13.195073  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:13.195091  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:13.195036  166451 retry.go:31] will retry after 539.191177ms: waiting for machine to come up
	I0617 12:01:10.914315  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 12:01:10.914353  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetMachineName
	I0617 12:01:10.914690  164809 buildroot.go:166] provisioning hostname "no-preload-152830"
	I0617 12:01:10.914716  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetMachineName
	I0617 12:01:10.914905  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:01:10.916557  164809 machine.go:97] duration metric: took 4m37.418351206s to provisionDockerMachine
	I0617 12:01:10.916625  164809 fix.go:56] duration metric: took 4m37.438694299s for fixHost
	I0617 12:01:10.916634  164809 start.go:83] releasing machines lock for "no-preload-152830", held for 4m37.438726092s
	W0617 12:01:10.916653  164809 start.go:713] error starting host: provision: host is not running
	W0617 12:01:10.916750  164809 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0617 12:01:10.916763  164809 start.go:728] Will try again in 5 seconds ...
	I0617 12:01:13.735708  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:13.736155  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:13.736184  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:13.736096  166451 retry.go:31] will retry after 754.965394ms: waiting for machine to come up
	I0617 12:01:14.493211  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:14.493598  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:14.493628  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:14.493544  166451 retry.go:31] will retry after 786.125188ms: waiting for machine to come up
	I0617 12:01:15.281505  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:15.281975  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:15.282008  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:15.281939  166451 retry.go:31] will retry after 1.091514617s: waiting for machine to come up
	I0617 12:01:16.375391  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:16.375904  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:16.375935  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:16.375820  166451 retry.go:31] will retry after 1.34601641s: waiting for machine to come up
	I0617 12:01:17.724108  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:17.724453  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:17.724477  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:17.724418  166451 retry.go:31] will retry after 1.337616605s: waiting for machine to come up
	I0617 12:01:15.918256  164809 start.go:360] acquireMachinesLock for no-preload-152830: {Name:mk519b8956d160a9d2b042f25b899a5ee0efa72e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 12:01:19.063677  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:19.064210  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:19.064243  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:19.064144  166451 retry.go:31] will retry after 1.914267639s: waiting for machine to come up
	I0617 12:01:20.979644  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:20.980124  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:20.980150  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:20.980072  166451 retry.go:31] will retry after 2.343856865s: waiting for machine to come up
	I0617 12:01:23.326506  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:23.326878  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:23.326922  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:23.326861  166451 retry.go:31] will retry after 2.450231017s: waiting for machine to come up
	I0617 12:01:25.780501  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:25.780886  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:25.780913  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:25.780825  166451 retry.go:31] will retry after 3.591107926s: waiting for machine to come up
	I0617 12:01:30.728529  165698 start.go:364] duration metric: took 3m12.647041864s to acquireMachinesLock for "old-k8s-version-003661"
	I0617 12:01:30.728602  165698 start.go:96] Skipping create...Using existing machine configuration
	I0617 12:01:30.728613  165698 fix.go:54] fixHost starting: 
	I0617 12:01:30.729036  165698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:30.729090  165698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:30.746528  165698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35355
	I0617 12:01:30.746982  165698 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:30.747493  165698 main.go:141] libmachine: Using API Version  1
	I0617 12:01:30.747516  165698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:30.747847  165698 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:30.748060  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:30.748186  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetState
	I0617 12:01:30.750035  165698 fix.go:112] recreateIfNeeded on old-k8s-version-003661: state=Stopped err=<nil>
	I0617 12:01:30.750072  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	W0617 12:01:30.750206  165698 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 12:01:30.752196  165698 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-003661" ...
	I0617 12:01:29.375875  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.376372  165060 main.go:141] libmachine: (embed-certs-136195) Found IP for machine: 192.168.72.199
	I0617 12:01:29.376407  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has current primary IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.376430  165060 main.go:141] libmachine: (embed-certs-136195) Reserving static IP address...
	I0617 12:01:29.376754  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "embed-certs-136195", mac: "52:54:00:f2:27:84", ip: "192.168.72.199"} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.376788  165060 main.go:141] libmachine: (embed-certs-136195) Reserved static IP address: 192.168.72.199
	I0617 12:01:29.376800  165060 main.go:141] libmachine: (embed-certs-136195) DBG | skip adding static IP to network mk-embed-certs-136195 - found existing host DHCP lease matching {name: "embed-certs-136195", mac: "52:54:00:f2:27:84", ip: "192.168.72.199"}
	I0617 12:01:29.376811  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Getting to WaitForSSH function...
	I0617 12:01:29.376820  165060 main.go:141] libmachine: (embed-certs-136195) Waiting for SSH to be available...
	I0617 12:01:29.378811  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.379121  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.379151  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.379289  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Using SSH client type: external
	I0617 12:01:29.379321  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa (-rw-------)
	I0617 12:01:29.379354  165060 main.go:141] libmachine: (embed-certs-136195) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.199 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 12:01:29.379368  165060 main.go:141] libmachine: (embed-certs-136195) DBG | About to run SSH command:
	I0617 12:01:29.379381  165060 main.go:141] libmachine: (embed-certs-136195) DBG | exit 0
	I0617 12:01:29.503819  165060 main.go:141] libmachine: (embed-certs-136195) DBG | SSH cmd err, output: <nil>: 
	I0617 12:01:29.504207  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetConfigRaw
	I0617 12:01:29.504827  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetIP
	I0617 12:01:29.507277  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.507601  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.507635  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.507878  165060 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/config.json ...
	I0617 12:01:29.508102  165060 machine.go:94] provisionDockerMachine start ...
	I0617 12:01:29.508125  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:29.508333  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:29.510390  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.510636  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.510656  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.510761  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:29.510924  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.511082  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.511242  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:29.511404  165060 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:29.511665  165060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0617 12:01:29.511680  165060 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 12:01:29.611728  165060 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0617 12:01:29.611759  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetMachineName
	I0617 12:01:29.611996  165060 buildroot.go:166] provisioning hostname "embed-certs-136195"
	I0617 12:01:29.612025  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetMachineName
	I0617 12:01:29.612194  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:29.614719  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.615085  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.615110  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.615251  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:29.615425  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.615565  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.615685  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:29.615881  165060 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:29.616066  165060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0617 12:01:29.616084  165060 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-136195 && echo "embed-certs-136195" | sudo tee /etc/hostname
	I0617 12:01:29.729321  165060 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-136195
	
	I0617 12:01:29.729347  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:29.731968  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.732314  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.732352  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.732582  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:29.732820  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.733001  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.733157  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:29.733312  165060 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:29.733471  165060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0617 12:01:29.733487  165060 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-136195' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-136195/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-136195' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 12:01:29.840083  165060 main.go:141] libmachine: SSH cmd err, output: <nil>: 
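
Hostname provisioning happens in the two SSH commands visible above: one sets the live hostname and writes /etc/hostname, the other patches /etc/hosts so 127.0.1.1 maps to the new name. The sketch below reproduces that shell sequence from Go for an arbitrary name; the function name is mine, the commands mirror the log.

    package main

    import "fmt"

    // hostnameScript reproduces the shell sequence logged above: set the live
    // hostname, persist it to /etc/hostname, and make sure /etc/hosts maps
    // 127.0.1.1 to the new name.
    func hostnameScript(name string) string {
        return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname

    if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, name)
    }

    func main() {
        fmt.Println(hostnameScript("embed-certs-136195"))
    }
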
	I0617 12:01:29.840110  165060 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 12:01:29.840145  165060 buildroot.go:174] setting up certificates
	I0617 12:01:29.840180  165060 provision.go:84] configureAuth start
	I0617 12:01:29.840199  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetMachineName
	I0617 12:01:29.840488  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetIP
	I0617 12:01:29.843096  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.843446  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.843487  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.843687  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:29.845627  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.845914  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.845940  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.846021  165060 provision.go:143] copyHostCerts
	I0617 12:01:29.846096  165060 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 12:01:29.846106  165060 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 12:01:29.846171  165060 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 12:01:29.846267  165060 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 12:01:29.846275  165060 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 12:01:29.846298  165060 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 12:01:29.846359  165060 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 12:01:29.846366  165060 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 12:01:29.846387  165060 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 12:01:29.846456  165060 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.embed-certs-136195 san=[127.0.0.1 192.168.72.199 embed-certs-136195 localhost minikube]
	I0617 12:01:30.076596  165060 provision.go:177] copyRemoteCerts
	I0617 12:01:30.076657  165060 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 12:01:30.076686  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.079269  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.079565  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.079588  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.079785  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.080016  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.080189  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.080316  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:30.161615  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 12:01:30.188790  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0617 12:01:30.215171  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0617 12:01:30.241310  165060 provision.go:87] duration metric: took 401.115469ms to configureAuth
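
configureAuth regenerates the machine's server certificate, signed by the minikube CA, with the SANs listed above (127.0.0.1, 192.168.72.199, embed-certs-136195, localhost, minikube) and then copies ca.pem, server.pem and server-key.pem to /etc/docker on the guest. Below is a hedged stand-in using crypto/x509 that produces a comparable CA plus server certificate; it is not minikube's cert helper, just an illustration of the fields involved.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Stand-in CA key pair (the role played by .minikube/certs/ca.pem and ca-key.pem).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the org and SANs seen in the log.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject: pkix.Name{
                Organization: []string{"jenkins.embed-certs-136195"},
                CommonName:   "embed-certs-136195",
            },
            NotBefore:   time.Now(),
            NotAfter:    time.Now().AddDate(3, 0, 0),
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:    []string{"embed-certs-136195", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.199")},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
        fmt.Fprintln(os.Stderr, "server cert generated")
    }
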
	I0617 12:01:30.241332  165060 buildroot.go:189] setting minikube options for container-runtime
	I0617 12:01:30.241529  165060 config.go:182] Loaded profile config "embed-certs-136195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:01:30.241602  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.244123  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.244427  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.244459  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.244584  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.244793  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.244999  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.245174  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.245340  165060 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:30.245497  165060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0617 12:01:30.245512  165060 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 12:01:30.498156  165060 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 12:01:30.498189  165060 machine.go:97] duration metric: took 990.071076ms to provisionDockerMachine
	I0617 12:01:30.498201  165060 start.go:293] postStartSetup for "embed-certs-136195" (driver="kvm2")
	I0617 12:01:30.498214  165060 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 12:01:30.498238  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:30.498580  165060 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 12:01:30.498605  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.501527  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.501912  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.501941  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.502054  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.502257  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.502423  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.502578  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:30.583151  165060 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 12:01:30.587698  165060 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 12:01:30.587722  165060 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 12:01:30.587819  165060 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 12:01:30.587940  165060 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 12:01:30.588078  165060 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 12:01:30.598234  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:01:30.622580  165060 start.go:296] duration metric: took 124.363651ms for postStartSetup
	I0617 12:01:30.622621  165060 fix.go:56] duration metric: took 19.705796191s for fixHost
	I0617 12:01:30.622645  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.625226  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.625637  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.625684  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.625821  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.626040  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.626229  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.626418  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.626613  165060 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:30.626839  165060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0617 12:01:30.626862  165060 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0617 12:01:30.728365  165060 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718625690.704643527
	
	I0617 12:01:30.728389  165060 fix.go:216] guest clock: 1718625690.704643527
	I0617 12:01:30.728396  165060 fix.go:229] Guest: 2024-06-17 12:01:30.704643527 +0000 UTC Remote: 2024-06-17 12:01:30.622625631 +0000 UTC m=+287.310804086 (delta=82.017896ms)
	I0617 12:01:30.728416  165060 fix.go:200] guest clock delta is within tolerance: 82.017896ms
	I0617 12:01:30.728421  165060 start.go:83] releasing machines lock for "embed-certs-136195", held for 19.811634749s
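
fixHost finishes by comparing the guest clock (date +%s.%N run over SSH) with the host clock; here the delta is about 82ms and is accepted. A small Go sketch of that comparison using the timestamps from the log; the one-second tolerance below is an assumption for illustration, not the value minikube uses.

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses the guest's "date +%s.%N" output and returns how far
    // the guest clock is ahead of (positive) or behind (negative) the host.
    func clockDelta(guest string, host time.Time) (time.Duration, error) {
        parts := strings.SplitN(strings.TrimSpace(guest), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, err
        }
        var nsec int64
        if len(parts) == 2 && parts[1] != "" {
            frac := parts[1]
            if len(frac) > 9 {
                frac = frac[:9]
            } else {
                // Right-pad so "7046435" means 704643500 ns, not 7046435 ns.
                frac += strings.Repeat("0", 9-len(frac))
            }
            if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
                return 0, err
            }
        }
        return time.Unix(sec, nsec).Sub(host), nil
    }

    func main() {
        // Guest and remote timestamps copied from the log lines above.
        d, err := clockDelta("1718625690.704643527", time.Unix(1718625690, 622625631))
        if err != nil {
            panic(err)
        }
        const tolerance = time.Second // assumed threshold for illustration
        if math.Abs(float64(d)) <= float64(tolerance) {
            fmt.Printf("guest clock delta %v is within tolerance\n", d)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance\n", d)
        }
    }
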
	I0617 12:01:30.728445  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:30.728763  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetIP
	I0617 12:01:30.731414  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.731784  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.731816  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.731937  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:30.732504  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:30.732704  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:30.732761  165060 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 12:01:30.732826  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.732964  165060 ssh_runner.go:195] Run: cat /version.json
	I0617 12:01:30.732991  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.735854  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.736049  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.736278  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.736310  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.736334  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.736397  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.736579  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.736653  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.736777  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.736959  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.736972  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.737131  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:30.737188  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.737356  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:30.844295  165060 ssh_runner.go:195] Run: systemctl --version
	I0617 12:01:30.851958  165060 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 12:01:31.000226  165060 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 12:01:31.008322  165060 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 12:01:31.008397  165060 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 12:01:31.029520  165060 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 12:01:31.029547  165060 start.go:494] detecting cgroup driver to use...
	I0617 12:01:31.029617  165060 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 12:01:31.045505  165060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 12:01:31.059851  165060 docker.go:217] disabling cri-docker service (if available) ...
	I0617 12:01:31.059920  165060 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 12:01:31.075011  165060 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 12:01:31.089705  165060 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 12:01:31.204300  165060 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 12:01:31.342204  165060 docker.go:233] disabling docker service ...
	I0617 12:01:31.342290  165060 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 12:01:31.356945  165060 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 12:01:31.369786  165060 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 12:01:31.505817  165060 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 12:01:31.631347  165060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 12:01:31.646048  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 12:01:31.664854  165060 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0617 12:01:31.664923  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.677595  165060 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 12:01:31.677678  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.690164  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.701482  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.712488  165060 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 12:01:31.723994  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.736805  165060 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.755001  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.767226  165060 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 12:01:31.777894  165060 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 12:01:31.777954  165060 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 12:01:31.792644  165060 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 12:01:31.803267  165060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:01:31.920107  165060 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0617 12:01:32.067833  165060 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 12:01:32.067904  165060 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 12:01:32.072818  165060 start.go:562] Will wait 60s for crictl version
	I0617 12:01:32.072881  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:01:32.076782  165060 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 12:01:32.116635  165060 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 12:01:32.116709  165060 ssh_runner.go:195] Run: crio --version
	I0617 12:01:32.148094  165060 ssh_runner.go:195] Run: crio --version
	I0617 12:01:32.176924  165060 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
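
Between disabling cri-dockerd/docker and the crictl version check, the runner rewrites /etc/crio/crio.conf.d/02-crio.conf with a series of sed commands: pin the pause image to registry.k8s.io/pause:3.9, force cgroup_manager to cgroupfs, and re-add conmon_cgroup = "pod", then restart crio. The sketch below only assembles those commands as strings for a given pause image and cgroup driver; it mirrors the logged commands rather than minikube's internal API.

    package main

    import "fmt"

    // crioConfigCommands mirrors the sed edits logged above against
    // /etc/crio/crio.conf.d/02-crio.conf.
    func crioConfigCommands(pauseImage, cgroupManager string) []string {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        return []string{
            fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
            fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
            `sudo sed -i '/conmon_cgroup = .*/d' ` + conf,
            `sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf,
            "sudo systemctl restart crio",
        }
    }

    func main() {
        for _, c := range crioConfigCommands("registry.k8s.io/pause:3.9", "cgroupfs") {
            fmt.Println(c)
        }
    }
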
	I0617 12:01:30.753437  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .Start
	I0617 12:01:30.753608  165698 main.go:141] libmachine: (old-k8s-version-003661) Ensuring networks are active...
	I0617 12:01:30.754272  165698 main.go:141] libmachine: (old-k8s-version-003661) Ensuring network default is active
	I0617 12:01:30.754600  165698 main.go:141] libmachine: (old-k8s-version-003661) Ensuring network mk-old-k8s-version-003661 is active
	I0617 12:01:30.754967  165698 main.go:141] libmachine: (old-k8s-version-003661) Getting domain xml...
	I0617 12:01:30.755739  165698 main.go:141] libmachine: (old-k8s-version-003661) Creating domain...
	I0617 12:01:32.029080  165698 main.go:141] libmachine: (old-k8s-version-003661) Waiting to get IP...
	I0617 12:01:32.029902  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:32.030401  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:32.030477  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:32.030384  166594 retry.go:31] will retry after 191.846663ms: waiting for machine to come up
	I0617 12:01:32.223912  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:32.224300  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:32.224328  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:32.224276  166594 retry.go:31] will retry after 341.806498ms: waiting for machine to come up
	I0617 12:01:32.568066  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:32.568648  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:32.568682  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:32.568575  166594 retry.go:31] will retry after 359.779948ms: waiting for machine to come up
	I0617 12:01:32.930210  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:32.930652  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:32.930675  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:32.930604  166594 retry.go:31] will retry after 548.549499ms: waiting for machine to come up
	I0617 12:01:32.178076  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetIP
	I0617 12:01:32.181127  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:32.181524  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:32.181553  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:32.181778  165060 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0617 12:01:32.186998  165060 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:01:32.203033  165060 kubeadm.go:877] updating cluster {Name:embed-certs-136195 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:embed-certs-136195 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.199 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 12:01:32.203142  165060 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 12:01:32.203183  165060 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:01:32.245712  165060 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0617 12:01:32.245796  165060 ssh_runner.go:195] Run: which lz4
	I0617 12:01:32.250113  165060 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0617 12:01:32.254486  165060 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0617 12:01:32.254511  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0617 12:01:33.480493  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:33.480965  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:33.481004  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:33.480931  166594 retry.go:31] will retry after 636.044066ms: waiting for machine to come up
	I0617 12:01:34.118880  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:34.119361  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:34.119394  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:34.119299  166594 retry.go:31] will retry after 637.085777ms: waiting for machine to come up
	I0617 12:01:34.757614  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:34.758097  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:34.758126  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:34.758051  166594 retry.go:31] will retry after 921.652093ms: waiting for machine to come up
	I0617 12:01:35.681846  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:35.682324  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:35.682351  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:35.682269  166594 retry.go:31] will retry after 1.1106801s: waiting for machine to come up
	I0617 12:01:36.794411  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:36.794845  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:36.794869  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:36.794793  166594 retry.go:31] will retry after 1.323395845s: waiting for machine to come up
	I0617 12:01:33.776867  165060 crio.go:462] duration metric: took 1.526763522s to copy over tarball
	I0617 12:01:33.776955  165060 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0617 12:01:35.994216  165060 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.217222149s)
	I0617 12:01:35.994246  165060 crio.go:469] duration metric: took 2.217348025s to extract the tarball
	I0617 12:01:35.994255  165060 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0617 12:01:36.034978  165060 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:01:36.087255  165060 crio.go:514] all images are preloaded for cri-o runtime.
	I0617 12:01:36.087281  165060 cache_images.go:84] Images are preloaded, skipping loading
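
The preload decision hinges on "sudo crictl images --output json": when the expected kube images (e.g. registry.k8s.io/kube-apiserver:v1.30.1) are missing, the preloaded tarball is copied over and unpacked with tar/lz4; after extraction the same check passes and image loading is skipped, as the lines above show. A small Go sketch that parses output of that shape and looks for one tag; the JSON field names are my reading of crictl's output and should be treated as an assumption.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // imageList matches the part of `crictl images --output json` we care about.
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // hasImage reports whether the given tag shows up in the crictl output.
    func hasImage(raw []byte, tag string) (bool, error) {
        var list imageList
        if err := json.Unmarshal(raw, &list); err != nil {
            return false, err
        }
        for _, img := range list.Images {
            for _, t := range img.RepoTags {
                if t == tag {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        // Trimmed sample in the shape crictl is assumed to emit.
        sample := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.30.1"]}]}`)
        ok, err := hasImage(sample, "registry.k8s.io/kube-apiserver:v1.30.1")
        if err != nil {
            panic(err)
        }
        if ok {
            fmt.Println("images are preloaded, skipping loading")
        } else {
            fmt.Println("assuming images are not preloaded; extract the preload tarball")
        }
    }
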
	I0617 12:01:36.087291  165060 kubeadm.go:928] updating node { 192.168.72.199 8443 v1.30.1 crio true true} ...
	I0617 12:01:36.087447  165060 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-136195 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.199
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:embed-certs-136195 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
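
The kubelet drop-in printed above (written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines later) is rendered from the node's name, IP and Kubernetes version. Below is a simplified text/template rendering of the same unit; the template is a stand-in, the flag values come from the log.

    package main

    import (
        "os"
        "text/template"
    )

    const kubeletDropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        tmpl := template.Must(template.New("kubelet").Parse(kubeletDropIn))
        // Values copied from the log above.
        err := tmpl.Execute(os.Stdout, map[string]string{
            "KubernetesVersion": "v1.30.1",
            "NodeName":          "embed-certs-136195",
            "NodeIP":            "192.168.72.199",
        })
        if err != nil {
            panic(err)
        }
    }
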
	I0617 12:01:36.087551  165060 ssh_runner.go:195] Run: crio config
	I0617 12:01:36.130409  165060 cni.go:84] Creating CNI manager for ""
	I0617 12:01:36.130433  165060 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:01:36.130449  165060 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 12:01:36.130479  165060 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.199 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-136195 NodeName:embed-certs-136195 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.199"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.199 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0617 12:01:36.130633  165060 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.199
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-136195"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.199
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.199"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0617 12:01:36.130724  165060 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
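
The kubeadm config above is rendered from the options struct logged at kubeadm.go:181 and is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. Here is a trimmed-down sketch that renders just the ClusterConfiguration section with text/template; the template is a simplification of what minikube generates, and the values match the log.

    package main

    import (
        "os"
        "text/template"
    )

    const clusterConfig = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    apiServer:
      certSANs: ["127.0.0.1", "localhost", "{{.AdvertiseAddress}}"]
    controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      dnsDomain: {{.DNSDomain}}
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceCIDR}}
    `

    func main() {
        tmpl := template.Must(template.New("kubeadm").Parse(clusterConfig))
        // Values taken from the kubeadm options logged above.
        err := tmpl.Execute(os.Stdout, map[string]string{
            "AdvertiseAddress":    "192.168.72.199",
            "ControlPlaneAddress": "control-plane.minikube.internal",
            "APIServerPort":       "8443",
            "KubernetesVersion":   "v1.30.1",
            "DNSDomain":           "cluster.local",
            "PodSubnet":           "10.244.0.0/16",
            "ServiceCIDR":         "10.96.0.0/12",
        })
        if err != nil {
            panic(err)
        }
    }
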
	I0617 12:01:36.141027  165060 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 12:01:36.141110  165060 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 12:01:36.150748  165060 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0617 12:01:36.167282  165060 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 12:01:36.183594  165060 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0617 12:01:36.202494  165060 ssh_runner.go:195] Run: grep 192.168.72.199	control-plane.minikube.internal$ /etc/hosts
	I0617 12:01:36.206515  165060 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.199	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:01:36.218598  165060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:01:36.344280  165060 ssh_runner.go:195] Run: sudo systemctl start kubelet
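
Before starting the kubelet, the runner pins control-plane.minikube.internal to 192.168.72.199 in /etc/hosts by filtering out any existing entry and appending a fresh one (the grep -v / echo pipeline above). An in-memory equivalent in Go follows; the helper name is illustrative.

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHost removes any existing entry for name and appends "ip\tname",
    // mirroring the grep -v / echo pipeline in the log.
    func upsertHost(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(hosts, "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop the stale mapping
            }
            if line != "" {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        hosts := "127.0.0.1\tlocalhost\n192.168.72.1\thost.minikube.internal\n"
        fmt.Print(upsertHost(hosts, "192.168.72.199", "control-plane.minikube.internal"))
    }
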
	I0617 12:01:36.361127  165060 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195 for IP: 192.168.72.199
	I0617 12:01:36.361152  165060 certs.go:194] generating shared ca certs ...
	I0617 12:01:36.361172  165060 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:01:36.361370  165060 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 12:01:36.361425  165060 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 12:01:36.361438  165060 certs.go:256] generating profile certs ...
	I0617 12:01:36.361557  165060 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/client.key
	I0617 12:01:36.361648  165060 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/apiserver.key.f7068429
	I0617 12:01:36.361696  165060 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/proxy-client.key
	I0617 12:01:36.361863  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 12:01:36.361913  165060 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 12:01:36.361925  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 12:01:36.361951  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 12:01:36.361984  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 12:01:36.362005  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 12:01:36.362041  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:01:36.362770  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 12:01:36.397257  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 12:01:36.422523  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 12:01:36.451342  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 12:01:36.485234  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0617 12:01:36.514351  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 12:01:36.544125  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 12:01:36.567574  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0617 12:01:36.590417  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 12:01:36.613174  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 12:01:36.636187  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 12:01:36.659365  165060 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 12:01:36.675981  165060 ssh_runner.go:195] Run: openssl version
	I0617 12:01:36.681694  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 12:01:36.692324  165060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 12:01:36.696871  165060 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 12:01:36.696938  165060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 12:01:36.702794  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
	I0617 12:01:36.713372  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 12:01:36.724054  165060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 12:01:36.728505  165060 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 12:01:36.728566  165060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 12:01:36.734082  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 12:01:36.744542  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 12:01:36.755445  165060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:36.759880  165060 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:36.759922  165060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:36.765367  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 12:01:36.776234  165060 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 12:01:36.780822  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 12:01:36.786895  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 12:01:36.793358  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 12:01:36.800187  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 12:01:36.806591  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 12:01:36.812681  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
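
The string of "openssl x509 ... -checkend 86400" runs above asks, for each control-plane certificate, whether it will still be valid 24 hours from now; only certs that fail the check get regenerated on restart. Below is an equivalent test in Go using crypto/x509; the file name used in main is hypothetical.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires inside
    // the given window, the same question `openssl x509 -checkend` answers.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        // Hypothetical local copy of one of the certs checked above.
        soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Println("check failed:", err)
            return
        }
        if soon {
            fmt.Println("certificate expires within 24h; regenerate it")
        } else {
            fmt.Println("certificate is valid for at least another 24h")
        }
    }
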
	I0617 12:01:36.818814  165060 kubeadm.go:391] StartCluster: {Name:embed-certs-136195 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.1 ClusterName:embed-certs-136195 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.199 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:01:36.818903  165060 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 12:01:36.818945  165060 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:01:36.861839  165060 cri.go:89] found id: ""
	I0617 12:01:36.861920  165060 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0617 12:01:36.873500  165060 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0617 12:01:36.873529  165060 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0617 12:01:36.873551  165060 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0617 12:01:36.873602  165060 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0617 12:01:36.884767  165060 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0617 12:01:36.886013  165060 kubeconfig.go:125] found "embed-certs-136195" server: "https://192.168.72.199:8443"
	I0617 12:01:36.888144  165060 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0617 12:01:36.899204  165060 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.199
	I0617 12:01:36.899248  165060 kubeadm.go:1154] stopping kube-system containers ...
	I0617 12:01:36.899263  165060 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0617 12:01:36.899325  165060 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:01:36.941699  165060 cri.go:89] found id: ""
	I0617 12:01:36.941782  165060 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0617 12:01:36.960397  165060 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:01:36.971254  165060 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:01:36.971276  165060 kubeadm.go:156] found existing configuration files:
	
	I0617 12:01:36.971333  165060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:01:36.981367  165060 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:01:36.981448  165060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:01:36.991878  165060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:01:37.001741  165060 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:01:37.001816  165060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:01:37.012170  165060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:01:37.021914  165060 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:01:37.021979  165060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:01:37.031866  165060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:01:37.041657  165060 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:01:37.041706  165060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 12:01:37.051440  165060 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
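Note: the cleanup above greps each file under /etc/kubernetes for the expected control-plane endpoint and removes anything that does not reference it (here the files are simply absent), then promotes kubeadm.yaml.new. A rough stdlib sketch of that check, assuming direct local file access instead of minikube's ssh_runner:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, path := range confs {
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or stale endpoint: remove it so kubeadm regenerates it.
			if rmErr := os.Remove(path); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Fprintln(os.Stderr, "remove:", rmErr)
			}
			continue
		}
		fmt.Println("keeping", path)
	}
}
```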
	I0617 12:01:37.062543  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:37.175190  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:37.872053  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:38.085732  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:38.146895  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:38.208633  165060 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:01:38.208898  165060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:01:38.119805  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:38.297858  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:38.297905  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:38.120293  166594 retry.go:31] will retry after 1.769592858s: waiting for machine to come up
	I0617 12:01:39.892495  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:39.893035  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:39.893065  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:39.892948  166594 retry.go:31] will retry after 1.954570801s: waiting for machine to come up
	I0617 12:01:41.849587  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:41.850111  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:41.850140  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:41.850067  166594 retry.go:31] will retry after 3.44879626s: waiting for machine to come up
	I0617 12:01:38.708936  165060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:01:39.209014  165060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:01:39.709765  165060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:01:39.728309  165060 api_server.go:72] duration metric: took 1.519672652s to wait for apiserver process to appear ...
	I0617 12:01:39.728342  165060 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:01:39.728369  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:01:42.756054  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:01:42.756089  165060 api_server.go:103] status: https://192.168.72.199:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:01:42.756105  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:01:42.797646  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:01:42.797689  165060 api_server.go:103] status: https://192.168.72.199:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:01:43.229201  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:01:43.233440  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:01:43.233467  165060 api_server.go:103] status: https://192.168.72.199:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:01:43.728490  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:01:43.741000  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:01:43.741037  165060 api_server.go:103] status: https://192.168.72.199:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:01:44.228634  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:01:44.232839  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 200:
	ok
	I0617 12:01:44.238582  165060 api_server.go:141] control plane version: v1.30.1
	I0617 12:01:44.238606  165060 api_server.go:131] duration metric: took 4.510256755s to wait for apiserver health ...
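Note: the healthz wait above is a plain HTTPS poll against the apiserver; the 403 (anonymous request before RBAC bootstrap) and 500 (post-start hooks still running) responses are retried until a 200 "ok" arrives. A simplified poller follows, with the assumption that certificate verification is skipped for brevity (minikube actually trusts the cluster CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch only: skip cert verification instead of loading the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports "ok"
			}
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.199:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```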
	I0617 12:01:44.238615  165060 cni.go:84] Creating CNI manager for ""
	I0617 12:01:44.238622  165060 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:01:44.240569  165060 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0617 12:01:44.241963  165060 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0617 12:01:44.253143  165060 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
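Note: the 496-byte conflist itself is not printed in the log. The sketch below writes a bridge + host-local configuration of the typical shape to the same path; the subnet and plugin options are placeholders, not values read from this run.

```go
package main

import (
	"fmt"
	"os"
)

// A representative bridge CNI conflist; the exact contents minikube writes
// are not shown in this log, so treat the values below as placeholders.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```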
	I0617 12:01:44.286772  165060 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:01:44.295697  165060 system_pods.go:59] 8 kube-system pods found
	I0617 12:01:44.295736  165060 system_pods.go:61] "coredns-7db6d8ff4d-9bbjg" [1ba0eee5-436e-4c83-b5ce-3c907d66b641] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0617 12:01:44.295744  165060 system_pods.go:61] "etcd-embed-certs-136195" [6dc81a80-c56b-4517-af82-c450cf9578f5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0617 12:01:44.295757  165060 system_pods.go:61] "kube-apiserver-embed-certs-136195" [bd61a715-2471-4dca-aa48-a157531ebd6b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0617 12:01:44.295763  165060 system_pods.go:61] "kube-controller-manager-embed-certs-136195" [194db4b0-75c2-4905-8e4d-813185497b51] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0617 12:01:44.295768  165060 system_pods.go:61] "kube-proxy-25d5n" [52b6d09a-899f-40c4-b1f3-7842ae755165] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0617 12:01:44.295774  165060 system_pods.go:61] "kube-scheduler-embed-certs-136195" [b04d3798-f465-4f82-9ec7-777ea62d5b94] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0617 12:01:44.295782  165060 system_pods.go:61] "metrics-server-569cc877fc-dmhfs" [31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:01:44.295788  165060 system_pods.go:61] "storage-provisioner" [4b04a38a-5006-4496-a24d-0940029193de] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0617 12:01:44.295797  165060 system_pods.go:74] duration metric: took 9.004741ms to wait for pod list to return data ...
	I0617 12:01:44.295811  165060 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:01:44.298934  165060 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:01:44.298968  165060 node_conditions.go:123] node cpu capacity is 2
	I0617 12:01:44.298989  165060 node_conditions.go:105] duration metric: took 3.172465ms to run NodePressure ...
	I0617 12:01:44.299027  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:44.565943  165060 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0617 12:01:44.570796  165060 kubeadm.go:733] kubelet initialised
	I0617 12:01:44.570825  165060 kubeadm.go:734] duration metric: took 4.851024ms waiting for restarted kubelet to initialise ...
	I0617 12:01:44.570836  165060 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:01:44.575565  165060 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:44.582180  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.582209  165060 pod_ready.go:81] duration metric: took 6.620747ms for pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:44.582221  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.582231  165060 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:44.586828  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "etcd-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.586850  165060 pod_ready.go:81] duration metric: took 4.61059ms for pod "etcd-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:44.586859  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "etcd-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.586866  165060 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:44.591162  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.591189  165060 pod_ready.go:81] duration metric: took 4.316651ms for pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:44.591197  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.591204  165060 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:44.690269  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.690301  165060 pod_ready.go:81] duration metric: took 99.088803ms for pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:44.690310  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.690317  165060 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-25d5n" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:45.089616  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "kube-proxy-25d5n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.089640  165060 pod_ready.go:81] duration metric: took 399.31511ms for pod "kube-proxy-25d5n" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:45.089649  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "kube-proxy-25d5n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.089656  165060 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:45.491031  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.491058  165060 pod_ready.go:81] duration metric: took 401.395966ms for pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:45.491068  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.491074  165060 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:45.890606  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.890633  165060 pod_ready.go:81] duration metric: took 399.550946ms for pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:45.890644  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.890650  165060 pod_ready.go:38] duration metric: took 1.319802914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:01:45.890669  165060 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0617 12:01:45.903900  165060 ops.go:34] apiserver oom_adj: -16
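Note: a strongly negative oom_adj such as -16 makes the kernel OOM killer much less likely to target the apiserver. A small sketch of the same probe as `cat /proc/$(pgrep kube-apiserver)/oom_adj`, here using the newest matching process:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the newest kube-apiserver process, then read its oom_adj value.
	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "no kube-apiserver process:", err)
		os.Exit(1)
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("kube-apiserver pid %s oom_adj %s", pid, adj)
}
```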
	I0617 12:01:45.903936  165060 kubeadm.go:591] duration metric: took 9.03037731s to restartPrimaryControlPlane
	I0617 12:01:45.903950  165060 kubeadm.go:393] duration metric: took 9.085142288s to StartCluster
	I0617 12:01:45.903974  165060 settings.go:142] acquiring lock: {Name:mkf6da6d5dcdf32cef469c2b75da17d11fa1e39e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:01:45.904063  165060 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 12:01:45.905636  165060 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/kubeconfig: {Name:mkf81bd1831c0194f784e5c176b265c5061bea5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:01:45.905908  165060 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.199 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 12:01:45.907817  165060 out.go:177] * Verifying Kubernetes components...
	I0617 12:01:45.905981  165060 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0617 12:01:45.907852  165060 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-136195"
	I0617 12:01:45.907880  165060 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-136195"
	W0617 12:01:45.907890  165060 addons.go:243] addon storage-provisioner should already be in state true
	I0617 12:01:45.907903  165060 addons.go:69] Setting default-storageclass=true in profile "embed-certs-136195"
	I0617 12:01:45.906085  165060 config.go:182] Loaded profile config "embed-certs-136195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:01:45.909296  165060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:01:45.907923  165060 host.go:66] Checking if "embed-certs-136195" exists ...
	I0617 12:01:45.907924  165060 addons.go:69] Setting metrics-server=true in profile "embed-certs-136195"
	I0617 12:01:45.909472  165060 addons.go:234] Setting addon metrics-server=true in "embed-certs-136195"
	W0617 12:01:45.909481  165060 addons.go:243] addon metrics-server should already be in state true
	I0617 12:01:45.909506  165060 host.go:66] Checking if "embed-certs-136195" exists ...
	I0617 12:01:45.907954  165060 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-136195"
	I0617 12:01:45.909776  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.909822  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.909836  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.909861  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.909841  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.909928  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.925250  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36545
	I0617 12:01:45.925500  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38767
	I0617 12:01:45.925708  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.925929  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.926262  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.926282  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.926420  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.926445  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.926637  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.926728  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.927142  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.927171  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.927206  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.927236  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.929198  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33863
	I0617 12:01:45.929658  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.930137  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.930159  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.930465  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.930661  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetState
	I0617 12:01:45.934085  165060 addons.go:234] Setting addon default-storageclass=true in "embed-certs-136195"
	W0617 12:01:45.934107  165060 addons.go:243] addon default-storageclass should already be in state true
	I0617 12:01:45.934139  165060 host.go:66] Checking if "embed-certs-136195" exists ...
	I0617 12:01:45.934534  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.934579  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.944472  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44051
	I0617 12:01:45.945034  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.945712  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.945741  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.946105  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.946343  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetState
	I0617 12:01:45.946673  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43225
	I0617 12:01:45.947007  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.947706  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.947725  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.948027  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.948228  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetState
	I0617 12:01:45.948359  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:45.950451  165060 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0617 12:01:45.951705  165060 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0617 12:01:45.951719  165060 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0617 12:01:45.951735  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:45.949626  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:45.951588  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43695
	I0617 12:01:45.953222  165060 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:01:45.954471  165060 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:01:45.952290  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.954494  165060 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0617 12:01:45.954514  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:45.955079  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.955098  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.955123  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.955478  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.955718  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:45.955757  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.955924  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:45.956099  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.956106  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:45.956147  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.956374  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:45.956507  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:45.957756  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.958184  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:45.958206  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.958335  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:45.958505  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:45.958680  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:45.958825  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:45.977247  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39751
	I0617 12:01:45.977663  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.978179  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.978203  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.978524  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.978711  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetState
	I0617 12:01:45.980425  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:45.980601  165060 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0617 12:01:45.980616  165060 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0617 12:01:45.980630  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:45.983633  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.984088  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:45.984105  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.984258  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:45.984377  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:45.984505  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:45.984661  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:46.093292  165060 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:01:46.112779  165060 node_ready.go:35] waiting up to 6m0s for node "embed-certs-136195" to be "Ready" ...
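Note: the node_ready wait polls the node's Ready condition for up to 6 minutes. A rough local equivalent that asks kubectl for that condition (kubeconfig path taken from the log; minikube itself queries the API directly rather than shelling out):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
	"time"
)

// nodeReady asks kubectl for the node's Ready condition status.
func nodeReady(kubeconfig, node string) (bool, error) {
	cmd := exec.Command("kubectl", "get", "node", node,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	kubeconfig := "/home/jenkins/minikube-integration/19084-112967/kubeconfig" // path from the log
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if ready, err := nodeReady(kubeconfig, "embed-certs-136195"); err == nil && ready {
			fmt.Println("node embed-certs-136195 is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for node to become Ready")
}
```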
	I0617 12:01:46.182239  165060 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0617 12:01:46.248534  165060 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:01:46.286637  165060 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0617 12:01:46.286662  165060 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0617 12:01:46.313951  165060 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0617 12:01:46.313981  165060 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0617 12:01:46.337155  165060 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:01:46.337186  165060 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0617 12:01:46.389025  165060 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
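Note: the addon manifests are copied under /etc/kubernetes/addons and applied in a single kubectl invocation with the in-VM kubeconfig. A local sketch of that apply step, using the paths shown in the log; in the real run this executes inside the VM over SSH:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.30.1/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "kubectl apply failed:", err)
		os.Exit(1)
	}
}
```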
	I0617 12:01:46.548086  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:46.548106  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:46.548442  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:46.548461  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:46.548471  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:46.548481  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:46.548485  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:46.548727  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:46.548744  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:46.548764  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:46.554199  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:46.554218  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:46.554454  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:46.554469  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:46.554480  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:47.142290  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:47.142321  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:47.142629  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:47.142658  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:47.142671  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:47.142676  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:47.142692  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:47.142943  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:47.142971  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:47.142985  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:47.216339  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:47.216366  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:47.216658  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:47.216679  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:47.216690  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:47.216700  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:47.216709  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:47.216931  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:47.216967  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:47.216982  165060 addons.go:475] Verifying addon metrics-server=true in "embed-certs-136195"
	I0617 12:01:47.219627  165060 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0617 12:01:45.300413  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:45.300848  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:45.300878  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:45.300794  166594 retry.go:31] will retry after 3.892148485s: waiting for machine to come up
	I0617 12:01:47.220905  165060 addons.go:510] duration metric: took 1.314925386s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0617 12:01:48.116197  165060 node_ready.go:53] node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:50.500448  166103 start.go:364] duration metric: took 2m12.970832528s to acquireMachinesLock for "default-k8s-diff-port-991309"
	I0617 12:01:50.500511  166103 start.go:96] Skipping create...Using existing machine configuration
	I0617 12:01:50.500534  166103 fix.go:54] fixHost starting: 
	I0617 12:01:50.500980  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:50.501018  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:50.517593  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43641
	I0617 12:01:50.518035  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:50.518600  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:01:50.518635  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:50.519051  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:50.519296  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:01:50.519502  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetState
	I0617 12:01:50.521095  166103 fix.go:112] recreateIfNeeded on default-k8s-diff-port-991309: state=Stopped err=<nil>
	I0617 12:01:50.521123  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	W0617 12:01:50.521307  166103 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 12:01:50.522795  166103 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-991309" ...
	I0617 12:01:49.197189  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.197671  165698 main.go:141] libmachine: (old-k8s-version-003661) Found IP for machine: 192.168.61.164
	I0617 12:01:49.197697  165698 main.go:141] libmachine: (old-k8s-version-003661) Reserving static IP address...
	I0617 12:01:49.197714  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has current primary IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.198147  165698 main.go:141] libmachine: (old-k8s-version-003661) Reserved static IP address: 192.168.61.164
	I0617 12:01:49.198175  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "old-k8s-version-003661", mac: "52:54:00:76:66:a0", ip: "192.168.61.164"} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.198185  165698 main.go:141] libmachine: (old-k8s-version-003661) Waiting for SSH to be available...
	I0617 12:01:49.198217  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | skip adding static IP to network mk-old-k8s-version-003661 - found existing host DHCP lease matching {name: "old-k8s-version-003661", mac: "52:54:00:76:66:a0", ip: "192.168.61.164"}
	I0617 12:01:49.198227  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | Getting to WaitForSSH function...
	I0617 12:01:49.200478  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.200907  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.200935  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.201088  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | Using SSH client type: external
	I0617 12:01:49.201116  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa (-rw-------)
	I0617 12:01:49.201154  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 12:01:49.201169  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | About to run SSH command:
	I0617 12:01:49.201183  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | exit 0
	I0617 12:01:49.323763  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | SSH cmd err, output: <nil>: 
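Note: WaitForSSH simply retries `ssh ... exit 0` with host-key checking disabled until the command succeeds. A minimal sketch of that probe, reusing the options and key path printed above:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns true once `ssh docker@host exit 0` succeeds.
func sshReady(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@"+host, "exit 0")
	return cmd.Run() == nil
}

func main() {
	key := "/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa"
	for i := 0; i < 30; i++ {
		if sshReady("192.168.61.164", key) {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
```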
	I0617 12:01:49.324127  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetConfigRaw
	I0617 12:01:49.324835  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetIP
	I0617 12:01:49.327217  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.327628  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.327660  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.327891  165698 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/config.json ...
	I0617 12:01:49.328097  165698 machine.go:94] provisionDockerMachine start ...
	I0617 12:01:49.328120  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:49.328365  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:49.330587  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.330992  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.331033  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.331160  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:49.331324  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.331490  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.331637  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:49.331824  165698 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:49.332037  165698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 12:01:49.332049  165698 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 12:01:49.432170  165698 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0617 12:01:49.432201  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetMachineName
	I0617 12:01:49.432498  165698 buildroot.go:166] provisioning hostname "old-k8s-version-003661"
	I0617 12:01:49.432524  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetMachineName
	I0617 12:01:49.432730  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:49.435845  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.436276  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.436317  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.436507  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:49.436708  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.436909  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.437074  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:49.437289  165698 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:49.437496  165698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 12:01:49.437510  165698 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-003661 && echo "old-k8s-version-003661" | sudo tee /etc/hostname
	I0617 12:01:49.550158  165698 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-003661
	
	I0617 12:01:49.550187  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:49.553141  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.553509  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.553539  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.553737  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:49.553943  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.554141  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.554298  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:49.554520  165698 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:49.554759  165698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 12:01:49.554787  165698 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-003661' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-003661/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-003661' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 12:01:49.661049  165698 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 12:01:49.661079  165698 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 12:01:49.661106  165698 buildroot.go:174] setting up certificates
	I0617 12:01:49.661115  165698 provision.go:84] configureAuth start
	I0617 12:01:49.661124  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetMachineName
	I0617 12:01:49.661452  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetIP
	I0617 12:01:49.664166  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.664561  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.664591  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.664723  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:49.666845  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.667114  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.667158  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.667287  165698 provision.go:143] copyHostCerts
	I0617 12:01:49.667377  165698 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 12:01:49.667387  165698 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 12:01:49.667440  165698 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 12:01:49.667561  165698 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 12:01:49.667571  165698 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 12:01:49.667594  165698 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 12:01:49.667649  165698 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 12:01:49.667656  165698 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 12:01:49.667674  165698 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 12:01:49.667722  165698 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-003661 san=[127.0.0.1 192.168.61.164 localhost minikube old-k8s-version-003661]
	I0617 12:01:49.853671  165698 provision.go:177] copyRemoteCerts
	I0617 12:01:49.853736  165698 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 12:01:49.853767  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:49.856171  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.856540  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.856577  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.856737  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:49.857071  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.857220  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:49.857360  165698 sshutil.go:53] new ssh client: &{IP:192.168.61.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa Username:docker}
	I0617 12:01:49.938626  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 12:01:49.964401  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0617 12:01:49.988397  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0617 12:01:50.013356  165698 provision.go:87] duration metric: took 352.227211ms to configureAuth
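One way to double-check the SANs that were just baked into the generated server certificate is an openssl inspection on the host side; a minimal sketch (not part of the test run), using the server.pem path logged above:

  openssl x509 -noout -text \
    -in /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem \
    | grep -A1 'Subject Alternative Name'
  # per the san=[...] list logged above, this should report 127.0.0.1, 192.168.61.164,
  # localhost, minikube and old-k8s-version-003661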
	I0617 12:01:50.013382  165698 buildroot.go:189] setting minikube options for container-runtime
	I0617 12:01:50.013581  165698 config.go:182] Loaded profile config "old-k8s-version-003661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0617 12:01:50.013689  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:50.016168  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.016514  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.016548  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.016657  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:50.016847  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.017025  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.017152  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:50.017300  165698 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:50.017483  165698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 12:01:50.017505  165698 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 12:01:50.280037  165698 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 12:01:50.280065  165698 machine.go:97] duration metric: took 951.954687ms to provisionDockerMachine
	I0617 12:01:50.280076  165698 start.go:293] postStartSetup for "old-k8s-version-003661" (driver="kvm2")
	I0617 12:01:50.280086  165698 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 12:01:50.280102  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:50.280467  165698 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 12:01:50.280506  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:50.283318  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.283657  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.283684  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.283874  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:50.284106  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.284279  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:50.284402  165698 sshutil.go:53] new ssh client: &{IP:192.168.61.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa Username:docker}
	I0617 12:01:50.362452  165698 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 12:01:50.366699  165698 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 12:01:50.366726  165698 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 12:01:50.366788  165698 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 12:01:50.366878  165698 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 12:01:50.367004  165698 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 12:01:50.376706  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:01:50.399521  165698 start.go:296] duration metric: took 119.43167ms for postStartSetup
	I0617 12:01:50.399558  165698 fix.go:56] duration metric: took 19.670946478s for fixHost
	I0617 12:01:50.399578  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:50.402079  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.402465  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.402500  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.402649  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:50.402835  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.402994  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.403138  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:50.403321  165698 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:50.403529  165698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 12:01:50.403541  165698 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0617 12:01:50.500267  165698 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718625710.471154465
	
	I0617 12:01:50.500294  165698 fix.go:216] guest clock: 1718625710.471154465
	I0617 12:01:50.500304  165698 fix.go:229] Guest: 2024-06-17 12:01:50.471154465 +0000 UTC Remote: 2024-06-17 12:01:50.399561534 +0000 UTC m=+212.458541959 (delta=71.592931ms)
	I0617 12:01:50.500350  165698 fix.go:200] guest clock delta is within tolerance: 71.592931ms
	I0617 12:01:50.500355  165698 start.go:83] releasing machines lock for "old-k8s-version-003661", held for 19.771784344s
	I0617 12:01:50.500380  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:50.500648  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetIP
	I0617 12:01:50.503346  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.503749  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.503776  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.503974  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:50.504536  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:50.504676  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:50.504750  165698 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 12:01:50.504801  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:50.504861  165698 ssh_runner.go:195] Run: cat /version.json
	I0617 12:01:50.504890  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:50.507577  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.507736  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.508013  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.508041  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.508176  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.508200  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.508205  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:50.508335  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:50.508419  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.508499  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.508580  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:50.508691  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:50.508717  165698 sshutil.go:53] new ssh client: &{IP:192.168.61.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa Username:docker}
	I0617 12:01:50.508830  165698 sshutil.go:53] new ssh client: &{IP:192.168.61.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa Username:docker}
	I0617 12:01:50.585030  165698 ssh_runner.go:195] Run: systemctl --version
	I0617 12:01:50.612492  165698 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 12:01:50.765842  165698 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 12:01:50.773214  165698 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 12:01:50.773288  165698 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 12:01:50.793397  165698 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 12:01:50.793424  165698 start.go:494] detecting cgroup driver to use...
	I0617 12:01:50.793499  165698 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 12:01:50.811531  165698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 12:01:50.826223  165698 docker.go:217] disabling cri-docker service (if available) ...
	I0617 12:01:50.826289  165698 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 12:01:50.840517  165698 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 12:01:50.854788  165698 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 12:01:50.970328  165698 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 12:01:51.125815  165698 docker.go:233] disabling docker service ...
	I0617 12:01:51.125893  165698 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 12:01:51.146368  165698 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 12:01:51.161459  165698 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 12:01:51.346032  165698 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 12:01:51.503395  165698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 12:01:51.521021  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 12:01:51.543851  165698 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0617 12:01:51.543905  165698 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:51.556230  165698 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 12:01:51.556309  165698 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:51.573061  165698 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:51.588663  165698 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:51.601086  165698 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 12:01:51.617347  165698 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 12:01:51.634502  165698 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 12:01:51.634635  165698 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 12:01:51.652813  165698 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 12:01:51.665145  165698 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:01:51.826713  165698 ssh_runner.go:195] Run: sudo systemctl restart crio
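The sed edits above rewrite individual keys in CRI-O's drop-in config, so their net effect can be spot-checked from the guest after the restart; a hedged sketch, assuming the profile is still running:

  minikube ssh -p old-k8s-version-003661 \
    "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf"
  # expected, per the commands logged above:
  #   pause_image = "registry.k8s.io/pause:3.2"
  #   cgroup_manager = "cgroupfs"
  #   conmon_cgroup = "pod"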
	I0617 12:01:51.981094  165698 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 12:01:51.981186  165698 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 12:01:51.986026  165698 start.go:562] Will wait 60s for crictl version
	I0617 12:01:51.986091  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:51.990253  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 12:01:52.032543  165698 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 12:01:52.032631  165698 ssh_runner.go:195] Run: crio --version
	I0617 12:01:52.063904  165698 ssh_runner.go:195] Run: crio --version
	I0617 12:01:52.097158  165698 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
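The runtime probe above (crictl version followed by crio --version) can be reproduced by hand; a hedged sketch, assuming the old-k8s-version-003661 profile is still up:

  minikube ssh -p old-k8s-version-003661 "sudo crictl version"
  minikube ssh -p old-k8s-version-003661 "crio --version"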
	I0617 12:01:50.524130  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Start
	I0617 12:01:50.524321  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Ensuring networks are active...
	I0617 12:01:50.524939  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Ensuring network default is active
	I0617 12:01:50.525300  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Ensuring network mk-default-k8s-diff-port-991309 is active
	I0617 12:01:50.527342  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Getting domain xml...
	I0617 12:01:50.528126  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Creating domain...
	I0617 12:01:51.864887  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting to get IP...
	I0617 12:01:51.865835  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:51.866246  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:51.866328  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:51.866228  166802 retry.go:31] will retry after 200.163407ms: waiting for machine to come up
	I0617 12:01:52.067708  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.068164  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.068193  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:52.068119  166802 retry.go:31] will retry after 364.503903ms: waiting for machine to come up
	I0617 12:01:52.098675  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetIP
	I0617 12:01:52.102187  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:52.102572  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:52.102603  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:52.102823  165698 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0617 12:01:52.107573  165698 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:01:52.121312  165698 kubeadm.go:877] updating cluster {Name:old-k8s-version-003661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-003661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.164 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 12:01:52.121448  165698 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0617 12:01:52.121515  165698 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:01:52.181796  165698 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0617 12:01:52.181891  165698 ssh_runner.go:195] Run: which lz4
	I0617 12:01:52.186827  165698 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0617 12:01:52.191806  165698 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0617 12:01:52.191875  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0617 12:01:50.116573  165060 node_ready.go:53] node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:52.122162  165060 node_ready.go:53] node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:53.117556  165060 node_ready.go:49] node "embed-certs-136195" has status "Ready":"True"
	I0617 12:01:53.117589  165060 node_ready.go:38] duration metric: took 7.004769746s for node "embed-certs-136195" to be "Ready" ...
	I0617 12:01:53.117598  165060 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:01:53.125606  165060 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:53.131618  165060 pod_ready.go:92] pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:53.131643  165060 pod_ready.go:81] duration metric: took 6.000929ms for pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:53.131654  165060 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:52.434791  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.435584  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.435740  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:52.435665  166802 retry.go:31] will retry after 486.514518ms: waiting for machine to come up
	I0617 12:01:52.924190  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.924819  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.924845  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:52.924681  166802 retry.go:31] will retry after 520.971301ms: waiting for machine to come up
	I0617 12:01:53.447437  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:53.447965  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:53.447995  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:53.447919  166802 retry.go:31] will retry after 622.761044ms: waiting for machine to come up
	I0617 12:01:54.072700  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:54.073170  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:54.073202  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:54.073112  166802 retry.go:31] will retry after 671.940079ms: waiting for machine to come up
	I0617 12:01:54.746830  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:54.747342  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:54.747372  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:54.747310  166802 retry.go:31] will retry after 734.856022ms: waiting for machine to come up
	I0617 12:01:55.484571  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:55.485127  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:55.485157  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:55.485066  166802 retry.go:31] will retry after 1.198669701s: waiting for machine to come up
	I0617 12:01:56.685201  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:56.685468  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:56.685493  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:56.685440  166802 retry.go:31] will retry after 1.562509853s: waiting for machine to come up
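The "waiting for machine to come up" retries above are the KVM driver polling libvirt for a DHCP lease on the profile's network; an equivalent manual check (a sketch, assuming the qemu:///system connection these profiles are configured with) would be:

  sudo virsh net-dhcp-leases mk-default-k8s-diff-port-991309
  # or, once the domain has an address:
  sudo virsh domifaddr default-k8s-diff-port-991309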
	I0617 12:01:54.026903  165698 crio.go:462] duration metric: took 1.840117639s to copy over tarball
	I0617 12:01:54.027003  165698 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0617 12:01:57.049870  165698 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.022814584s)
	I0617 12:01:57.049904  165698 crio.go:469] duration metric: took 3.022967677s to extract the tarball
	I0617 12:01:57.049914  165698 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0617 12:01:57.094589  165698 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:01:57.133299  165698 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0617 12:01:57.133331  165698 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0617 12:01:57.133431  165698 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:01:57.133451  165698 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0617 12:01:57.133456  165698 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0617 12:01:57.133477  165698 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0617 12:01:57.133431  165698 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0617 12:01:57.133530  165698 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0617 12:01:57.133431  165698 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 12:01:57.133626  165698 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0617 12:01:57.135979  165698 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 12:01:57.135990  165698 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0617 12:01:57.135994  165698 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0617 12:01:57.135979  165698 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0617 12:01:57.135985  165698 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:01:57.135979  165698 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0617 12:01:57.136041  165698 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0617 12:01:57.136041  165698 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0617 12:01:57.289271  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0617 12:01:57.299061  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 12:01:57.322581  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0617 12:01:57.336462  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0617 12:01:57.337619  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0617 12:01:57.350335  165698 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0617 12:01:57.350395  165698 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0617 12:01:57.350448  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.357972  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0617 12:01:57.391517  165698 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0617 12:01:57.391563  165698 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 12:01:57.391640  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.419438  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0617 12:01:57.442111  165698 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0617 12:01:57.442154  165698 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0617 12:01:57.442200  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.450145  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:01:57.485873  165698 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0617 12:01:57.485922  165698 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0617 12:01:57.485942  165698 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0617 12:01:57.485957  165698 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0617 12:01:57.485996  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.486003  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.486053  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0617 12:01:57.490584  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 12:01:57.490669  165698 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0617 12:01:57.490714  165698 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0617 12:01:57.490755  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.551564  165698 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0617 12:01:57.551597  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0617 12:01:57.551619  165698 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0617 12:01:57.551662  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.660683  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0617 12:01:57.660732  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0617 12:01:57.660799  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0617 12:01:57.660856  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0617 12:01:57.660734  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0617 12:01:57.660903  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0617 12:01:57.660930  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0617 12:01:57.753965  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0617 12:01:57.753981  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0617 12:01:57.754069  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0617 12:01:57.754069  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0617 12:01:57.754146  165698 cache_images.go:92] duration metric: took 620.797178ms to LoadCachedImages
	W0617 12:01:57.754271  165698 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0617 12:01:57.754292  165698 kubeadm.go:928] updating node { 192.168.61.164 8443 v1.20.0 crio true true} ...
	I0617 12:01:57.754415  165698 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-003661 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-003661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 12:01:57.754489  165698 ssh_runner.go:195] Run: crio config
	I0617 12:01:57.807120  165698 cni.go:84] Creating CNI manager for ""
	I0617 12:01:57.807144  165698 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:01:57.807158  165698 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 12:01:57.807182  165698 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.164 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-003661 NodeName:old-k8s-version-003661 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0617 12:01:57.807370  165698 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.164
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-003661"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.164
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.164"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0617 12:01:57.807437  165698 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0617 12:01:57.817865  165698 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 12:01:57.817940  165698 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 12:01:57.829796  165698 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0617 12:01:57.847758  165698 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 12:01:57.866182  165698 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0617 12:01:57.884500  165698 ssh_runner.go:195] Run: grep 192.168.61.164	control-plane.minikube.internal$ /etc/hosts
	I0617 12:01:57.888852  165698 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.164	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:01:57.902176  165698 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:01:55.138418  165060 pod_ready.go:102] pod "etcd-embed-certs-136195" in "kube-system" namespace has status "Ready":"False"
	I0617 12:01:55.641014  165060 pod_ready.go:92] pod "etcd-embed-certs-136195" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:55.641047  165060 pod_ready.go:81] duration metric: took 2.509383461s for pod "etcd-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:55.641061  165060 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.151759  165060 pod_ready.go:92] pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:56.151788  165060 pod_ready.go:81] duration metric: took 510.718192ms for pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.152027  165060 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.157234  165060 pod_ready.go:92] pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:56.157260  165060 pod_ready.go:81] duration metric: took 5.220069ms for pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.157273  165060 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-25d5n" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.161767  165060 pod_ready.go:92] pod "kube-proxy-25d5n" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:56.161787  165060 pod_ready.go:81] duration metric: took 4.50732ms for pod "kube-proxy-25d5n" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.161796  165060 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.717763  165060 pod_ready.go:92] pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:56.717865  165060 pod_ready.go:81] duration metric: took 556.058292ms for pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.717892  165060 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:58.249594  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:58.250033  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:58.250069  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:58.250019  166802 retry.go:31] will retry after 2.154567648s: waiting for machine to come up
	I0617 12:02:00.406269  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:00.406668  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:02:00.406702  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:02:00.406615  166802 retry.go:31] will retry after 2.065044206s: waiting for machine to come up
	I0617 12:01:58.049361  165698 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:01:58.067893  165698 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661 for IP: 192.168.61.164
	I0617 12:01:58.067924  165698 certs.go:194] generating shared ca certs ...
	I0617 12:01:58.067945  165698 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:01:58.068162  165698 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 12:01:58.068221  165698 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 12:01:58.068236  165698 certs.go:256] generating profile certs ...
	I0617 12:01:58.068352  165698 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/client.key
	I0617 12:01:58.068438  165698 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/apiserver.key.6c1f259c
	I0617 12:01:58.068493  165698 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/proxy-client.key
	I0617 12:01:58.068647  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 12:01:58.068690  165698 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 12:01:58.068704  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 12:01:58.068743  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 12:01:58.068790  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 12:01:58.068824  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 12:01:58.068877  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:01:58.069548  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 12:01:58.109048  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 12:01:58.134825  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 12:01:58.159910  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 12:01:58.191108  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0617 12:01:58.217407  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 12:01:58.242626  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 12:01:58.267261  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0617 12:01:58.291562  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 12:01:58.321848  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 12:01:58.352361  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 12:01:58.379343  165698 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 12:01:58.399146  165698 ssh_runner.go:195] Run: openssl version
	I0617 12:01:58.405081  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 12:01:58.415471  165698 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:58.420046  165698 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:58.420099  165698 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:58.425886  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 12:01:58.436575  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 12:01:58.447166  165698 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 12:01:58.451523  165698 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 12:01:58.451582  165698 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 12:01:58.457670  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
	I0617 12:01:58.468667  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 12:01:58.479095  165698 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 12:01:58.483744  165698 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 12:01:58.483796  165698 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 12:01:58.489520  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 12:01:58.500298  165698 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 12:01:58.504859  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 12:01:58.510619  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 12:01:58.516819  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 12:01:58.522837  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 12:01:58.528736  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 12:01:58.534585  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
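
Note: each openssl run above asks whether the certificate expires within the next 86400 seconds (24h). A rough Go equivalent using crypto/x509 (the path and the 24h window are taken from the log; the helper name is illustrative, minikube itself shells out to openssl here):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // expires within d, mirroring `openssl x509 -checkend`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("expires within 24h:", expiring)
    }
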
	I0617 12:01:58.540464  165698 kubeadm.go:391] StartCluster: {Name:old-k8s-version-003661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-003661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.164 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:01:58.540549  165698 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 12:01:58.540624  165698 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:01:58.583638  165698 cri.go:89] found id: ""
	I0617 12:01:58.583724  165698 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0617 12:01:58.594266  165698 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0617 12:01:58.594290  165698 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0617 12:01:58.594295  165698 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0617 12:01:58.594354  165698 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0617 12:01:58.604415  165698 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0617 12:01:58.605367  165698 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-003661" does not appear in /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 12:01:58.605949  165698 kubeconfig.go:62] /home/jenkins/minikube-integration/19084-112967/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-003661" cluster setting kubeconfig missing "old-k8s-version-003661" context setting]
	I0617 12:01:58.606833  165698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/kubeconfig: {Name:mkf81bd1831c0194f784e5c176b265c5061bea5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:01:58.662621  165698 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0617 12:01:58.673813  165698 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.164
	I0617 12:01:58.673848  165698 kubeadm.go:1154] stopping kube-system containers ...
	I0617 12:01:58.673863  165698 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0617 12:01:58.673907  165698 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:01:58.712607  165698 cri.go:89] found id: ""
	I0617 12:01:58.712703  165698 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0617 12:01:58.731676  165698 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:01:58.741645  165698 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:01:58.741666  165698 kubeadm.go:156] found existing configuration files:
	
	I0617 12:01:58.741709  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:01:58.750871  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:01:58.750931  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:01:58.760545  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:01:58.769701  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:01:58.769776  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:01:58.779348  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:01:58.788507  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:01:58.788566  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:01:58.799220  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:01:58.808403  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:01:58.808468  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 12:01:58.818169  165698 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:01:58.828079  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:58.962164  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:59.679319  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:59.903216  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:00.026243  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:00.126201  165698 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:02:00.126314  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:00.627227  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:01.126836  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:01.626524  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:02.126619  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:02.626434  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:01:58.727229  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:01.226021  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:02.473035  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:02.473477  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:02:02.473505  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:02:02.473458  166802 retry.go:31] will retry after 3.132988331s: waiting for machine to come up
	I0617 12:02:05.607981  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:05.608354  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:02:05.608391  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:02:05.608310  166802 retry.go:31] will retry after 3.312972752s: waiting for machine to come up
	I0617 12:02:03.126687  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:03.626469  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:04.126347  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:04.626548  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:05.127142  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:05.626937  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:06.126479  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:06.626466  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:07.126806  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:07.626814  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
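
Note: the repeated pgrep runs above are a fixed-interval poll for the kube-apiserver process, retried roughly every 500ms until it appears. A small, self-contained sketch of that pattern (pattern, interval and timeout are illustrative; this is not minikube's actual implementation):

    package main

    import (
        "context"
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    // waitForProcess polls `pgrep -xnf pattern` until it succeeds or the
    // context expires.
    func waitForProcess(ctx context.Context, pattern string, interval time.Duration) error {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            if err := exec.CommandContext(ctx, "pgrep", "-xnf", pattern).Run(); err == nil {
                return nil // pgrep exits 0 once a matching process exists
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
        defer cancel()
        if err := waitForProcess(ctx, "kube-apiserver.*minikube.*", 500*time.Millisecond); err != nil {
            log.Fatal(err)
        }
        fmt.Println("kube-apiserver is up")
    }
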
	I0617 12:02:03.724216  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:06.224335  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:08.224842  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:10.217135  164809 start.go:364] duration metric: took 54.298812889s to acquireMachinesLock for "no-preload-152830"
	I0617 12:02:10.217192  164809 start.go:96] Skipping create...Using existing machine configuration
	I0617 12:02:10.217204  164809 fix.go:54] fixHost starting: 
	I0617 12:02:10.217633  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:10.217673  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:10.238636  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44149
	I0617 12:02:10.239091  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:10.239596  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:02:10.239622  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:10.239997  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:10.240214  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:10.240397  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetState
	I0617 12:02:10.242141  164809 fix.go:112] recreateIfNeeded on no-preload-152830: state=Stopped err=<nil>
	I0617 12:02:10.242162  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	W0617 12:02:10.242324  164809 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 12:02:10.244888  164809 out.go:177] * Restarting existing kvm2 VM for "no-preload-152830" ...
	I0617 12:02:08.922547  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:08.922966  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Found IP for machine: 192.168.50.125
	I0617 12:02:08.922996  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Reserving static IP address...
	I0617 12:02:08.923013  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has current primary IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:08.923437  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-991309", mac: "52:54:00:4e:6e:f5", ip: "192.168.50.125"} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:08.923484  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Reserved static IP address: 192.168.50.125
	I0617 12:02:08.923514  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | skip adding static IP to network mk-default-k8s-diff-port-991309 - found existing host DHCP lease matching {name: "default-k8s-diff-port-991309", mac: "52:54:00:4e:6e:f5", ip: "192.168.50.125"}
	I0617 12:02:08.923533  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Getting to WaitForSSH function...
	I0617 12:02:08.923550  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for SSH to be available...
	I0617 12:02:08.925667  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:08.926017  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:08.926050  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:08.926203  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Using SSH client type: external
	I0617 12:02:08.926228  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa (-rw-------)
	I0617 12:02:08.926269  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 12:02:08.926290  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | About to run SSH command:
	I0617 12:02:08.926316  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | exit 0
	I0617 12:02:09.051973  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | SSH cmd err, output: <nil>: 
	I0617 12:02:09.052329  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetConfigRaw
	I0617 12:02:09.052946  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetIP
	I0617 12:02:09.055156  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.055509  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.055541  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.055748  166103 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/config.json ...
	I0617 12:02:09.055940  166103 machine.go:94] provisionDockerMachine start ...
	I0617 12:02:09.055960  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:09.056162  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.058451  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.058826  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.058860  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.058961  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.059155  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.059289  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.059440  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.059583  166103 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:09.059796  166103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0617 12:02:09.059813  166103 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 12:02:09.163974  166103 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0617 12:02:09.164020  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetMachineName
	I0617 12:02:09.164281  166103 buildroot.go:166] provisioning hostname "default-k8s-diff-port-991309"
	I0617 12:02:09.164312  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetMachineName
	I0617 12:02:09.164499  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.167194  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.167606  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.167632  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.167856  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.168097  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.168285  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.168414  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.168571  166103 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:09.168795  166103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0617 12:02:09.168811  166103 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-991309 && echo "default-k8s-diff-port-991309" | sudo tee /etc/hostname
	I0617 12:02:09.290435  166103 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-991309
	
	I0617 12:02:09.290470  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.293538  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.293879  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.293902  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.294132  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.294361  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.294574  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.294753  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.294943  166103 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:09.295188  166103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0617 12:02:09.295209  166103 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-991309' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-991309/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-991309' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 12:02:09.408702  166103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 12:02:09.408736  166103 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 12:02:09.408777  166103 buildroot.go:174] setting up certificates
	I0617 12:02:09.408789  166103 provision.go:84] configureAuth start
	I0617 12:02:09.408798  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetMachineName
	I0617 12:02:09.409122  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetIP
	I0617 12:02:09.411936  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.412304  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.412335  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.412522  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.414598  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.414914  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.414942  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.415054  166103 provision.go:143] copyHostCerts
	I0617 12:02:09.415121  166103 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 12:02:09.415132  166103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 12:02:09.415182  166103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 12:02:09.415264  166103 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 12:02:09.415271  166103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 12:02:09.415290  166103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 12:02:09.415344  166103 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 12:02:09.415353  166103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 12:02:09.415378  166103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 12:02:09.415439  166103 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-991309 san=[127.0.0.1 192.168.50.125 default-k8s-diff-port-991309 localhost minikube]
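
Note: provision.go above generates a server certificate whose SANs cover 127.0.0.1, the VM IP, the machine name, localhost and minikube. A self-signed sketch with crypto/x509 that reproduces the same SAN list (the real code signs against the CA key pair under .minikube/certs, which is omitted here):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "log"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-991309"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs matching the log line above.
            DNSNames:    []string{"default-k8s-diff-port-991309", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.125")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("created %d-byte DER server certificate", len(der))
    }
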
	I0617 12:02:09.534010  166103 provision.go:177] copyRemoteCerts
	I0617 12:02:09.534082  166103 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 12:02:09.534121  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.536707  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.537143  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.537176  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.537352  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.537516  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.537687  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.537840  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:09.622292  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0617 12:02:09.652653  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0617 12:02:09.676801  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 12:02:09.700701  166103 provision.go:87] duration metric: took 291.898478ms to configureAuth
	I0617 12:02:09.700734  166103 buildroot.go:189] setting minikube options for container-runtime
	I0617 12:02:09.700931  166103 config.go:182] Loaded profile config "default-k8s-diff-port-991309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:02:09.701023  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.703710  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.704138  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.704171  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.704330  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.704537  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.704730  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.704895  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.705058  166103 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:09.705243  166103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0617 12:02:09.705262  166103 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 12:02:09.974077  166103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 12:02:09.974109  166103 machine.go:97] duration metric: took 918.156221ms to provisionDockerMachine
	I0617 12:02:09.974120  166103 start.go:293] postStartSetup for "default-k8s-diff-port-991309" (driver="kvm2")
	I0617 12:02:09.974131  166103 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 12:02:09.974155  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:09.974502  166103 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 12:02:09.974544  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.977677  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.978073  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.978097  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.978225  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.978407  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.978583  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.978734  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:10.067068  166103 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 12:02:10.071843  166103 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 12:02:10.071870  166103 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 12:02:10.071934  166103 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 12:02:10.072024  166103 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 12:02:10.072128  166103 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 12:02:10.082041  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:02:10.107855  166103 start.go:296] duration metric: took 133.717924ms for postStartSetup
	I0617 12:02:10.107903  166103 fix.go:56] duration metric: took 19.607369349s for fixHost
	I0617 12:02:10.107932  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:10.110742  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.111135  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:10.111169  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.111294  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:10.111527  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:10.111674  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:10.111861  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:10.111980  166103 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:10.112205  166103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0617 12:02:10.112220  166103 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0617 12:02:10.216945  166103 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718625730.186446687
	
	I0617 12:02:10.216973  166103 fix.go:216] guest clock: 1718625730.186446687
	I0617 12:02:10.216983  166103 fix.go:229] Guest: 2024-06-17 12:02:10.186446687 +0000 UTC Remote: 2024-06-17 12:02:10.107909348 +0000 UTC m=+152.716337101 (delta=78.537339ms)
	I0617 12:02:10.217033  166103 fix.go:200] guest clock delta is within tolerance: 78.537339ms
	I0617 12:02:10.217039  166103 start.go:83] releasing machines lock for "default-k8s-diff-port-991309", held for 19.716554323s
	I0617 12:02:10.217073  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:10.217363  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetIP
	I0617 12:02:10.220429  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.220897  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:10.220927  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.221083  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:10.221655  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:10.221870  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:10.221965  166103 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 12:02:10.222026  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:10.222094  166103 ssh_runner.go:195] Run: cat /version.json
	I0617 12:02:10.222122  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:10.225337  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.225673  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.225710  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:10.225730  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.226015  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:10.226172  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:10.226202  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:10.226242  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.226363  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:10.226447  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:10.226508  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:10.226591  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:10.226687  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:10.226840  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:10.334316  166103 ssh_runner.go:195] Run: systemctl --version
	I0617 12:02:10.340584  166103 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 12:02:10.489359  166103 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 12:02:10.497198  166103 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 12:02:10.497267  166103 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 12:02:10.517001  166103 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 12:02:10.517032  166103 start.go:494] detecting cgroup driver to use...
	I0617 12:02:10.517110  166103 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 12:02:10.536520  166103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 12:02:10.550478  166103 docker.go:217] disabling cri-docker service (if available) ...
	I0617 12:02:10.550542  166103 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 12:02:10.564437  166103 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 12:02:10.578554  166103 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 12:02:10.710346  166103 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 12:02:10.891637  166103 docker.go:233] disabling docker service ...
	I0617 12:02:10.891694  166103 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 12:02:10.908300  166103 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 12:02:10.921663  166103 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 12:02:11.062715  166103 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 12:02:11.201061  166103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 12:02:11.216120  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 12:02:11.237213  166103 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0617 12:02:11.237286  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.248171  166103 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 12:02:11.248238  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.259159  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.270217  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.280841  166103 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 12:02:11.291717  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.302084  166103 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.319559  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.331992  166103 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 12:02:11.342435  166103 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 12:02:11.342494  166103 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 12:02:11.357436  166103 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 12:02:11.367406  166103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:02:11.493416  166103 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0617 12:02:11.629980  166103 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 12:02:11.630055  166103 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 12:02:11.636456  166103 start.go:562] Will wait 60s for crictl version
	I0617 12:02:11.636540  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:02:11.642817  166103 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 12:02:11.681563  166103 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 12:02:11.681655  166103 ssh_runner.go:195] Run: crio --version
	I0617 12:02:11.712576  166103 ssh_runner.go:195] Run: crio --version
	I0617 12:02:11.753826  166103 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0617 12:02:11.755256  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetIP
	I0617 12:02:11.758628  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:11.759006  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:11.759041  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:11.759252  166103 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0617 12:02:11.763743  166103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:02:11.780286  166103 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-991309 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-991309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 12:02:11.780455  166103 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 12:02:11.780528  166103 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:02:11.819396  166103 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0617 12:02:11.819481  166103 ssh_runner.go:195] Run: which lz4
	I0617 12:02:11.824047  166103 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0617 12:02:11.828770  166103 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0617 12:02:11.828807  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0617 12:02:08.127233  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:08.626498  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:09.126712  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:09.627284  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:10.126446  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:10.627249  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:11.126428  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:11.626638  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:12.127091  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:12.627361  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:10.226209  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:12.227824  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:10.246388  164809 main.go:141] libmachine: (no-preload-152830) Calling .Start
	I0617 12:02:10.246608  164809 main.go:141] libmachine: (no-preload-152830) Ensuring networks are active...
	I0617 12:02:10.247397  164809 main.go:141] libmachine: (no-preload-152830) Ensuring network default is active
	I0617 12:02:10.247789  164809 main.go:141] libmachine: (no-preload-152830) Ensuring network mk-no-preload-152830 is active
	I0617 12:02:10.248192  164809 main.go:141] libmachine: (no-preload-152830) Getting domain xml...
	I0617 12:02:10.248869  164809 main.go:141] libmachine: (no-preload-152830) Creating domain...
	I0617 12:02:11.500721  164809 main.go:141] libmachine: (no-preload-152830) Waiting to get IP...
	I0617 12:02:11.501614  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:11.502169  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:11.502254  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:11.502131  166976 retry.go:31] will retry after 281.343691ms: waiting for machine to come up
	I0617 12:02:11.785597  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:11.786047  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:11.786082  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:11.785983  166976 retry.go:31] will retry after 303.221815ms: waiting for machine to come up
	I0617 12:02:12.090367  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:12.090919  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:12.090945  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:12.090826  166976 retry.go:31] will retry after 422.250116ms: waiting for machine to come up
	I0617 12:02:12.514456  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:12.515026  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:12.515055  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:12.515001  166976 retry.go:31] will retry after 513.394077ms: waiting for machine to come up
	I0617 12:02:13.029811  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:13.030495  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:13.030522  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:13.030449  166976 retry.go:31] will retry after 596.775921ms: waiting for machine to come up
	I0617 12:02:13.387031  166103 crio.go:462] duration metric: took 1.563017054s to copy over tarball
	I0617 12:02:13.387108  166103 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0617 12:02:15.664139  166103 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.276994761s)
	I0617 12:02:15.664177  166103 crio.go:469] duration metric: took 2.277117031s to extract the tarball
	I0617 12:02:15.664188  166103 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0617 12:02:15.703690  166103 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:02:15.757605  166103 crio.go:514] all images are preloaded for cri-o runtime.
	I0617 12:02:15.757634  166103 cache_images.go:84] Images are preloaded, skipping loading
	I0617 12:02:15.757644  166103 kubeadm.go:928] updating node { 192.168.50.125 8444 v1.30.1 crio true true} ...
	I0617 12:02:15.757784  166103 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-991309 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-991309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 12:02:15.757874  166103 ssh_runner.go:195] Run: crio config
	I0617 12:02:15.808350  166103 cni.go:84] Creating CNI manager for ""
	I0617 12:02:15.808380  166103 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:02:15.808397  166103 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 12:02:15.808434  166103 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.125 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-991309 NodeName:default-k8s-diff-port-991309 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0617 12:02:15.808633  166103 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.125
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-991309"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0617 12:02:15.808709  166103 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0617 12:02:15.818891  166103 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 12:02:15.818964  166103 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 12:02:15.828584  166103 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0617 12:02:15.846044  166103 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 12:02:15.862572  166103 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0617 12:02:15.880042  166103 ssh_runner.go:195] Run: grep 192.168.50.125	control-plane.minikube.internal$ /etc/hosts
	I0617 12:02:15.884470  166103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:02:15.897031  166103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:02:16.013826  166103 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:02:16.030366  166103 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309 for IP: 192.168.50.125
	I0617 12:02:16.030391  166103 certs.go:194] generating shared ca certs ...
	I0617 12:02:16.030408  166103 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:02:16.030590  166103 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 12:02:16.030650  166103 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 12:02:16.030668  166103 certs.go:256] generating profile certs ...
	I0617 12:02:16.030793  166103 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/client.key
	I0617 12:02:16.030876  166103 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/apiserver.key.02769a34
	I0617 12:02:16.030919  166103 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/proxy-client.key
	I0617 12:02:16.031024  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 12:02:16.031051  166103 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 12:02:16.031060  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 12:02:16.031080  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 12:02:16.031103  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 12:02:16.031122  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 12:02:16.031179  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:02:16.031991  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 12:02:16.066789  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 12:02:16.094522  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 12:02:16.119693  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 12:02:16.155810  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0617 12:02:16.186788  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 12:02:16.221221  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 12:02:16.248948  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0617 12:02:16.273404  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 12:02:16.296958  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 12:02:16.320047  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 12:02:16.349598  166103 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 12:02:16.367499  166103 ssh_runner.go:195] Run: openssl version
	I0617 12:02:16.373596  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 12:02:16.384778  166103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 12:02:16.389521  166103 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 12:02:16.389574  166103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 12:02:16.395523  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
	I0617 12:02:16.406357  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 12:02:16.417139  166103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 12:02:16.421629  166103 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 12:02:16.421679  166103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 12:02:16.427323  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 12:02:16.438649  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 12:02:16.450042  166103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:16.454587  166103 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:16.454636  166103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:16.460677  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 12:02:16.472886  166103 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 12:02:16.477630  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 12:02:16.483844  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 12:02:16.490123  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 12:02:16.497606  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 12:02:16.504066  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 12:02:16.510597  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0617 12:02:16.518270  166103 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-991309 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-991309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:02:16.518371  166103 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 12:02:16.518439  166103 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:02:16.569103  166103 cri.go:89] found id: ""
	I0617 12:02:16.569179  166103 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0617 12:02:16.580328  166103 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0617 12:02:16.580353  166103 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0617 12:02:16.580360  166103 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0617 12:02:16.580409  166103 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0617 12:02:16.591277  166103 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0617 12:02:16.592450  166103 kubeconfig.go:125] found "default-k8s-diff-port-991309" server: "https://192.168.50.125:8444"
	I0617 12:02:16.594770  166103 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0617 12:02:16.605669  166103 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.125
	I0617 12:02:16.605728  166103 kubeadm.go:1154] stopping kube-system containers ...
	I0617 12:02:16.605745  166103 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0617 12:02:16.605810  166103 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:02:16.654529  166103 cri.go:89] found id: ""
	I0617 12:02:16.654620  166103 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0617 12:02:16.672923  166103 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:02:16.683485  166103 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:02:16.683514  166103 kubeadm.go:156] found existing configuration files:
	
	I0617 12:02:16.683576  166103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0617 12:02:16.693533  166103 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:02:16.693614  166103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:02:16.703670  166103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0617 12:02:16.716352  166103 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:02:16.716413  166103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:02:16.729336  166103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0617 12:02:16.739183  166103 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:02:16.739249  166103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:02:16.748978  166103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0617 12:02:16.758195  166103 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:02:16.758262  166103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 12:02:16.767945  166103 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:02:16.777773  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:16.919605  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:13.126836  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:13.626460  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:14.127261  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:14.627161  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:15.126580  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:15.627082  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:16.127163  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:16.626524  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:17.126469  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:17.626488  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:14.728717  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:17.225452  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:13.629097  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:13.629723  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:13.629826  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:13.629705  166976 retry.go:31] will retry after 588.18471ms: waiting for machine to come up
	I0617 12:02:14.219111  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:14.219672  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:14.219704  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:14.219611  166976 retry.go:31] will retry after 889.359727ms: waiting for machine to come up
	I0617 12:02:15.110916  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:15.111528  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:15.111559  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:15.111473  166976 retry.go:31] will retry after 1.139454059s: waiting for machine to come up
	I0617 12:02:16.252051  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:16.252601  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:16.252636  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:16.252534  166976 retry.go:31] will retry after 1.189357648s: waiting for machine to come up
	I0617 12:02:17.443845  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:17.444370  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:17.444403  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:17.444310  166976 retry.go:31] will retry after 1.614769478s: waiting for machine to come up
	I0617 12:02:18.068811  166103 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.149162388s)
	I0617 12:02:18.068870  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:18.301209  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:18.362153  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:18.454577  166103 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:02:18.454674  166103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:18.954929  166103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:19.454795  166103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:19.505453  166103 api_server.go:72] duration metric: took 1.050874914s to wait for apiserver process to appear ...
	I0617 12:02:19.505490  166103 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:02:19.505518  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:19.506056  166103 api_server.go:269] stopped: https://192.168.50.125:8444/healthz: Get "https://192.168.50.125:8444/healthz": dial tcp 192.168.50.125:8444: connect: connection refused
	I0617 12:02:20.005681  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:22.216162  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:02:22.216214  166103 api_server.go:103] status: https://192.168.50.125:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:02:22.216234  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:22.239561  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:02:22.239635  166103 api_server.go:103] status: https://192.168.50.125:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:02:18.126897  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:18.627145  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:19.126724  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:19.626498  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:20.126389  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:20.627190  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:21.126480  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:21.627210  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:22.127273  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:22.626691  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:19.227344  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:21.725689  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:19.061035  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:19.061555  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:19.061588  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:19.061520  166976 retry.go:31] will retry after 2.385838312s: waiting for machine to come up
	I0617 12:02:21.448745  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:21.449239  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:21.449266  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:21.449208  166976 retry.go:31] will retry after 3.308788046s: waiting for machine to come up
	I0617 12:02:22.505636  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:22.509888  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:02:22.509916  166103 api_server.go:103] status: https://192.168.50.125:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:02:23.006285  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:23.011948  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:02:23.011983  166103 api_server.go:103] status: https://192.168.50.125:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:02:23.505640  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:23.510358  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 200:
	ok
	I0617 12:02:23.516663  166103 api_server.go:141] control plane version: v1.30.1
	I0617 12:02:23.516686  166103 api_server.go:131] duration metric: took 4.011188976s to wait for apiserver health ...
	I0617 12:02:23.516694  166103 cni.go:84] Creating CNI manager for ""
	I0617 12:02:23.516700  166103 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:02:23.518498  166103 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0617 12:02:23.519722  166103 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0617 12:02:23.530145  166103 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0617 12:02:23.552805  166103 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:02:23.564825  166103 system_pods.go:59] 8 kube-system pods found
	I0617 12:02:23.564853  166103 system_pods.go:61] "coredns-7db6d8ff4d-mnw24" [1e6c4ff3-f0dc-43da-abd8-baaed7dca40c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0617 12:02:23.564863  166103 system_pods.go:61] "etcd-default-k8s-diff-port-991309" [820a4f27-cf83-4edb-a2ea-edba6673d851] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0617 12:02:23.564871  166103 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-991309" [26e6c19d-6f70-4924-83f5-563c8508c9e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0617 12:02:23.564877  166103 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-991309" [01e7c468-98a6-48f3-a158-59e97fa8279c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0617 12:02:23.564885  166103 system_pods.go:61] "kube-proxy-jn5kp" [d6935148-7ee8-4655-8327-9f1ee4c933de] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0617 12:02:23.564894  166103 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-991309" [53ecd22c-05cf-48a5-b7e5-925392085f7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0617 12:02:23.564899  166103 system_pods.go:61] "metrics-server-569cc877fc-n2svp" [5b637d97-3183-4324-98cf-dd69a2968578] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:02:23.564908  166103 system_pods.go:61] "storage-provisioner" [92b20aec-29c2-4256-86be-7f58f66585dd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0617 12:02:23.564913  166103 system_pods.go:74] duration metric: took 12.089276ms to wait for pod list to return data ...
	I0617 12:02:23.564919  166103 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:02:23.573455  166103 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:02:23.573480  166103 node_conditions.go:123] node cpu capacity is 2
	I0617 12:02:23.573492  166103 node_conditions.go:105] duration metric: took 8.568721ms to run NodePressure ...
	I0617 12:02:23.573509  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:23.918292  166103 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0617 12:02:23.922992  166103 kubeadm.go:733] kubelet initialised
	I0617 12:02:23.923019  166103 kubeadm.go:734] duration metric: took 4.69627ms waiting for restarted kubelet to initialise ...
	I0617 12:02:23.923027  166103 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:02:23.927615  166103 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:23.932203  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.932225  166103 pod_ready.go:81] duration metric: took 4.590359ms for pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:23.932233  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.932239  166103 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:23.936802  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.936825  166103 pod_ready.go:81] duration metric: took 4.579036ms for pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:23.936835  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.936840  166103 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:23.942877  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.942903  166103 pod_ready.go:81] duration metric: took 6.055748ms for pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:23.942927  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.942935  166103 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:23.955830  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.955851  166103 pod_ready.go:81] duration metric: took 12.903911ms for pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:23.955861  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.955869  166103 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jn5kp" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:24.356654  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "kube-proxy-jn5kp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:24.356682  166103 pod_ready.go:81] duration metric: took 400.805294ms for pod "kube-proxy-jn5kp" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:24.356692  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "kube-proxy-jn5kp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:24.356699  166103 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:24.765108  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:24.765133  166103 pod_ready.go:81] duration metric: took 408.42568ms for pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:24.765145  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:24.765152  166103 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:25.156898  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:25.156927  166103 pod_ready.go:81] duration metric: took 391.769275ms for pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:25.156939  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:25.156946  166103 pod_ready.go:38] duration metric: took 1.233911476s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:02:25.156968  166103 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0617 12:02:25.170925  166103 ops.go:34] apiserver oom_adj: -16
	I0617 12:02:25.170963  166103 kubeadm.go:591] duration metric: took 8.590593327s to restartPrimaryControlPlane
	I0617 12:02:25.170976  166103 kubeadm.go:393] duration metric: took 8.652716269s to StartCluster
	I0617 12:02:25.170998  166103 settings.go:142] acquiring lock: {Name:mkf6da6d5dcdf32cef469c2b75da17d11fa1e39e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:02:25.171111  166103 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 12:02:25.173919  166103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/kubeconfig: {Name:mkf81bd1831c0194f784e5c176b265c5061bea5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:02:25.174286  166103 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.125 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 12:02:25.176186  166103 out.go:177] * Verifying Kubernetes components...
	I0617 12:02:25.174347  166103 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0617 12:02:25.174528  166103 config.go:182] Loaded profile config "default-k8s-diff-port-991309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:02:25.177622  166103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:02:25.177632  166103 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-991309"
	I0617 12:02:25.177670  166103 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-991309"
	W0617 12:02:25.177684  166103 addons.go:243] addon metrics-server should already be in state true
	I0617 12:02:25.177721  166103 host.go:66] Checking if "default-k8s-diff-port-991309" exists ...
	I0617 12:02:25.177622  166103 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-991309"
	I0617 12:02:25.177789  166103 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-991309"
	W0617 12:02:25.177806  166103 addons.go:243] addon storage-provisioner should already be in state true
	I0617 12:02:25.177837  166103 host.go:66] Checking if "default-k8s-diff-port-991309" exists ...
	I0617 12:02:25.177628  166103 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-991309"
	I0617 12:02:25.177875  166103 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-991309"
	I0617 12:02:25.178173  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.178202  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.178251  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.178282  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.178299  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.178318  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.198817  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32781
	I0617 12:02:25.199064  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36763
	I0617 12:02:25.199513  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39825
	I0617 12:02:25.199902  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.199919  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.200633  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.201080  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.201110  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.201270  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.201286  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.201415  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.201427  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.201482  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.201786  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.201845  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.202268  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.202637  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.202663  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetState
	I0617 12:02:25.202989  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.203038  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.206439  166103 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-991309"
	W0617 12:02:25.206462  166103 addons.go:243] addon default-storageclass should already be in state true
	I0617 12:02:25.206492  166103 host.go:66] Checking if "default-k8s-diff-port-991309" exists ...
	I0617 12:02:25.206875  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.206921  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.218501  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37189
	I0617 12:02:25.218532  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34089
	I0617 12:02:25.218912  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.218986  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.219410  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.219429  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.219545  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.219561  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.219917  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.219920  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.220110  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetState
	I0617 12:02:25.220111  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetState
	I0617 12:02:25.221839  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:25.223920  166103 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0617 12:02:25.225213  166103 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0617 12:02:25.225232  166103 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0617 12:02:25.225260  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:25.224029  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:25.228780  166103 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:25.227545  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46073
	I0617 12:02:25.230084  166103 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:02:25.230100  166103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0617 12:02:25.230113  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:25.228465  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.229054  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:25.230179  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:25.229303  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.230215  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.230371  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:25.230542  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:25.230674  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:25.230723  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.230737  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.231150  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.231772  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.231802  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.234036  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.234476  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:25.234494  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.234755  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:25.234919  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:25.235079  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:25.235235  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:25.248352  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46349
	I0617 12:02:25.248851  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.249306  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.249330  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.249681  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.249873  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetState
	I0617 12:02:25.251282  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:25.251512  166103 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0617 12:02:25.251529  166103 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0617 12:02:25.251551  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:25.253963  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.254458  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:25.254484  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.254628  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:25.254941  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:25.255229  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:25.255385  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:25.391207  166103 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:02:25.411906  166103 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-991309" to be "Ready" ...
	I0617 12:02:25.476025  166103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0617 12:02:25.566470  166103 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0617 12:02:25.566500  166103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0617 12:02:25.593744  166103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:02:25.620336  166103 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0617 12:02:25.620371  166103 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0617 12:02:25.700009  166103 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:02:25.700048  166103 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0617 12:02:25.769841  166103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:02:25.782207  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:25.782240  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:25.782576  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Closing plugin on server side
	I0617 12:02:25.782597  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:25.782610  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:25.782623  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:25.782632  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:25.782888  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:25.782916  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:25.789639  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:25.789662  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:25.789921  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:25.789941  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:26.600819  166103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.007014283s)
	I0617 12:02:26.600883  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:26.600898  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:26.600902  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:26.600917  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:26.601253  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:26.601295  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:26.601305  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:26.601325  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:26.601342  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:26.601353  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:26.601366  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:26.601370  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:26.601571  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Closing plugin on server side
	I0617 12:02:26.601590  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:26.601600  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:26.601615  166103 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-991309"
	I0617 12:02:26.601626  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:26.601635  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Closing plugin on server side
	I0617 12:02:26.601638  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:26.604200  166103 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0617 12:02:26.605477  166103 addons.go:510] duration metric: took 1.431148263s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0617 12:02:27.415122  166103 node_ready.go:53] node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.126888  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:23.627274  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:24.127019  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:24.627337  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:25.126642  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:25.627064  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:26.126606  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:26.626803  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:27.126825  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:27.626799  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:24.223344  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:26.225129  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:24.760577  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:24.761063  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:24.761095  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:24.760999  166976 retry.go:31] will retry after 3.793168135s: waiting for machine to come up
	I0617 12:02:28.558153  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.558708  164809 main.go:141] libmachine: (no-preload-152830) Found IP for machine: 192.168.39.173
	I0617 12:02:28.558735  164809 main.go:141] libmachine: (no-preload-152830) Reserving static IP address...
	I0617 12:02:28.558751  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has current primary IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.559214  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "no-preload-152830", mac: "52:54:00:c0:1a:fb", ip: "192.168.39.173"} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.559248  164809 main.go:141] libmachine: (no-preload-152830) DBG | skip adding static IP to network mk-no-preload-152830 - found existing host DHCP lease matching {name: "no-preload-152830", mac: "52:54:00:c0:1a:fb", ip: "192.168.39.173"}
	I0617 12:02:28.559263  164809 main.go:141] libmachine: (no-preload-152830) Reserved static IP address: 192.168.39.173
	I0617 12:02:28.559278  164809 main.go:141] libmachine: (no-preload-152830) Waiting for SSH to be available...
	I0617 12:02:28.559295  164809 main.go:141] libmachine: (no-preload-152830) DBG | Getting to WaitForSSH function...
	I0617 12:02:28.562122  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.562453  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.562482  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.562678  164809 main.go:141] libmachine: (no-preload-152830) DBG | Using SSH client type: external
	I0617 12:02:28.562706  164809 main.go:141] libmachine: (no-preload-152830) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa (-rw-------)
	I0617 12:02:28.562739  164809 main.go:141] libmachine: (no-preload-152830) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.173 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 12:02:28.562753  164809 main.go:141] libmachine: (no-preload-152830) DBG | About to run SSH command:
	I0617 12:02:28.562770  164809 main.go:141] libmachine: (no-preload-152830) DBG | exit 0
	I0617 12:02:28.687683  164809 main.go:141] libmachine: (no-preload-152830) DBG | SSH cmd err, output: <nil>: 
	I0617 12:02:28.688021  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetConfigRaw
	I0617 12:02:28.688649  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetIP
	I0617 12:02:28.691248  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.691585  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.691609  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.691895  164809 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/config.json ...
	I0617 12:02:28.692109  164809 machine.go:94] provisionDockerMachine start ...
	I0617 12:02:28.692132  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:28.692371  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:28.694371  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.694738  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.694766  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.694942  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:28.695130  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.695309  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.695490  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:28.695695  164809 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:28.695858  164809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0617 12:02:28.695869  164809 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 12:02:28.803687  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0617 12:02:28.803726  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetMachineName
	I0617 12:02:28.803996  164809 buildroot.go:166] provisioning hostname "no-preload-152830"
	I0617 12:02:28.804031  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetMachineName
	I0617 12:02:28.804333  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:28.806959  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.807395  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.807424  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.807547  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:28.807725  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.807895  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.808057  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:28.808216  164809 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:28.808420  164809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0617 12:02:28.808436  164809 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-152830 && echo "no-preload-152830" | sudo tee /etc/hostname
	I0617 12:02:28.931222  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-152830
	
	I0617 12:02:28.931259  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:28.934188  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.934536  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.934564  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.934822  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:28.935048  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.935218  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.935353  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:28.935593  164809 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:28.935814  164809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0617 12:02:28.935837  164809 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-152830' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-152830/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-152830' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 12:02:29.054126  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 12:02:29.054156  164809 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 12:02:29.054173  164809 buildroot.go:174] setting up certificates
	I0617 12:02:29.054184  164809 provision.go:84] configureAuth start
	I0617 12:02:29.054195  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetMachineName
	I0617 12:02:29.054490  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetIP
	I0617 12:02:29.057394  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.057797  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.057830  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.057963  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:29.060191  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.060485  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.060514  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.060633  164809 provision.go:143] copyHostCerts
	I0617 12:02:29.060708  164809 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 12:02:29.060722  164809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 12:02:29.060796  164809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 12:02:29.060963  164809 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 12:02:29.060978  164809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 12:02:29.061003  164809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 12:02:29.061065  164809 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 12:02:29.061072  164809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 12:02:29.061090  164809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 12:02:29.061139  164809 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.no-preload-152830 san=[127.0.0.1 192.168.39.173 localhost minikube no-preload-152830]
	I0617 12:02:29.321179  164809 provision.go:177] copyRemoteCerts
	I0617 12:02:29.321232  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 12:02:29.321256  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:29.324217  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.324612  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.324642  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.324836  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:29.325043  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.325227  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:29.325386  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:02:29.410247  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 12:02:29.435763  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0617 12:02:29.462900  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0617 12:02:29.491078  164809 provision.go:87] duration metric: took 436.876068ms to configureAuth
	I0617 12:02:29.491120  164809 buildroot.go:189] setting minikube options for container-runtime
	I0617 12:02:29.491377  164809 config.go:182] Loaded profile config "no-preload-152830": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:02:29.491522  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:29.494581  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.495019  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.495052  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.495245  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:29.495555  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.495766  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.495897  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:29.496068  164809 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:29.496275  164809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0617 12:02:29.496296  164809 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 12:02:29.774692  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 12:02:29.774730  164809 machine.go:97] duration metric: took 1.082604724s to provisionDockerMachine
	I0617 12:02:29.774748  164809 start.go:293] postStartSetup for "no-preload-152830" (driver="kvm2")
	I0617 12:02:29.774765  164809 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 12:02:29.774785  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:29.775181  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 12:02:29.775220  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:29.778574  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.778959  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.778988  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.779154  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:29.779351  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.779575  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:29.779750  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:02:29.866959  164809 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 12:02:29.871319  164809 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 12:02:29.871348  164809 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 12:02:29.871425  164809 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 12:02:29.871535  164809 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 12:02:29.871648  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 12:02:29.881995  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:02:29.907614  164809 start.go:296] duration metric: took 132.84708ms for postStartSetup
	I0617 12:02:29.907669  164809 fix.go:56] duration metric: took 19.690465972s for fixHost
	I0617 12:02:29.907695  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:29.910226  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.910617  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.910644  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.910811  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:29.911162  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.911377  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.911571  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:29.911772  164809 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:29.911961  164809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0617 12:02:29.911972  164809 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0617 12:02:30.021051  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718625749.993041026
	
	I0617 12:02:30.021079  164809 fix.go:216] guest clock: 1718625749.993041026
	I0617 12:02:30.021088  164809 fix.go:229] Guest: 2024-06-17 12:02:29.993041026 +0000 UTC Remote: 2024-06-17 12:02:29.907674102 +0000 UTC m=+356.579226401 (delta=85.366924ms)
	I0617 12:02:30.021113  164809 fix.go:200] guest clock delta is within tolerance: 85.366924ms
	I0617 12:02:30.021120  164809 start.go:83] releasing machines lock for "no-preload-152830", held for 19.803953246s
	I0617 12:02:30.021148  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:30.021403  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetIP
	I0617 12:02:30.024093  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.024600  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:30.024633  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.024830  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:30.025380  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:30.025552  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:30.025623  164809 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 12:02:30.025668  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:30.025767  164809 ssh_runner.go:195] Run: cat /version.json
	I0617 12:02:30.025798  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:30.028656  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.028826  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.029037  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:30.029068  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.029294  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:30.029336  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:30.029366  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.029528  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:30.029536  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:30.029764  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:30.029776  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:30.029957  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:02:30.029984  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:30.030161  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:02:30.135901  164809 ssh_runner.go:195] Run: systemctl --version
	I0617 12:02:30.142668  164809 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 12:02:30.296485  164809 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 12:02:30.302789  164809 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 12:02:30.302856  164809 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 12:02:30.319775  164809 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 12:02:30.319793  164809 start.go:494] detecting cgroup driver to use...
	I0617 12:02:30.319894  164809 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 12:02:30.335498  164809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 12:02:30.349389  164809 docker.go:217] disabling cri-docker service (if available) ...
	I0617 12:02:30.349427  164809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 12:02:30.363086  164809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 12:02:30.377383  164809 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 12:02:30.499956  164809 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 12:02:30.644098  164809 docker.go:233] disabling docker service ...
	I0617 12:02:30.644178  164809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 12:02:30.661490  164809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 12:02:30.675856  164809 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 12:02:30.819937  164809 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 12:02:30.932926  164809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 12:02:30.947638  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 12:02:30.966574  164809 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0617 12:02:30.966648  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:30.978339  164809 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 12:02:30.978416  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:30.989950  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:31.000644  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:31.011280  164809 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 12:02:31.022197  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:31.032780  164809 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:31.050053  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
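Taken together, the sed edits above (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) converge on a CRI-O drop-in roughly like the following. This is only a sketch of /etc/crio/crio.conf.d/02-crio.conf after the edits, not a dump from this run; the surrounding keys and section layout in the ISO's file may differ.

	# sketch of /etc/crio/crio.conf.d/02-crio.conf after the edits above (assumed layout)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]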
	I0617 12:02:31.062065  164809 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 12:02:31.073296  164809 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 12:02:31.073368  164809 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 12:02:31.087733  164809 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
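When the bridge-netfilter sysctl cannot be read, the sequence above falls back to loading the kernel module and then enabling IPv4 forwarding. A minimal standalone version of that fallback, using the same paths as the log:

	# /proc/sys/net/bridge/* only appears once the br_netfilter module is loaded.
	sudo modprobe br_netfilter
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	# Re-check; a remaining failure is treated as non-fatal above ("might be okay").
	sudo sysctl net.bridge.bridge-nf-call-iptables || true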
	I0617 12:02:31.098019  164809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:02:31.232495  164809 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0617 12:02:31.371236  164809 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 12:02:31.371312  164809 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 12:02:31.376442  164809 start.go:562] Will wait 60s for crictl version
	I0617 12:02:31.376522  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.380416  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 12:02:31.426664  164809 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 12:02:31.426763  164809 ssh_runner.go:195] Run: crio --version
	I0617 12:02:31.456696  164809 ssh_runner.go:195] Run: crio --version
	I0617 12:02:31.487696  164809 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0617 12:02:29.416369  166103 node_ready.go:53] node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:31.417357  166103 node_ready.go:53] node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:28.126854  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:28.627278  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:29.126577  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:29.626475  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:30.127193  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:30.627229  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:31.126478  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:31.626336  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:32.126398  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:32.627005  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:28.724801  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:30.726589  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:33.225707  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:31.488972  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetIP
	I0617 12:02:31.491812  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:31.492191  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:31.492220  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:31.492411  164809 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0617 12:02:31.497100  164809 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
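The hosts-file update above is a filter-then-append pattern. A commented version of the same command, with the IP and hostname from this run:

	# Rebuild /etc/hosts with exactly one host.minikube.internal entry. The
	# rewrite is staged in a temp file because sudo elevates the command, not
	# the shell redirection; only the final cp needs root.
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '192.168.39.1\thost.minikube.internal\n'
	} > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts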
	I0617 12:02:31.510949  164809 kubeadm.go:877] updating cluster {Name:no-preload-152830 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:no-preload-152830 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 12:02:31.511079  164809 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 12:02:31.511114  164809 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:02:31.546350  164809 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0617 12:02:31.546377  164809 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.1 registry.k8s.io/kube-controller-manager:v1.30.1 registry.k8s.io/kube-scheduler:v1.30.1 registry.k8s.io/kube-proxy:v1.30.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0617 12:02:31.546440  164809 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:31.546452  164809 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0617 12:02:31.546478  164809 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0617 12:02:31.546485  164809 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0617 12:02:31.546513  164809 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0617 12:02:31.546513  164809 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0617 12:02:31.546458  164809 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0617 12:02:31.546569  164809 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0617 12:02:31.548101  164809 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0617 12:02:31.548123  164809 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0617 12:02:31.548123  164809 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0617 12:02:31.548137  164809 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:31.548101  164809 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0617 12:02:31.548104  164809 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0617 12:02:31.548103  164809 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0617 12:02:31.548427  164809 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0617 12:02:31.714107  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0617 12:02:31.714819  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0617 12:02:31.715764  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0617 12:02:31.721844  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.1
	I0617 12:02:31.722172  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1
	I0617 12:02:31.739873  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1
	I0617 12:02:31.746705  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1
	I0617 12:02:31.814194  164809 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0617 12:02:31.814235  164809 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0617 12:02:31.814273  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.849549  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:31.950803  164809 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0617 12:02:31.950858  164809 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0617 12:02:31.950907  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.950934  164809 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.1" does not exist at hash "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c" in container runtime
	I0617 12:02:31.950959  164809 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0617 12:02:31.950992  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.951005  164809 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.1" does not exist at hash "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035" in container runtime
	I0617 12:02:31.951030  164809 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.1" does not exist at hash "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a" in container runtime
	I0617 12:02:31.951053  164809 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.1
	I0617 12:02:31.951090  164809 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.1" does not exist at hash "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd" in container runtime
	I0617 12:02:31.951103  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.951113  164809 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.1
	I0617 12:02:31.951146  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.951053  164809 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.1
	I0617 12:02:31.951179  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.951217  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0617 12:02:31.951266  164809 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0617 12:02:31.951289  164809 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:31.951319  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.967596  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.1
	I0617 12:02:31.967802  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0617 12:02:32.018505  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:32.018542  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1
	I0617 12:02:32.018623  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.1
	I0617 12:02:32.018664  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0617 12:02:32.018738  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.1
	I0617 12:02:32.018755  164809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0617 12:02:32.026154  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1
	I0617 12:02:32.026270  164809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.1
	I0617 12:02:32.046161  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0617 12:02:32.046288  164809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0617 12:02:32.126665  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0617 12:02:32.126755  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0617 12:02:32.126765  164809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0617 12:02:32.126814  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1
	I0617 12:02:32.126829  164809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0617 12:02:32.126867  164809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0617 12:02:32.126898  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0617 12:02:32.126911  164809 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0617 12:02:32.126935  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0617 12:02:32.126965  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1
	I0617 12:02:32.127008  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.1 (exists)
	I0617 12:02:32.127058  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0617 12:02:32.127060  164809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0617 12:02:32.142790  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.1 (exists)
	I0617 12:02:32.142816  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.1 (exists)
	I0617 12:02:32.143132  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0617 12:02:32.915885  166103 node_ready.go:49] node "default-k8s-diff-port-991309" has status "Ready":"True"
	I0617 12:02:32.915912  166103 node_ready.go:38] duration metric: took 7.503979113s for node "default-k8s-diff-port-991309" to be "Ready" ...
	I0617 12:02:32.915924  166103 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:02:32.921198  166103 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:34.927290  166103 pod_ready.go:102] pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:33.126753  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:33.627017  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:34.126558  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:34.626976  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:35.126410  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:35.627309  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:36.126958  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:36.626349  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:37.126815  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:37.627332  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:35.724326  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:37.725145  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:36.125679  164809 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.1: (3.998551072s)
	I0617 12:02:36.125727  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.1 (exists)
	I0617 12:02:36.125773  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.998809852s)
	I0617 12:02:36.125804  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0617 12:02:36.125838  164809 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.1
	I0617 12:02:36.125894  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1
	I0617 12:02:37.885028  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1: (1.759100554s)
	I0617 12:02:37.885054  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 from cache
	I0617 12:02:37.885073  164809 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0617 12:02:37.885122  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0617 12:02:37.429419  166103 pod_ready.go:102] pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:39.933476  166103 pod_ready.go:92] pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:39.933508  166103 pod_ready.go:81] duration metric: took 7.012285571s for pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.933521  166103 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.940139  166103 pod_ready.go:92] pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:39.940162  166103 pod_ready.go:81] duration metric: took 6.633405ms for pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.940175  166103 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.945285  166103 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:39.945305  166103 pod_ready.go:81] duration metric: took 5.12303ms for pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.945317  166103 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.950992  166103 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:39.951021  166103 pod_ready.go:81] duration metric: took 5.6962ms for pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.951034  166103 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jn5kp" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.955874  166103 pod_ready.go:92] pod "kube-proxy-jn5kp" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:39.955894  166103 pod_ready.go:81] duration metric: took 4.852842ms for pod "kube-proxy-jn5kp" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.955905  166103 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:40.327000  166103 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:40.327035  166103 pod_ready.go:81] duration metric: took 371.121545ms for pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:40.327049  166103 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:42.334620  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:38.126868  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:38.627367  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:39.127148  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:39.626571  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:40.126379  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:40.626747  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:41.126485  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:41.626372  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:42.126904  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:42.627293  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:39.727666  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:42.223700  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:39.992863  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.10770953s)
	I0617 12:02:39.992903  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0617 12:02:39.992934  164809 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0617 12:02:39.992989  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0617 12:02:41.851420  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1: (1.858400961s)
	I0617 12:02:41.851452  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 from cache
	I0617 12:02:41.851508  164809 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0617 12:02:41.851578  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0617 12:02:44.833842  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:46.834443  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:43.127137  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:43.626521  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:44.127017  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:44.626824  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:45.126475  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:45.626535  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:46.127423  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:46.626605  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:47.127029  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:47.627431  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:44.224685  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:46.225071  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:44.211669  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1: (2.360046418s)
	I0617 12:02:44.211702  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 from cache
	I0617 12:02:44.211726  164809 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0617 12:02:44.211795  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0617 12:02:45.162389  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0617 12:02:45.162456  164809 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0617 12:02:45.162542  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0617 12:02:47.414088  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1: (2.251500525s)
	I0617 12:02:47.414130  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 from cache
	I0617 12:02:47.414164  164809 cache_images.go:123] Successfully loaded all cached images
	I0617 12:02:47.414172  164809 cache_images.go:92] duration metric: took 15.867782566s to LoadCachedImages
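The cache-loading path that just completed boils down to: inspect what the runtime already has, keep the cached tarballs that are already on the VM, then podman-load each archive in turn. A condensed sketch, with the tarball names from this run and the per-image existence checks omitted:

	# Load the prebuilt image archives into the storage CRI-O uses, then list
	# what the runtime sees.
	for tar in etcd_3.5.12-0 kube-proxy_v1.30.1 coredns_v1.11.1 \
	           kube-scheduler_v1.30.1 kube-controller-manager_v1.30.1 \
	           storage-provisioner_v5 kube-apiserver_v1.30.1; do
	  sudo podman load -i "/var/lib/minikube/images/$tar"
	done
	sudo crictl images --output json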
	I0617 12:02:47.414195  164809 kubeadm.go:928] updating node { 192.168.39.173 8443 v1.30.1 crio true true} ...
	I0617 12:02:47.414359  164809 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-152830 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.173
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:no-preload-152830 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 12:02:47.414451  164809 ssh_runner.go:195] Run: crio config
	I0617 12:02:47.466472  164809 cni.go:84] Creating CNI manager for ""
	I0617 12:02:47.466493  164809 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:02:47.466503  164809 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 12:02:47.466531  164809 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.173 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-152830 NodeName:no-preload-152830 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.173"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.173 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0617 12:02:47.466716  164809 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.173
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-152830"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.173
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.173"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0617 12:02:47.466793  164809 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0617 12:02:47.478163  164809 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 12:02:47.478255  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 12:02:47.488014  164809 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0617 12:02:47.505143  164809 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 12:02:47.522481  164809 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0617 12:02:47.545714  164809 ssh_runner.go:195] Run: grep 192.168.39.173	control-plane.minikube.internal$ /etc/hosts
	I0617 12:02:47.551976  164809 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.173	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:02:47.565374  164809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:02:47.694699  164809 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:02:47.714017  164809 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830 for IP: 192.168.39.173
	I0617 12:02:47.714044  164809 certs.go:194] generating shared ca certs ...
	I0617 12:02:47.714064  164809 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:02:47.714260  164809 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 12:02:47.714321  164809 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 12:02:47.714335  164809 certs.go:256] generating profile certs ...
	I0617 12:02:47.714419  164809 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/client.key
	I0617 12:02:47.714504  164809 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/apiserver.key.d2d5b47b
	I0617 12:02:47.714547  164809 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/proxy-client.key
	I0617 12:02:47.714655  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 12:02:47.714684  164809 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 12:02:47.714693  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 12:02:47.714719  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 12:02:47.714745  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 12:02:47.714780  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 12:02:47.714815  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:02:47.715578  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 12:02:47.767301  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 12:02:47.804542  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 12:02:47.842670  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 12:02:47.874533  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0617 12:02:47.909752  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 12:02:47.940097  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 12:02:47.965441  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0617 12:02:47.990862  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 12:02:48.015935  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 12:02:48.041408  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 12:02:48.066557  164809 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 12:02:48.084630  164809 ssh_runner.go:195] Run: openssl version
	I0617 12:02:48.091098  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 12:02:48.102447  164809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 12:02:48.107238  164809 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 12:02:48.107299  164809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 12:02:48.113682  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
	I0617 12:02:48.124472  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 12:02:48.135897  164809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 12:02:48.140859  164809 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 12:02:48.140915  164809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 12:02:48.147113  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 12:02:48.158192  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 12:02:48.169483  164809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:48.174241  164809 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:48.174294  164809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:48.180093  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
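The certificate installs above follow OpenSSL's hashed-directory convention: copy the PEM into /usr/share/ca-certificates, then symlink it into /etc/ssl/certs under its subject hash. A minimal version for the minikube CA; the hash value is the one computed in this run:

	# Make a CA discoverable via OpenSSL's hashed lookup in /etc/ssl/certs.
	pem=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$pem")   # b5213941 in this run
	sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"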
	I0617 12:02:48.191082  164809 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 12:02:48.195770  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 12:02:48.201743  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 12:02:48.207452  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 12:02:48.213492  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 12:02:48.219435  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 12:02:48.226202  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
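Each of the expiry probes above relies on openssl's -checkend flag, which exits non-zero if the certificate expires within the given number of seconds. The same checks collapse to a loop over the control-plane certificates listed in the log:

	# Flag any control-plane certificate that expires within 24h (86400s).
	for crt in apiserver-kubelet-client.crt apiserver-etcd-client.crt \
	           etcd/server.crt etcd/healthcheck-client.crt etcd/peer.crt \
	           front-proxy-client.crt; do
	  openssl x509 -noout -in "/var/lib/minikube/certs/$crt" -checkend 86400 \
	    || echo "renew needed: $crt"
	done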
	I0617 12:02:48.232291  164809 kubeadm.go:391] StartCluster: {Name:no-preload-152830 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.1 ClusterName:no-preload-152830 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:02:48.232409  164809 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 12:02:48.232448  164809 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:02:48.272909  164809 cri.go:89] found id: ""
	I0617 12:02:48.272972  164809 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0617 12:02:48.284185  164809 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0617 12:02:48.284212  164809 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0617 12:02:48.284221  164809 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0617 12:02:48.284266  164809 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0617 12:02:48.294653  164809 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0617 12:02:48.296091  164809 kubeconfig.go:125] found "no-preload-152830" server: "https://192.168.39.173:8443"
	I0617 12:02:48.298438  164809 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0617 12:02:48.307905  164809 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.173
	I0617 12:02:48.307932  164809 kubeadm.go:1154] stopping kube-system containers ...
	I0617 12:02:48.307945  164809 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0617 12:02:48.307990  164809 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:02:48.356179  164809 cri.go:89] found id: ""
	I0617 12:02:48.356247  164809 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0617 12:02:49.333637  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:51.333927  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:48.127215  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:48.627013  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:49.126439  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:49.626831  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:50.126521  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:50.627178  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:51.126830  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:51.627091  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:52.127343  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:52.626635  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:48.724828  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:51.225321  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:48.377824  164809 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:02:48.389213  164809 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:02:48.389236  164809 kubeadm.go:156] found existing configuration files:
	
	I0617 12:02:48.389287  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:02:48.398559  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:02:48.398605  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:02:48.408243  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:02:48.417407  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:02:48.417451  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:02:48.427333  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:02:48.436224  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:02:48.436278  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:02:48.445378  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:02:48.454119  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:02:48.454170  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 12:02:48.463097  164809 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
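
The grep/rm pairs above implement the stale-kubeconfig cleanup: each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf is kept only if it already references https://control-plane.minikube.internal:8443, and is removed otherwise (here all four were simply missing). A small Go sketch of that check, with the paths and endpoint taken from the log and the helper function itself an assumption:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    // removeIfStale deletes conf when it does not reference the expected
    // control-plane endpoint (grep exits non-zero on no match or missing file).
    func removeIfStale(conf string) error {
    	if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err == nil {
    		return nil // endpoint present, keep the file
    	}
    	fmt.Println("removing stale config:", conf)
    	return os.Remove(conf)
    }

    func main() {
    	for _, conf := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		if err := removeIfStale(conf); err != nil && !os.IsNotExist(err) {
    			fmt.Println("cleanup failed:", err)
    		}
    	}
    }
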
	I0617 12:02:48.472479  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:48.584018  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:49.392310  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:49.599840  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:49.662845  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
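
The five commands above re-run individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml rather than performing a full init. A sketch of that phase loop in Go; the command strings mirror the log, while the loop itself is an assumption:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runPhase shells out the same way the log does: kubeadm from the pinned
    // binaries directory, driven by the generated config file.
    func runPhase(phase string) error {
    	cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, phase)
    	return exec.Command("/bin/bash", "-c", cmd).Run()
    }

    func main() {
    	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
    	for _, p := range phases {
    		if err := runPhase(p); err != nil {
    			fmt.Printf("kubeadm init phase %s failed: %v\n", p, err)
    			return
    		}
    	}
    }
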
	I0617 12:02:49.794357  164809 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:02:49.794459  164809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:50.295507  164809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:50.794968  164809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:50.832967  164809 api_server.go:72] duration metric: took 1.038610813s to wait for apiserver process to appear ...
	I0617 12:02:50.832993  164809 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:02:50.833017  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:50.833494  164809 api_server.go:269] stopped: https://192.168.39.173:8443/healthz: Get "https://192.168.39.173:8443/healthz": dial tcp 192.168.39.173:8443: connect: connection refused
	I0617 12:02:51.333910  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:53.534213  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:02:53.534246  164809 api_server.go:103] status: https://192.168.39.173:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:02:53.534265  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:53.579857  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:02:53.579887  164809 api_server.go:103] status: https://192.168.39.173:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:02:53.833207  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:53.863430  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:02:53.863485  164809 api_server.go:103] status: https://192.168.39.173:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:02:54.333557  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:54.342474  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:02:54.342507  164809 api_server.go:103] status: https://192.168.39.173:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:02:54.834092  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:54.839578  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 200:
	ok
	I0617 12:02:54.854075  164809 api_server.go:141] control plane version: v1.30.1
	I0617 12:02:54.854113  164809 api_server.go:131] duration metric: took 4.021112065s to wait for apiserver health ...
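
The healthz sequence above is the apiserver readiness wait: repeated GETs against https://192.168.39.173:8443/healthz progress from connection refused, to 403 for the anonymous user, to 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, and finally to 200 after roughly four seconds. A minimal Go sketch of such a polling loop; the interval, timeout and TLS handling are assumptions, not minikube's implementation:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	// Certificate verification is skipped here purely for illustration,
    	// as a probe might do before the cluster CA is trusted locally.
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.39.173:8443/healthz")
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthy")
    				return
    			}
    			fmt.Println("healthz returned", resp.StatusCode)
    		} else {
    			fmt.Println("healthz not reachable yet:", err)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver health")
    }
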
	I0617 12:02:54.854124  164809 cni.go:84] Creating CNI manager for ""
	I0617 12:02:54.854133  164809 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:02:54.856029  164809 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0617 12:02:53.334898  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:55.834490  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:53.126693  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:53.627110  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:54.126653  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:54.626424  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:55.127113  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:55.627373  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:56.126415  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:56.627329  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:57.126797  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:57.627313  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:53.723948  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:56.225000  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:54.857252  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0617 12:02:54.914636  164809 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
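
With the kvm2 driver and the crio runtime, minikube falls back to the bridge CNI and copies a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist. The sketch below writes a typical bridge conflist from Go; the JSON field values are illustrative assumptions, not the exact file minikube generates:

    package main

    import (
    	"fmt"
    	"os"
    )

    // A representative bridge CNI config; values are placeholders, not the
    // contents of the file installed above.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		fmt.Println("write failed:", err)
    	}
    }
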
	I0617 12:02:54.961745  164809 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:02:54.975140  164809 system_pods.go:59] 8 kube-system pods found
	I0617 12:02:54.975183  164809 system_pods.go:61] "coredns-7db6d8ff4d-7lfns" [83cf7962-1aa7-4de6-9e77-a03dee972ead] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0617 12:02:54.975192  164809 system_pods.go:61] "etcd-no-preload-152830" [27dace2b-9d7d-44e8-8f86-b20ce49c8afa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0617 12:02:54.975202  164809 system_pods.go:61] "kube-apiserver-no-preload-152830" [c102caaf-2289-4171-8b1f-89df4f6edf39] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0617 12:02:54.975213  164809 system_pods.go:61] "kube-controller-manager-no-preload-152830" [534a8f45-7886-4e12-b728-df686c2f8668] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0617 12:02:54.975220  164809 system_pods.go:61] "kube-proxy-bblgc" [70fa474e-cb6a-4e31-b978-78b47e9952a8] Running
	I0617 12:02:54.975228  164809 system_pods.go:61] "kube-scheduler-no-preload-152830" [17d696bd-55b3-4080-a63d-944216adf1d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0617 12:02:54.975240  164809 system_pods.go:61] "metrics-server-569cc877fc-97tqn" [0ce37c88-fd22-4001-96c4-d0f5239c0fd4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:02:54.975253  164809 system_pods.go:61] "storage-provisioner" [61dafb85-965b-4961-b9e1-e3202795caef] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0617 12:02:54.975268  164809 system_pods.go:74] duration metric: took 13.492652ms to wait for pod list to return data ...
	I0617 12:02:54.975279  164809 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:02:54.980820  164809 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:02:54.980842  164809 node_conditions.go:123] node cpu capacity is 2
	I0617 12:02:54.980854  164809 node_conditions.go:105] duration metric: took 5.568037ms to run NodePressure ...
	I0617 12:02:54.980873  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:55.284669  164809 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0617 12:02:55.289433  164809 kubeadm.go:733] kubelet initialised
	I0617 12:02:55.289453  164809 kubeadm.go:734] duration metric: took 4.759785ms waiting for restarted kubelet to initialise ...
	I0617 12:02:55.289461  164809 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:02:55.294149  164809 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7lfns" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:55.298081  164809 pod_ready.go:97] node "no-preload-152830" hosting pod "coredns-7db6d8ff4d-7lfns" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.298100  164809 pod_ready.go:81] duration metric: took 3.929974ms for pod "coredns-7db6d8ff4d-7lfns" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:55.298109  164809 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-152830" hosting pod "coredns-7db6d8ff4d-7lfns" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.298116  164809 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:55.302552  164809 pod_ready.go:97] node "no-preload-152830" hosting pod "etcd-no-preload-152830" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.302572  164809 pod_ready.go:81] duration metric: took 4.444579ms for pod "etcd-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:55.302580  164809 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-152830" hosting pod "etcd-no-preload-152830" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.302585  164809 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:55.306375  164809 pod_ready.go:97] node "no-preload-152830" hosting pod "kube-apiserver-no-preload-152830" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.306394  164809 pod_ready.go:81] duration metric: took 3.804134ms for pod "kube-apiserver-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:55.306402  164809 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-152830" hosting pod "kube-apiserver-no-preload-152830" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.306407  164809 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:57.313002  164809 pod_ready.go:102] pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace has status "Ready":"False"
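
The pod_ready lines above are minikube polling each system-critical pod until its Ready condition is True (kube-controller-manager-no-preload-152830 takes about eight seconds here). An equivalent check written against client-go, shown as a sketch only; minikube's own wait lives in pod_ready.go, and the kubeconfig path, pod name and intervals below are assumptions:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
    			"kube-controller-manager-no-preload-152830", metav1.GetOptions{})
    		if err == nil {
    			// A pod counts as Ready when its PodReady condition is True.
    			for _, cond := range pod.Status.Conditions {
    				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
    					fmt.Println("pod is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to become Ready")
    }
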
	I0617 12:02:57.834719  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:00.334129  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:58.126744  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:58.627050  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:59.127300  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:59.626694  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:00.127092  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:00.127182  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:00.166116  165698 cri.go:89] found id: ""
	I0617 12:03:00.166145  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.166153  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:00.166159  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:00.166208  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:00.200990  165698 cri.go:89] found id: ""
	I0617 12:03:00.201020  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.201029  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:00.201034  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:00.201086  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:00.236394  165698 cri.go:89] found id: ""
	I0617 12:03:00.236422  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.236430  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:00.236438  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:00.236496  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:00.274257  165698 cri.go:89] found id: ""
	I0617 12:03:00.274285  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.274293  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:00.274299  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:00.274350  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:00.307425  165698 cri.go:89] found id: ""
	I0617 12:03:00.307452  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.307481  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:00.307490  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:00.307557  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:00.343420  165698 cri.go:89] found id: ""
	I0617 12:03:00.343446  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.343472  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:00.343480  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:00.343541  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:00.378301  165698 cri.go:89] found id: ""
	I0617 12:03:00.378325  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.378333  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:00.378338  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:00.378383  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:00.414985  165698 cri.go:89] found id: ""
	I0617 12:03:00.415011  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.415018  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:00.415033  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:00.415090  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:00.468230  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:00.468262  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:00.481970  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:00.481998  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:00.612881  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:00.612911  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:00.612929  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:00.676110  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:00.676145  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
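
Each gathering cycle above probes the same set of control-plane components with crictl before falling back to journal, dmesg and describe-nodes output. A compact Go sketch of that probe loop; the command strings come from the log, while the loop structure is an assumption:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Components probed in each gathering cycle of the log above.
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, name := range components {
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Println(name, "listing failed:", err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		if len(ids) == 0 {
    			fmt.Printf("no container found matching %q\n", name)
    			continue
    		}
    		fmt.Println(name, "containers:", ids)
    	}
    	// With nothing running, fall back to the service journals, as the test does.
    	for _, unit := range []string{"kubelet", "crio"} {
    		exec.Command("/bin/bash", "-c", "sudo journalctl -u "+unit+" -n 400").Run()
    	}
    }
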
	I0617 12:02:58.725617  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:01.225227  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:59.818063  164809 pod_ready.go:102] pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:02.312898  164809 pod_ready.go:102] pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:03.313300  164809 pod_ready.go:92] pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:03:03.313332  164809 pod_ready.go:81] duration metric: took 8.006915719s for pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:03:03.313347  164809 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bblgc" in "kube-system" namespace to be "Ready" ...
	I0617 12:03:03.319094  164809 pod_ready.go:92] pod "kube-proxy-bblgc" in "kube-system" namespace has status "Ready":"True"
	I0617 12:03:03.319116  164809 pod_ready.go:81] duration metric: took 5.762584ms for pod "kube-proxy-bblgc" in "kube-system" namespace to be "Ready" ...
	I0617 12:03:03.319137  164809 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:03:02.833031  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:04.834158  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:07.334894  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:03.216960  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:03.231208  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:03.231277  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:03.267056  165698 cri.go:89] found id: ""
	I0617 12:03:03.267088  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.267096  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:03.267103  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:03.267152  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:03.302797  165698 cri.go:89] found id: ""
	I0617 12:03:03.302832  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.302844  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:03.302852  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:03.302905  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:03.343401  165698 cri.go:89] found id: ""
	I0617 12:03:03.343435  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.343445  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:03.343465  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:03.343530  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:03.380841  165698 cri.go:89] found id: ""
	I0617 12:03:03.380871  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.380883  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:03.380890  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:03.380951  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:03.420098  165698 cri.go:89] found id: ""
	I0617 12:03:03.420130  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.420142  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:03.420150  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:03.420213  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:03.458476  165698 cri.go:89] found id: ""
	I0617 12:03:03.458506  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.458515  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:03.458521  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:03.458586  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:03.497127  165698 cri.go:89] found id: ""
	I0617 12:03:03.497156  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.497164  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:03.497170  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:03.497217  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:03.538759  165698 cri.go:89] found id: ""
	I0617 12:03:03.538794  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.538806  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:03.538825  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:03.538841  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:03.584701  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:03.584743  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:03.636981  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:03.637030  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:03.670032  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:03.670077  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:03.757012  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:03.757038  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:03.757056  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:06.327680  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:06.341998  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:06.342068  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:06.383353  165698 cri.go:89] found id: ""
	I0617 12:03:06.383385  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.383394  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:06.383400  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:06.383448  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:06.418806  165698 cri.go:89] found id: ""
	I0617 12:03:06.418850  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.418862  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:06.418870  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:06.418945  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:06.458151  165698 cri.go:89] found id: ""
	I0617 12:03:06.458192  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.458204  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:06.458219  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:06.458289  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:06.496607  165698 cri.go:89] found id: ""
	I0617 12:03:06.496637  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.496645  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:06.496651  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:06.496703  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:06.534900  165698 cri.go:89] found id: ""
	I0617 12:03:06.534938  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.534951  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:06.534959  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:06.535017  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:06.572388  165698 cri.go:89] found id: ""
	I0617 12:03:06.572413  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.572422  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:06.572428  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:06.572496  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:06.608072  165698 cri.go:89] found id: ""
	I0617 12:03:06.608104  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.608115  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:06.608121  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:06.608175  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:06.647727  165698 cri.go:89] found id: ""
	I0617 12:03:06.647760  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.647772  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:06.647784  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:06.647800  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:06.720887  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:06.720919  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:06.761128  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:06.761153  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:06.815524  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:06.815557  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:06.830275  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:06.830304  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:06.907861  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:03.725650  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:06.225601  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:05.327062  164809 pod_ready.go:102] pod "kube-scheduler-no-preload-152830" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:07.325033  164809 pod_ready.go:92] pod "kube-scheduler-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:03:07.325061  164809 pod_ready.go:81] duration metric: took 4.005914462s for pod "kube-scheduler-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:03:07.325072  164809 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace to be "Ready" ...
	I0617 12:03:09.835374  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:12.334481  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:09.408117  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:09.420916  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:09.420978  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:09.453830  165698 cri.go:89] found id: ""
	I0617 12:03:09.453860  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.453870  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:09.453878  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:09.453937  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:09.492721  165698 cri.go:89] found id: ""
	I0617 12:03:09.492756  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.492766  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:09.492775  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:09.492849  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:09.530956  165698 cri.go:89] found id: ""
	I0617 12:03:09.530984  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.530995  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:09.531001  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:09.531067  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:09.571534  165698 cri.go:89] found id: ""
	I0617 12:03:09.571564  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.571576  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:09.571584  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:09.571646  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:09.609740  165698 cri.go:89] found id: ""
	I0617 12:03:09.609776  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.609788  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:09.609797  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:09.609864  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:09.649958  165698 cri.go:89] found id: ""
	I0617 12:03:09.649998  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.650010  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:09.650020  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:09.650087  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:09.706495  165698 cri.go:89] found id: ""
	I0617 12:03:09.706532  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.706544  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:09.706553  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:09.706638  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:09.742513  165698 cri.go:89] found id: ""
	I0617 12:03:09.742541  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.742549  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:09.742559  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:09.742571  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:09.756470  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:09.756502  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:09.840878  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:09.840897  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:09.840913  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:09.922329  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:09.922370  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:09.967536  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:09.967573  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:12.521031  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:12.534507  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:12.534595  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:12.569895  165698 cri.go:89] found id: ""
	I0617 12:03:12.569930  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.569942  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:12.569950  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:12.570005  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:12.606857  165698 cri.go:89] found id: ""
	I0617 12:03:12.606888  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.606900  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:12.606922  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:12.606998  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:12.640781  165698 cri.go:89] found id: ""
	I0617 12:03:12.640807  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.640818  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:12.640826  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:12.640910  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:12.674097  165698 cri.go:89] found id: ""
	I0617 12:03:12.674124  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.674134  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:12.674142  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:12.674201  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:12.708662  165698 cri.go:89] found id: ""
	I0617 12:03:12.708689  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.708699  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:12.708707  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:12.708791  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:12.744891  165698 cri.go:89] found id: ""
	I0617 12:03:12.744927  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.744938  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:12.744947  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:12.745010  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:12.778440  165698 cri.go:89] found id: ""
	I0617 12:03:12.778466  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.778474  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:12.778480  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:12.778528  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:12.814733  165698 cri.go:89] found id: ""
	I0617 12:03:12.814762  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.814770  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:12.814780  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:12.814820  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:12.887741  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:12.887762  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:12.887775  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:12.968439  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:12.968476  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:08.725485  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:11.224357  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:09.331004  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:11.331666  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:13.332269  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:14.335086  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:16.836397  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:13.008926  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:13.008955  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:13.060432  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:13.060468  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:15.575450  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:15.589178  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:15.589244  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:15.625554  165698 cri.go:89] found id: ""
	I0617 12:03:15.625589  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.625601  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:15.625608  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:15.625668  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:15.659023  165698 cri.go:89] found id: ""
	I0617 12:03:15.659054  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.659066  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:15.659074  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:15.659138  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:15.693777  165698 cri.go:89] found id: ""
	I0617 12:03:15.693803  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.693811  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:15.693817  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:15.693875  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:15.729098  165698 cri.go:89] found id: ""
	I0617 12:03:15.729133  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.729141  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:15.729147  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:15.729194  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:15.762639  165698 cri.go:89] found id: ""
	I0617 12:03:15.762668  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.762679  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:15.762687  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:15.762744  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:15.797446  165698 cri.go:89] found id: ""
	I0617 12:03:15.797475  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.797484  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:15.797489  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:15.797537  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:15.832464  165698 cri.go:89] found id: ""
	I0617 12:03:15.832503  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.832513  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:15.832521  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:15.832579  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:15.867868  165698 cri.go:89] found id: ""
	I0617 12:03:15.867898  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.867906  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:15.867916  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:15.867928  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:15.882151  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:15.882181  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:15.946642  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:15.946666  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:15.946682  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:16.027062  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:16.027098  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:16.082704  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:16.082735  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
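
[editor's note] The block above is one pass of minikube's log-gathering loop: it probes each control-plane component with crictl, finds no containers, and then falls back to collecting kubelet, dmesg, CRI-O and "describe nodes" output. As a hedged sketch only (the individual commands are copied from the log; the loop wrapper and the suggestion to reach the node via `minikube ssh -p <profile>` are the editor's assumptions), the same probes can be reproduced by hand on the node:

    # One manual pass of the same diagnostics; only the component name varies.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
      sudo crictl ps -a --quiet --name="$c"
    done
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
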
	I0617 12:03:13.725854  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:16.225670  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:15.333470  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:17.832368  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:19.334102  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:21.334529  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
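
[editor's note] The interleaved pod_ready lines come from other test processes (pids 165060, 164809, 166103) polling their metrics-server pods for the Ready condition. A minimal sketch, assuming only standard kubectl and taking the pod name from the log above, of checking that condition directly:

    # Prints "True" once the pod reports Ready; pod name copied from the log, adjust as needed.
    kubectl --namespace kube-system get pod metrics-server-569cc877fc-n2svp \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
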
	I0617 12:03:18.651554  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:18.665096  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:18.665166  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:18.703099  165698 cri.go:89] found id: ""
	I0617 12:03:18.703127  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.703138  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:18.703147  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:18.703210  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:18.737945  165698 cri.go:89] found id: ""
	I0617 12:03:18.737985  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.737997  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:18.738005  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:18.738079  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:18.777145  165698 cri.go:89] found id: ""
	I0617 12:03:18.777172  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.777181  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:18.777187  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:18.777255  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:18.813171  165698 cri.go:89] found id: ""
	I0617 12:03:18.813198  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.813207  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:18.813213  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:18.813270  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:18.854459  165698 cri.go:89] found id: ""
	I0617 12:03:18.854490  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.854501  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:18.854510  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:18.854607  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:18.893668  165698 cri.go:89] found id: ""
	I0617 12:03:18.893703  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.893712  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:18.893718  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:18.893796  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:18.928919  165698 cri.go:89] found id: ""
	I0617 12:03:18.928971  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.928983  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:18.928993  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:18.929068  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:18.965770  165698 cri.go:89] found id: ""
	I0617 12:03:18.965800  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.965808  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:18.965817  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:18.965829  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:19.020348  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:19.020392  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:19.034815  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:19.034845  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:19.109617  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:19.109643  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:19.109660  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:19.186843  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:19.186890  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:21.732720  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:21.747032  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:21.747113  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:21.789962  165698 cri.go:89] found id: ""
	I0617 12:03:21.789991  165698 logs.go:276] 0 containers: []
	W0617 12:03:21.789999  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:21.790011  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:21.790066  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:21.833865  165698 cri.go:89] found id: ""
	I0617 12:03:21.833903  165698 logs.go:276] 0 containers: []
	W0617 12:03:21.833913  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:21.833921  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:21.833985  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:21.903891  165698 cri.go:89] found id: ""
	I0617 12:03:21.903929  165698 logs.go:276] 0 containers: []
	W0617 12:03:21.903941  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:21.903950  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:21.904020  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:21.941369  165698 cri.go:89] found id: ""
	I0617 12:03:21.941396  165698 logs.go:276] 0 containers: []
	W0617 12:03:21.941407  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:21.941415  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:21.941473  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:21.977767  165698 cri.go:89] found id: ""
	I0617 12:03:21.977797  165698 logs.go:276] 0 containers: []
	W0617 12:03:21.977808  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:21.977817  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:21.977880  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:22.016422  165698 cri.go:89] found id: ""
	I0617 12:03:22.016450  165698 logs.go:276] 0 containers: []
	W0617 12:03:22.016463  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:22.016471  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:22.016536  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:22.056871  165698 cri.go:89] found id: ""
	I0617 12:03:22.056904  165698 logs.go:276] 0 containers: []
	W0617 12:03:22.056914  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:22.056922  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:22.056982  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:22.093244  165698 cri.go:89] found id: ""
	I0617 12:03:22.093288  165698 logs.go:276] 0 containers: []
	W0617 12:03:22.093300  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:22.093313  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:22.093331  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:22.144722  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:22.144756  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:22.159047  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:22.159084  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:22.232077  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:22.232100  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:22.232112  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:22.308241  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:22.308276  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:18.724648  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:21.224616  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:19.832543  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:21.838952  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:23.834640  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:26.336770  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:24.851740  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:24.866597  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:24.866659  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:24.902847  165698 cri.go:89] found id: ""
	I0617 12:03:24.902879  165698 logs.go:276] 0 containers: []
	W0617 12:03:24.902892  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:24.902900  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:24.902973  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:24.940042  165698 cri.go:89] found id: ""
	I0617 12:03:24.940079  165698 logs.go:276] 0 containers: []
	W0617 12:03:24.940088  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:24.940094  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:24.940150  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:24.975160  165698 cri.go:89] found id: ""
	I0617 12:03:24.975190  165698 logs.go:276] 0 containers: []
	W0617 12:03:24.975202  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:24.975211  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:24.975280  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:25.012618  165698 cri.go:89] found id: ""
	I0617 12:03:25.012649  165698 logs.go:276] 0 containers: []
	W0617 12:03:25.012657  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:25.012663  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:25.012712  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:25.051166  165698 cri.go:89] found id: ""
	I0617 12:03:25.051210  165698 logs.go:276] 0 containers: []
	W0617 12:03:25.051223  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:25.051230  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:25.051309  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:25.090112  165698 cri.go:89] found id: ""
	I0617 12:03:25.090144  165698 logs.go:276] 0 containers: []
	W0617 12:03:25.090156  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:25.090164  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:25.090230  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:25.133258  165698 cri.go:89] found id: ""
	I0617 12:03:25.133285  165698 logs.go:276] 0 containers: []
	W0617 12:03:25.133294  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:25.133301  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:25.133366  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:25.177445  165698 cri.go:89] found id: ""
	I0617 12:03:25.177473  165698 logs.go:276] 0 containers: []
	W0617 12:03:25.177481  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:25.177490  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:25.177505  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:25.250685  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:25.250710  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:25.250727  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:25.335554  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:25.335586  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:25.377058  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:25.377093  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:25.431425  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:25.431471  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:27.945063  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:27.959396  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:27.959469  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:23.725126  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:26.224114  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:28.224895  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:23.840550  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:26.333142  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:28.334577  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:28.337133  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:30.834142  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:27.994554  165698 cri.go:89] found id: ""
	I0617 12:03:27.994582  165698 logs.go:276] 0 containers: []
	W0617 12:03:27.994591  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:27.994598  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:27.994660  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:28.030168  165698 cri.go:89] found id: ""
	I0617 12:03:28.030200  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.030208  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:28.030215  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:28.030263  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:28.066213  165698 cri.go:89] found id: ""
	I0617 12:03:28.066244  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.066255  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:28.066261  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:28.066322  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:28.102855  165698 cri.go:89] found id: ""
	I0617 12:03:28.102880  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.102888  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:28.102894  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:28.102942  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:28.138698  165698 cri.go:89] found id: ""
	I0617 12:03:28.138734  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.138748  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:28.138755  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:28.138815  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:28.173114  165698 cri.go:89] found id: ""
	I0617 12:03:28.173140  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.173148  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:28.173154  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:28.173213  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:28.208901  165698 cri.go:89] found id: ""
	I0617 12:03:28.208936  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.208947  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:28.208955  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:28.209016  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:28.244634  165698 cri.go:89] found id: ""
	I0617 12:03:28.244667  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.244678  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:28.244687  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:28.244699  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:28.300303  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:28.300336  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:28.314227  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:28.314272  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:28.394322  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:28.394350  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:28.394367  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:28.483381  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:28.483413  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:31.026433  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:31.040820  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:31.040888  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:31.086409  165698 cri.go:89] found id: ""
	I0617 12:03:31.086440  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.086453  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:31.086461  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:31.086548  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:31.122810  165698 cri.go:89] found id: ""
	I0617 12:03:31.122836  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.122843  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:31.122849  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:31.122910  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:31.157634  165698 cri.go:89] found id: ""
	I0617 12:03:31.157669  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.157680  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:31.157687  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:31.157750  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:31.191498  165698 cri.go:89] found id: ""
	I0617 12:03:31.191529  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.191541  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:31.191549  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:31.191619  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:31.225575  165698 cri.go:89] found id: ""
	I0617 12:03:31.225599  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.225609  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:31.225616  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:31.225670  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:31.269780  165698 cri.go:89] found id: ""
	I0617 12:03:31.269810  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.269819  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:31.269825  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:31.269874  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:31.307689  165698 cri.go:89] found id: ""
	I0617 12:03:31.307717  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.307726  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:31.307733  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:31.307789  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:31.344160  165698 cri.go:89] found id: ""
	I0617 12:03:31.344190  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.344200  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:31.344210  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:31.344223  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:31.397627  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:31.397667  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:31.411316  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:31.411347  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:31.486258  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:31.486280  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:31.486297  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:31.568067  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:31.568106  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:30.725183  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:33.224294  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:30.834377  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:33.333070  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:33.335067  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:35.335626  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:37.336117  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:34.111424  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:34.127178  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:34.127255  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:34.165900  165698 cri.go:89] found id: ""
	I0617 12:03:34.165936  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.165947  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:34.165955  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:34.166042  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:34.203556  165698 cri.go:89] found id: ""
	I0617 12:03:34.203588  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.203597  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:34.203606  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:34.203659  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:34.243418  165698 cri.go:89] found id: ""
	I0617 12:03:34.243478  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.243490  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:34.243499  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:34.243661  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:34.281542  165698 cri.go:89] found id: ""
	I0617 12:03:34.281569  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.281577  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:34.281582  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:34.281635  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:34.316304  165698 cri.go:89] found id: ""
	I0617 12:03:34.316333  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.316341  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:34.316347  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:34.316403  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:34.357416  165698 cri.go:89] found id: ""
	I0617 12:03:34.357455  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.357467  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:34.357476  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:34.357547  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:34.392069  165698 cri.go:89] found id: ""
	I0617 12:03:34.392101  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.392112  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:34.392120  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:34.392185  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:34.427203  165698 cri.go:89] found id: ""
	I0617 12:03:34.427235  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.427247  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:34.427258  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:34.427317  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:34.441346  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:34.441375  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:34.519306  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:34.519331  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:34.519349  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:34.598802  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:34.598843  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:34.637521  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:34.637554  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:37.191259  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:37.205882  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:37.205947  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:37.242175  165698 cri.go:89] found id: ""
	I0617 12:03:37.242202  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.242209  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:37.242215  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:37.242278  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:37.278004  165698 cri.go:89] found id: ""
	I0617 12:03:37.278029  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.278037  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:37.278043  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:37.278091  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:37.322148  165698 cri.go:89] found id: ""
	I0617 12:03:37.322179  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.322190  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:37.322198  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:37.322259  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:37.358612  165698 cri.go:89] found id: ""
	I0617 12:03:37.358638  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.358649  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:37.358657  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:37.358718  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:37.393070  165698 cri.go:89] found id: ""
	I0617 12:03:37.393104  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.393115  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:37.393123  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:37.393187  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:37.429420  165698 cri.go:89] found id: ""
	I0617 12:03:37.429452  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.429465  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:37.429475  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:37.429541  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:37.464485  165698 cri.go:89] found id: ""
	I0617 12:03:37.464509  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.464518  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:37.464523  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:37.464584  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:37.501283  165698 cri.go:89] found id: ""
	I0617 12:03:37.501308  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.501316  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:37.501326  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:37.501338  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:37.552848  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:37.552889  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:37.566715  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:37.566746  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:37.643560  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:37.643584  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:37.643601  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:37.722895  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:37.722935  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:35.225442  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:37.225962  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:35.836693  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:38.332297  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:39.834655  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:42.333686  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:40.268199  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:40.281832  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:40.281905  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:40.317094  165698 cri.go:89] found id: ""
	I0617 12:03:40.317137  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.317150  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:40.317159  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:40.317229  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:40.355786  165698 cri.go:89] found id: ""
	I0617 12:03:40.355819  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.355829  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:40.355836  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:40.355903  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:40.394282  165698 cri.go:89] found id: ""
	I0617 12:03:40.394312  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.394323  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:40.394332  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:40.394388  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:40.433773  165698 cri.go:89] found id: ""
	I0617 12:03:40.433806  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.433817  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:40.433825  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:40.433875  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:40.469937  165698 cri.go:89] found id: ""
	I0617 12:03:40.469973  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.469985  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:40.469998  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:40.470067  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:40.503565  165698 cri.go:89] found id: ""
	I0617 12:03:40.503590  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.503599  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:40.503605  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:40.503654  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:40.538349  165698 cri.go:89] found id: ""
	I0617 12:03:40.538383  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.538394  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:40.538402  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:40.538461  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:40.576036  165698 cri.go:89] found id: ""
	I0617 12:03:40.576066  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.576075  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:40.576085  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:40.576100  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:40.617804  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:40.617833  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:40.668126  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:40.668162  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:40.682618  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:40.682655  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:40.759597  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:40.759619  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:40.759638  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
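
[editor's note] Every "describe nodes" attempt in this log fails with "The connection to the server localhost:8443 was refused", which is consistent with crictl never finding a kube-apiserver container. As a hedged sketch (assuming ss and crictl are available on the node; neither check is part of the test harness itself), two quick checks narrow this down:

    sudo ss -ltnp | grep 8443               # is anything listening on the apiserver port?
    sudo crictl ps -a --name kube-apiserver  # was an apiserver container ever created?
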
	I0617 12:03:39.725534  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:42.223320  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:40.336855  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:42.832597  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:44.334430  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:46.835809  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:43.343404  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:43.357886  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:43.357953  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:43.398262  165698 cri.go:89] found id: ""
	I0617 12:03:43.398290  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.398301  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:43.398310  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:43.398370  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:43.432241  165698 cri.go:89] found id: ""
	I0617 12:03:43.432272  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.432280  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:43.432289  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:43.432348  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:43.466210  165698 cri.go:89] found id: ""
	I0617 12:03:43.466234  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.466241  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:43.466247  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:43.466294  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:43.501677  165698 cri.go:89] found id: ""
	I0617 12:03:43.501711  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.501723  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:43.501731  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:43.501793  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:43.541826  165698 cri.go:89] found id: ""
	I0617 12:03:43.541860  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.541870  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:43.541876  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:43.541941  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:43.576940  165698 cri.go:89] found id: ""
	I0617 12:03:43.576962  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.576970  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:43.576975  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:43.577022  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:43.612592  165698 cri.go:89] found id: ""
	I0617 12:03:43.612627  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.612635  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:43.612643  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:43.612694  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:43.647141  165698 cri.go:89] found id: ""
	I0617 12:03:43.647176  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.647188  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:43.647202  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:43.647220  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:43.698248  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:43.698283  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:43.711686  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:43.711714  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:43.787077  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:43.787101  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:43.787115  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:43.861417  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:43.861455  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:46.402594  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:46.417108  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:46.417185  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:46.453910  165698 cri.go:89] found id: ""
	I0617 12:03:46.453941  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.453952  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:46.453960  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:46.454020  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:46.487239  165698 cri.go:89] found id: ""
	I0617 12:03:46.487268  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.487280  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:46.487288  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:46.487353  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:46.521824  165698 cri.go:89] found id: ""
	I0617 12:03:46.521850  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.521859  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:46.521866  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:46.521929  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:46.557247  165698 cri.go:89] found id: ""
	I0617 12:03:46.557274  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.557282  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:46.557289  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:46.557350  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:46.600354  165698 cri.go:89] found id: ""
	I0617 12:03:46.600383  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.600393  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:46.600402  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:46.600477  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:46.638153  165698 cri.go:89] found id: ""
	I0617 12:03:46.638180  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.638189  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:46.638197  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:46.638255  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:46.672636  165698 cri.go:89] found id: ""
	I0617 12:03:46.672661  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.672669  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:46.672675  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:46.672721  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:46.706431  165698 cri.go:89] found id: ""
	I0617 12:03:46.706468  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.706481  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:46.706493  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:46.706509  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:46.720796  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:46.720842  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:46.801343  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:46.801365  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:46.801379  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:46.883651  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:46.883696  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:46.928594  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:46.928630  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
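[editor's note] The block above is one full pass of the control-plane probe visible in this log: for each expected component the runner executes "sudo crictl ps -a --quiet --name=<component>", every query returns an empty ID list, and it then falls back to gathering kubelet, dmesg, CRI-O and "describe nodes" output. As a rough illustration only (this is not minikube's implementation), the Go sketch below reproduces the same crictl check locally on a node; the component list, the use of sudo, and crictl being on PATH are assumptions.

// Hedged sketch (not minikube's code): run the crictl query shown in the log
// above and treat empty output as "no container found". Assumes root and crictl on PATH.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(name string) ([]string, error) {
	// Mirrors the command in the log: crictl ps -a --quiet --name=<name>
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one container ID per line when present
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c) // same condition as the warnings above
		} else {
			fmt.Printf("%s: %v\n", c, ids)
		}
	}
}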
	I0617 12:03:44.224037  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:46.224076  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:48.224472  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:45.332811  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:47.832461  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:49.334678  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:51.833994  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:49.480413  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:49.495558  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:49.495656  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:49.533281  165698 cri.go:89] found id: ""
	I0617 12:03:49.533313  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.533323  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:49.533330  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:49.533396  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:49.573430  165698 cri.go:89] found id: ""
	I0617 12:03:49.573457  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.573465  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:49.573472  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:49.573532  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:49.608669  165698 cri.go:89] found id: ""
	I0617 12:03:49.608697  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.608705  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:49.608711  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:49.608767  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:49.643411  165698 cri.go:89] found id: ""
	I0617 12:03:49.643449  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.643481  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:49.643490  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:49.643557  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:49.680039  165698 cri.go:89] found id: ""
	I0617 12:03:49.680071  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.680082  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:49.680090  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:49.680148  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:49.717169  165698 cri.go:89] found id: ""
	I0617 12:03:49.717195  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.717203  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:49.717209  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:49.717262  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:49.754585  165698 cri.go:89] found id: ""
	I0617 12:03:49.754615  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.754625  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:49.754633  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:49.754697  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:49.796040  165698 cri.go:89] found id: ""
	I0617 12:03:49.796074  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.796085  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:49.796097  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:49.796112  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:49.873496  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:49.873530  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:49.873547  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:49.961883  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:49.961925  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:50.002975  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:50.003004  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:50.054185  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:50.054224  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:52.568557  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:52.584264  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:52.584337  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:52.622474  165698 cri.go:89] found id: ""
	I0617 12:03:52.622501  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.622509  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:52.622516  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:52.622566  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:52.661012  165698 cri.go:89] found id: ""
	I0617 12:03:52.661045  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.661057  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:52.661066  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:52.661133  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:52.700950  165698 cri.go:89] found id: ""
	I0617 12:03:52.700986  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.700998  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:52.701006  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:52.701075  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:52.735663  165698 cri.go:89] found id: ""
	I0617 12:03:52.735689  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.735696  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:52.735702  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:52.735768  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:52.776540  165698 cri.go:89] found id: ""
	I0617 12:03:52.776568  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.776580  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:52.776589  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:52.776642  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:52.812439  165698 cri.go:89] found id: ""
	I0617 12:03:52.812474  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.812493  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:52.812503  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:52.812567  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:52.849233  165698 cri.go:89] found id: ""
	I0617 12:03:52.849263  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.849273  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:52.849281  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:52.849343  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:52.885365  165698 cri.go:89] found id: ""
	I0617 12:03:52.885395  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.885406  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:52.885419  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:52.885434  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:52.941521  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:52.941553  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:52.955958  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:52.955997  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:03:50.224702  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:52.724247  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:50.332871  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:52.832386  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:53.834382  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:55.834813  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	W0617 12:03:53.029254  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:53.029278  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:53.029291  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:53.104391  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:53.104425  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:55.648578  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:55.662143  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:55.662205  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:55.697623  165698 cri.go:89] found id: ""
	I0617 12:03:55.697662  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.697674  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:55.697682  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:55.697751  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:55.734132  165698 cri.go:89] found id: ""
	I0617 12:03:55.734171  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.734184  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:55.734192  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:55.734265  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:55.774178  165698 cri.go:89] found id: ""
	I0617 12:03:55.774212  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.774222  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:55.774231  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:55.774296  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:55.816427  165698 cri.go:89] found id: ""
	I0617 12:03:55.816460  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.816471  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:55.816480  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:55.816546  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:55.860413  165698 cri.go:89] found id: ""
	I0617 12:03:55.860446  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.860457  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:55.860465  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:55.860532  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:55.897577  165698 cri.go:89] found id: ""
	I0617 12:03:55.897612  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.897622  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:55.897629  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:55.897682  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:55.934163  165698 cri.go:89] found id: ""
	I0617 12:03:55.934200  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.934212  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:55.934220  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:55.934291  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:55.972781  165698 cri.go:89] found id: ""
	I0617 12:03:55.972827  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.972840  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:55.972852  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:55.972867  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:56.027292  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:56.027332  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:56.042304  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:56.042336  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:56.115129  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:56.115159  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:56.115176  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:56.194161  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:56.194200  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:54.728169  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:57.225361  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:54.837170  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:57.333566  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:58.335846  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:00.833987  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
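[editor's note] Interleaved with that probe, the other test processes (PIDs 165060, 164809, 166103) keep logging pod_ready checks for their metrics-server pods, which remain NotReady throughout. A minimal, hedged client-go sketch of that kind of readiness poll follows; the kubeconfig path, namespace, pod name and timing below are illustrative assumptions, not the tests' actual values.

// Hedged sketch: poll a pod's Ready condition the way the pod_ready lines suggest.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-569cc877fc-dmhfs", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Println(`pod has status "Ready":"False"`) // mirrors the log lines above
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}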
	I0617 12:03:58.734681  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:58.748467  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:58.748534  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:58.786191  165698 cri.go:89] found id: ""
	I0617 12:03:58.786221  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.786232  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:58.786239  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:58.786302  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:58.822076  165698 cri.go:89] found id: ""
	I0617 12:03:58.822103  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.822125  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:58.822134  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:58.822199  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:58.858830  165698 cri.go:89] found id: ""
	I0617 12:03:58.858859  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.858867  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:58.858873  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:58.858927  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:58.898802  165698 cri.go:89] found id: ""
	I0617 12:03:58.898830  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.898838  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:58.898844  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:58.898891  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:58.933234  165698 cri.go:89] found id: ""
	I0617 12:03:58.933269  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.933281  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:58.933289  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:58.933355  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:58.973719  165698 cri.go:89] found id: ""
	I0617 12:03:58.973753  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.973766  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:58.973773  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:58.973847  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:59.010671  165698 cri.go:89] found id: ""
	I0617 12:03:59.010722  165698 logs.go:276] 0 containers: []
	W0617 12:03:59.010734  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:59.010741  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:59.010805  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:59.047318  165698 cri.go:89] found id: ""
	I0617 12:03:59.047347  165698 logs.go:276] 0 containers: []
	W0617 12:03:59.047359  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:59.047372  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:59.047389  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:59.097778  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:59.097815  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:59.111615  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:59.111646  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:59.193172  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:59.193195  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:59.193207  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:59.268147  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:59.268182  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:01.807585  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:01.821634  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:01.821694  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:01.857610  165698 cri.go:89] found id: ""
	I0617 12:04:01.857637  165698 logs.go:276] 0 containers: []
	W0617 12:04:01.857647  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:01.857654  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:01.857710  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:01.893229  165698 cri.go:89] found id: ""
	I0617 12:04:01.893253  165698 logs.go:276] 0 containers: []
	W0617 12:04:01.893261  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:01.893267  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:01.893324  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:01.926916  165698 cri.go:89] found id: ""
	I0617 12:04:01.926940  165698 logs.go:276] 0 containers: []
	W0617 12:04:01.926950  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:01.926958  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:01.927017  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:01.961913  165698 cri.go:89] found id: ""
	I0617 12:04:01.961946  165698 logs.go:276] 0 containers: []
	W0617 12:04:01.961957  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:01.961967  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:01.962045  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:01.997084  165698 cri.go:89] found id: ""
	I0617 12:04:01.997111  165698 logs.go:276] 0 containers: []
	W0617 12:04:01.997119  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:01.997125  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:01.997173  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:02.034640  165698 cri.go:89] found id: ""
	I0617 12:04:02.034666  165698 logs.go:276] 0 containers: []
	W0617 12:04:02.034674  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:02.034680  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:02.034744  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:02.085868  165698 cri.go:89] found id: ""
	I0617 12:04:02.085910  165698 logs.go:276] 0 containers: []
	W0617 12:04:02.085920  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:02.085928  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:02.085983  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:02.152460  165698 cri.go:89] found id: ""
	I0617 12:04:02.152487  165698 logs.go:276] 0 containers: []
	W0617 12:04:02.152499  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:02.152513  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:02.152528  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:02.205297  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:02.205344  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:02.222312  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:02.222348  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:02.299934  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:02.299959  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:02.299977  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:02.384008  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:02.384056  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
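[editor's note] The repeated "connection to the server localhost:8443 was refused" stderr is why every "describe nodes" attempt fails: nothing is listening on the apiserver port yet, so kubectl cannot succeed until it is. A minimal, hedged reachability check for that symptom (the address and timeout are assumptions):

// Hedged sketch: probe the apiserver port that kubectl keeps failing to reach.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err) // matches the refused connections above
		return
	}
	conn.Close()
	fmt.Println("something is listening on 127.0.0.1:8443")
}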
	I0617 12:03:59.724730  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:02.227215  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:59.833621  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:01.833799  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:02.834076  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:04.836418  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:07.335024  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:04.926889  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:04.940643  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:04.940722  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:04.976246  165698 cri.go:89] found id: ""
	I0617 12:04:04.976275  165698 logs.go:276] 0 containers: []
	W0617 12:04:04.976283  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:04.976289  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:04.976338  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:05.015864  165698 cri.go:89] found id: ""
	I0617 12:04:05.015900  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.015913  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:05.015921  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:05.015985  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:05.054051  165698 cri.go:89] found id: ""
	I0617 12:04:05.054086  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.054099  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:05.054112  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:05.054177  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:05.090320  165698 cri.go:89] found id: ""
	I0617 12:04:05.090358  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.090371  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:05.090380  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:05.090438  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:05.126963  165698 cri.go:89] found id: ""
	I0617 12:04:05.126998  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.127008  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:05.127015  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:05.127087  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:05.162565  165698 cri.go:89] found id: ""
	I0617 12:04:05.162600  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.162611  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:05.162620  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:05.162674  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:05.195706  165698 cri.go:89] found id: ""
	I0617 12:04:05.195743  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.195752  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:05.195758  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:05.195826  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:05.236961  165698 cri.go:89] found id: ""
	I0617 12:04:05.236995  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.237006  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:05.237016  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:05.237034  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:05.252754  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:05.252783  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:05.327832  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:05.327870  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:05.327886  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:05.410220  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:05.410271  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:05.451291  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:05.451324  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:04.725172  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:07.223627  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:04.332177  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:06.831712  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:09.834563  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:12.334095  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:08.003058  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:08.016611  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:08.016670  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:08.052947  165698 cri.go:89] found id: ""
	I0617 12:04:08.052984  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.052996  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:08.053004  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:08.053057  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:08.086668  165698 cri.go:89] found id: ""
	I0617 12:04:08.086695  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.086704  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:08.086711  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:08.086773  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:08.127708  165698 cri.go:89] found id: ""
	I0617 12:04:08.127738  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.127746  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:08.127752  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:08.127814  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:08.162930  165698 cri.go:89] found id: ""
	I0617 12:04:08.162959  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.162966  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:08.162973  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:08.163026  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:08.196757  165698 cri.go:89] found id: ""
	I0617 12:04:08.196782  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.196791  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:08.196797  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:08.196851  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:08.229976  165698 cri.go:89] found id: ""
	I0617 12:04:08.230006  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.230016  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:08.230022  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:08.230083  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:08.265969  165698 cri.go:89] found id: ""
	I0617 12:04:08.266000  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.266007  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:08.266013  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:08.266071  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:08.299690  165698 cri.go:89] found id: ""
	I0617 12:04:08.299717  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.299728  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:08.299741  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:08.299761  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:08.353399  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:08.353429  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:08.366713  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:08.366739  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:08.442727  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:08.442768  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:08.442786  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:08.527832  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:08.527875  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:11.073616  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:11.087085  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:11.087172  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:11.121706  165698 cri.go:89] found id: ""
	I0617 12:04:11.121745  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.121756  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:11.121765  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:11.121839  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:11.157601  165698 cri.go:89] found id: ""
	I0617 12:04:11.157637  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.157648  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:11.157657  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:11.157719  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:11.191929  165698 cri.go:89] found id: ""
	I0617 12:04:11.191963  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.191975  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:11.191983  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:11.192045  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:11.228391  165698 cri.go:89] found id: ""
	I0617 12:04:11.228416  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.228429  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:11.228437  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:11.228497  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:11.261880  165698 cri.go:89] found id: ""
	I0617 12:04:11.261911  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.261924  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:11.261932  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:11.261998  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:11.294615  165698 cri.go:89] found id: ""
	I0617 12:04:11.294663  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.294676  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:11.294684  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:11.294745  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:11.332813  165698 cri.go:89] found id: ""
	I0617 12:04:11.332840  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.332847  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:11.332854  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:11.332911  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:11.369032  165698 cri.go:89] found id: ""
	I0617 12:04:11.369060  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.369068  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:11.369078  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:11.369090  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:11.422522  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:11.422555  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:11.436961  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:11.436990  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:11.508679  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:11.508700  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:11.508713  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:11.586574  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:11.586610  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:09.224727  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:11.225763  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:09.330868  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:11.332256  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:14.335171  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:16.836514  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:14.127034  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:14.143228  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:14.143306  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:14.178368  165698 cri.go:89] found id: ""
	I0617 12:04:14.178396  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.178405  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:14.178410  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:14.178459  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:14.209971  165698 cri.go:89] found id: ""
	I0617 12:04:14.210001  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.210010  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:14.210015  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:14.210065  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:14.244888  165698 cri.go:89] found id: ""
	I0617 12:04:14.244922  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.244933  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:14.244940  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:14.244999  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:14.277875  165698 cri.go:89] found id: ""
	I0617 12:04:14.277904  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.277914  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:14.277922  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:14.277983  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:14.312698  165698 cri.go:89] found id: ""
	I0617 12:04:14.312724  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.312733  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:14.312739  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:14.312789  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:14.350952  165698 cri.go:89] found id: ""
	I0617 12:04:14.350977  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.350987  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:14.350993  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:14.351056  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:14.389211  165698 cri.go:89] found id: ""
	I0617 12:04:14.389235  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.389243  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:14.389250  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:14.389297  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:14.426171  165698 cri.go:89] found id: ""
	I0617 12:04:14.426200  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.426211  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:14.426224  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:14.426240  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:14.500403  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:14.500430  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:14.500446  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:14.588041  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:14.588078  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:14.631948  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:14.631987  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:14.681859  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:14.681895  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
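[editor's note] With no control-plane containers to inspect, each cycle falls back to pulling recent unit logs straight from journald, as the "Gathering logs for kubelet/CRI-O" lines show. A hedged sketch of that fallback follows; the unit names and line count mirror the commands above, and running with sudo is an assumption.

// Hedged sketch: collect the last N lines of the kubelet and crio units via journalctl.
package main

import (
	"fmt"
	"os/exec"
)

func unitLogs(unit, lines string) (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", lines).CombinedOutput()
	return string(out), err
}

func main() {
	for _, u := range []string{"kubelet", "crio"} {
		logs, err := unitLogs(u, "400")
		if err != nil {
			fmt.Printf("%s: journalctl failed: %v\n", u, err)
			continue
		}
		fmt.Printf("=== last 400 lines of %s ===\n%s\n", u, logs)
	}
}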
	I0617 12:04:17.198754  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:17.212612  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:17.212679  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:17.251011  165698 cri.go:89] found id: ""
	I0617 12:04:17.251041  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.251056  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:17.251065  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:17.251128  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:17.282964  165698 cri.go:89] found id: ""
	I0617 12:04:17.282989  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.282998  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:17.283003  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:17.283060  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:17.315570  165698 cri.go:89] found id: ""
	I0617 12:04:17.315601  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.315622  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:17.315630  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:17.315691  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:17.351186  165698 cri.go:89] found id: ""
	I0617 12:04:17.351212  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.351221  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:17.351228  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:17.351287  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:17.385609  165698 cri.go:89] found id: ""
	I0617 12:04:17.385653  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.385665  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:17.385673  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:17.385741  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:17.423890  165698 cri.go:89] found id: ""
	I0617 12:04:17.423923  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.423935  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:17.423944  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:17.424000  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:17.459543  165698 cri.go:89] found id: ""
	I0617 12:04:17.459575  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.459584  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:17.459592  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:17.459660  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:17.495554  165698 cri.go:89] found id: ""
	I0617 12:04:17.495584  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.495594  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:17.495606  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:17.495632  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:17.547835  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:17.547881  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:17.562391  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:17.562422  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:17.635335  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:17.635368  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:17.635387  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:17.708946  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:17.708988  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
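The cycle above is minikube's control-plane probe for this profile (process 165698): each expected component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) is looked up with crictl, and because none are found it falls back to gathering kubelet, dmesg, describe-nodes, CRI-O and container-status logs. A minimal sketch of the same checks run by hand on the guest (shell access to the node is assumed; the commands themselves are the ones already shown in the log above):

    # Probe one control-plane component; empty output means "no container was found matching"
    sudo crictl ps -a --quiet --name=kube-apiserver
    # Fallback log sources gathered when nothing is found
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # Fails with "connection refused" as long as the apiserver is not up
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig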
	I0617 12:04:13.724618  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:16.224689  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:13.832533  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:15.833210  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:17.841693  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:19.336775  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:21.835598  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:20.249833  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:20.266234  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:20.266301  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:20.307380  165698 cri.go:89] found id: ""
	I0617 12:04:20.307415  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.307424  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:20.307431  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:20.307508  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:20.347193  165698 cri.go:89] found id: ""
	I0617 12:04:20.347225  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.347235  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:20.347243  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:20.347311  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:20.382673  165698 cri.go:89] found id: ""
	I0617 12:04:20.382711  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.382724  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:20.382732  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:20.382800  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:20.419542  165698 cri.go:89] found id: ""
	I0617 12:04:20.419573  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.419582  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:20.419588  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:20.419652  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:20.454586  165698 cri.go:89] found id: ""
	I0617 12:04:20.454618  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.454629  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:20.454636  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:20.454708  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:20.501094  165698 cri.go:89] found id: ""
	I0617 12:04:20.501123  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.501131  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:20.501137  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:20.501190  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:20.537472  165698 cri.go:89] found id: ""
	I0617 12:04:20.537512  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.537524  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:20.537532  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:20.537597  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:20.571477  165698 cri.go:89] found id: ""
	I0617 12:04:20.571509  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.571519  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:20.571532  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:20.571550  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:20.611503  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:20.611540  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:20.663868  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:20.663905  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:20.677679  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:20.677704  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:20.753645  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:20.753663  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:20.753689  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:18.725428  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:21.224314  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:20.333214  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:22.333294  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:24.333835  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:26.335344  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:23.335535  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:23.349700  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:23.349766  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:23.384327  165698 cri.go:89] found id: ""
	I0617 12:04:23.384351  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.384358  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:23.384364  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:23.384417  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:23.427145  165698 cri.go:89] found id: ""
	I0617 12:04:23.427179  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.427190  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:23.427197  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:23.427254  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:23.461484  165698 cri.go:89] found id: ""
	I0617 12:04:23.461511  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.461522  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:23.461532  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:23.461600  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:23.501292  165698 cri.go:89] found id: ""
	I0617 12:04:23.501324  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.501334  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:23.501342  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:23.501407  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:23.537605  165698 cri.go:89] found id: ""
	I0617 12:04:23.537639  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.537649  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:23.537654  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:23.537727  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:23.576580  165698 cri.go:89] found id: ""
	I0617 12:04:23.576608  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.576616  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:23.576623  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:23.576685  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:23.613124  165698 cri.go:89] found id: ""
	I0617 12:04:23.613153  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.613161  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:23.613167  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:23.613216  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:23.648662  165698 cri.go:89] found id: ""
	I0617 12:04:23.648688  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.648695  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:23.648705  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:23.648717  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:23.661737  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:23.661762  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:23.732512  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:23.732531  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:23.732547  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:23.810165  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:23.810207  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:23.855099  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:23.855136  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:26.406038  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:26.422243  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:26.422323  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:26.460959  165698 cri.go:89] found id: ""
	I0617 12:04:26.460984  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.460994  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:26.461002  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:26.461078  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:26.498324  165698 cri.go:89] found id: ""
	I0617 12:04:26.498350  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.498362  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:26.498370  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:26.498435  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:26.535299  165698 cri.go:89] found id: ""
	I0617 12:04:26.535335  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.535346  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:26.535354  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:26.535417  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:26.574623  165698 cri.go:89] found id: ""
	I0617 12:04:26.574657  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.574668  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:26.574677  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:26.574738  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:26.611576  165698 cri.go:89] found id: ""
	I0617 12:04:26.611607  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.611615  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:26.611621  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:26.611672  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:26.645664  165698 cri.go:89] found id: ""
	I0617 12:04:26.645692  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.645700  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:26.645706  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:26.645755  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:26.679442  165698 cri.go:89] found id: ""
	I0617 12:04:26.679477  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.679488  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:26.679495  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:26.679544  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:26.713512  165698 cri.go:89] found id: ""
	I0617 12:04:26.713543  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.713551  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:26.713563  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:26.713584  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:26.770823  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:26.770853  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:26.784829  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:26.784858  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:26.868457  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:26.868480  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:26.868498  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:26.948522  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:26.948561  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:23.725626  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:26.224874  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:24.830639  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:26.836648  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:28.835682  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:31.335891  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
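The interleaved pod_ready lines belong to the other profiles running in this parallel group (processes 165060, 164809 and 166103); each one polls its metrics-server pod's Ready condition every few seconds and keeps logging "Ready":"False". A one-off equivalent check with kubectl (a sketch only; the pod name is taken from the log above and the kubeconfig context is assumed to point at the profile in question):

    # Prints the pod's Ready condition; "False" matches the polling output above
    kubectl -n kube-system get pod metrics-server-569cc877fc-n2svp \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'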
	I0617 12:04:29.490891  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:29.504202  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:29.504273  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:29.544091  165698 cri.go:89] found id: ""
	I0617 12:04:29.544125  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.544137  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:29.544145  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:29.544203  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:29.581645  165698 cri.go:89] found id: ""
	I0617 12:04:29.581670  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.581679  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:29.581685  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:29.581736  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:29.621410  165698 cri.go:89] found id: ""
	I0617 12:04:29.621437  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.621447  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:29.621455  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:29.621522  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:29.659619  165698 cri.go:89] found id: ""
	I0617 12:04:29.659645  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.659654  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:29.659659  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:29.659718  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:29.698822  165698 cri.go:89] found id: ""
	I0617 12:04:29.698851  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.698859  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:29.698865  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:29.698957  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:29.741648  165698 cri.go:89] found id: ""
	I0617 12:04:29.741673  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.741680  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:29.741686  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:29.741752  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:29.777908  165698 cri.go:89] found id: ""
	I0617 12:04:29.777933  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.777941  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:29.777947  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:29.778013  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:29.812290  165698 cri.go:89] found id: ""
	I0617 12:04:29.812318  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.812328  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:29.812340  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:29.812357  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:29.857527  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:29.857552  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:29.916734  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:29.916776  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:29.930988  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:29.931013  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:30.006055  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:30.006080  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:30.006098  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:32.586549  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:32.600139  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:32.600262  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:32.641527  165698 cri.go:89] found id: ""
	I0617 12:04:32.641554  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.641570  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:32.641579  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:32.641635  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:32.687945  165698 cri.go:89] found id: ""
	I0617 12:04:32.687972  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.687981  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:32.687996  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:32.688068  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:32.725586  165698 cri.go:89] found id: ""
	I0617 12:04:32.725618  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.725629  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:32.725639  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:32.725696  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:32.764042  165698 cri.go:89] found id: ""
	I0617 12:04:32.764090  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.764107  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:32.764115  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:32.764183  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:32.800132  165698 cri.go:89] found id: ""
	I0617 12:04:32.800167  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.800180  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:32.800189  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:32.800256  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:32.840313  165698 cri.go:89] found id: ""
	I0617 12:04:32.840348  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.840359  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:32.840367  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:32.840434  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:32.878041  165698 cri.go:89] found id: ""
	I0617 12:04:32.878067  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.878076  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:32.878082  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:32.878134  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:32.913904  165698 cri.go:89] found id: ""
	I0617 12:04:32.913939  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.913950  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:32.913961  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:32.913974  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:04:28.725534  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:31.224885  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:29.330706  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:31.331989  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:33.337062  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:35.834807  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	W0617 12:04:32.987900  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:32.987929  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:32.987947  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:33.060919  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:33.060961  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:33.102602  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:33.102629  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:33.154112  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:33.154161  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:35.669336  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:35.682819  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:35.682907  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:35.717542  165698 cri.go:89] found id: ""
	I0617 12:04:35.717571  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.717579  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:35.717586  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:35.717646  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:35.754454  165698 cri.go:89] found id: ""
	I0617 12:04:35.754483  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.754495  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:35.754503  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:35.754566  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:35.791198  165698 cri.go:89] found id: ""
	I0617 12:04:35.791227  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.791237  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:35.791246  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:35.791309  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:35.826858  165698 cri.go:89] found id: ""
	I0617 12:04:35.826892  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.826903  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:35.826911  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:35.826974  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:35.866817  165698 cri.go:89] found id: ""
	I0617 12:04:35.866845  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.866853  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:35.866861  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:35.866909  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:35.918340  165698 cri.go:89] found id: ""
	I0617 12:04:35.918377  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.918388  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:35.918397  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:35.918466  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:35.960734  165698 cri.go:89] found id: ""
	I0617 12:04:35.960764  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.960774  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:35.960779  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:35.960841  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:36.002392  165698 cri.go:89] found id: ""
	I0617 12:04:36.002426  165698 logs.go:276] 0 containers: []
	W0617 12:04:36.002437  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:36.002449  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:36.002465  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:36.055130  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:36.055163  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:36.069181  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:36.069209  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:36.146078  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:36.146105  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:36.146120  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:36.223763  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:36.223797  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
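Every "describe nodes" attempt in this stream fails the same way: kubectl on the guest cannot reach the apiserver at localhost:8443 because no kube-apiserver container has started. A manual confirmation of that state (hypothetical commands, assuming shell access to the guest) could look like:

    # Empty output confirms the apiserver container never started
    sudo crictl ps -a --quiet --name=kube-apiserver
    # If the apiserver were listening, its health endpoint on 8443 would answer
    curl -k https://localhost:8443/healthz
    # The kubelet journal usually explains why the control-plane pods are not starting
    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail'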
	I0617 12:04:33.723759  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:35.725954  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:38.225200  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:33.833990  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:36.332152  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:38.332570  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:37.836765  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:40.334594  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:42.336958  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:38.767375  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:38.781301  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:38.781357  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:38.821364  165698 cri.go:89] found id: ""
	I0617 12:04:38.821390  165698 logs.go:276] 0 containers: []
	W0617 12:04:38.821400  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:38.821409  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:38.821472  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:38.860727  165698 cri.go:89] found id: ""
	I0617 12:04:38.860784  165698 logs.go:276] 0 containers: []
	W0617 12:04:38.860796  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:38.860803  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:38.860868  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:38.902932  165698 cri.go:89] found id: ""
	I0617 12:04:38.902968  165698 logs.go:276] 0 containers: []
	W0617 12:04:38.902992  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:38.902999  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:38.903088  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:38.940531  165698 cri.go:89] found id: ""
	I0617 12:04:38.940564  165698 logs.go:276] 0 containers: []
	W0617 12:04:38.940576  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:38.940584  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:38.940649  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:38.975751  165698 cri.go:89] found id: ""
	I0617 12:04:38.975792  165698 logs.go:276] 0 containers: []
	W0617 12:04:38.975827  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:38.975835  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:38.975908  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:39.011156  165698 cri.go:89] found id: ""
	I0617 12:04:39.011196  165698 logs.go:276] 0 containers: []
	W0617 12:04:39.011206  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:39.011213  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:39.011269  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:39.049266  165698 cri.go:89] found id: ""
	I0617 12:04:39.049301  165698 logs.go:276] 0 containers: []
	W0617 12:04:39.049312  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:39.049320  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:39.049373  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:39.089392  165698 cri.go:89] found id: ""
	I0617 12:04:39.089425  165698 logs.go:276] 0 containers: []
	W0617 12:04:39.089434  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:39.089444  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:39.089459  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:39.166585  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:39.166607  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:39.166619  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:39.241910  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:39.241950  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:39.287751  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:39.287782  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:39.342226  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:39.342259  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:41.857327  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:41.871379  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:41.871446  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:41.907435  165698 cri.go:89] found id: ""
	I0617 12:04:41.907472  165698 logs.go:276] 0 containers: []
	W0617 12:04:41.907483  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:41.907492  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:41.907542  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:41.941684  165698 cri.go:89] found id: ""
	I0617 12:04:41.941725  165698 logs.go:276] 0 containers: []
	W0617 12:04:41.941737  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:41.941745  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:41.941819  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:41.977359  165698 cri.go:89] found id: ""
	I0617 12:04:41.977395  165698 logs.go:276] 0 containers: []
	W0617 12:04:41.977407  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:41.977415  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:41.977478  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:42.015689  165698 cri.go:89] found id: ""
	I0617 12:04:42.015723  165698 logs.go:276] 0 containers: []
	W0617 12:04:42.015734  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:42.015742  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:42.015803  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:42.050600  165698 cri.go:89] found id: ""
	I0617 12:04:42.050626  165698 logs.go:276] 0 containers: []
	W0617 12:04:42.050637  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:42.050645  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:42.050707  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:42.088174  165698 cri.go:89] found id: ""
	I0617 12:04:42.088201  165698 logs.go:276] 0 containers: []
	W0617 12:04:42.088212  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:42.088221  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:42.088290  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:42.127335  165698 cri.go:89] found id: ""
	I0617 12:04:42.127364  165698 logs.go:276] 0 containers: []
	W0617 12:04:42.127375  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:42.127384  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:42.127443  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:42.163435  165698 cri.go:89] found id: ""
	I0617 12:04:42.163481  165698 logs.go:276] 0 containers: []
	W0617 12:04:42.163492  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:42.163505  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:42.163527  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:42.233233  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:42.233262  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:42.233280  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:42.311695  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:42.311741  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:42.378134  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:42.378163  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:42.439614  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:42.439647  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:40.726373  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:43.225144  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:40.336291  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:42.831220  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:44.835811  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:47.335772  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:44.953738  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:44.967822  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:44.967884  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:45.004583  165698 cri.go:89] found id: ""
	I0617 12:04:45.004687  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.004732  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:45.004741  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:45.004797  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:45.038912  165698 cri.go:89] found id: ""
	I0617 12:04:45.038939  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.038949  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:45.038957  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:45.039026  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:45.073594  165698 cri.go:89] found id: ""
	I0617 12:04:45.073620  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.073628  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:45.073634  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:45.073684  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:45.108225  165698 cri.go:89] found id: ""
	I0617 12:04:45.108253  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.108261  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:45.108267  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:45.108317  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:45.139522  165698 cri.go:89] found id: ""
	I0617 12:04:45.139545  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.139553  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:45.139559  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:45.139609  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:45.173705  165698 cri.go:89] found id: ""
	I0617 12:04:45.173735  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.173745  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:45.173752  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:45.173813  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:45.206448  165698 cri.go:89] found id: ""
	I0617 12:04:45.206477  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.206486  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:45.206493  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:45.206551  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:45.242925  165698 cri.go:89] found id: ""
	I0617 12:04:45.242952  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.242962  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:45.242981  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:45.242998  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:45.294669  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:45.294700  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:45.307642  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:45.307670  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:45.381764  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:45.381788  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:45.381805  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:45.469022  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:45.469056  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:45.724236  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:48.225656  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:45.332888  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:47.832326  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:49.337260  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:51.338718  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:48.014169  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:48.029895  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:48.029984  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:48.086421  165698 cri.go:89] found id: ""
	I0617 12:04:48.086456  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.086468  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:48.086477  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:48.086554  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:48.135673  165698 cri.go:89] found id: ""
	I0617 12:04:48.135705  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.135713  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:48.135733  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:48.135808  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:48.184330  165698 cri.go:89] found id: ""
	I0617 12:04:48.184353  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.184362  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:48.184368  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:48.184418  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:48.221064  165698 cri.go:89] found id: ""
	I0617 12:04:48.221095  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.221103  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:48.221112  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:48.221175  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:48.264464  165698 cri.go:89] found id: ""
	I0617 12:04:48.264495  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.264502  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:48.264508  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:48.264561  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:48.302144  165698 cri.go:89] found id: ""
	I0617 12:04:48.302180  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.302191  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:48.302199  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:48.302263  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:48.345431  165698 cri.go:89] found id: ""
	I0617 12:04:48.345458  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.345465  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:48.345472  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:48.345539  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:48.383390  165698 cri.go:89] found id: ""
	I0617 12:04:48.383423  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.383434  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:48.383447  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:48.383478  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:48.422328  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:48.422356  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:48.473698  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:48.473735  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:48.488399  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:48.488429  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:48.566851  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:48.566871  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:48.566884  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:51.149626  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:51.162855  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:51.162926  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:51.199056  165698 cri.go:89] found id: ""
	I0617 12:04:51.199091  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.199102  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:51.199109  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:51.199172  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:51.238773  165698 cri.go:89] found id: ""
	I0617 12:04:51.238810  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.238821  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:51.238827  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:51.238883  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:51.279049  165698 cri.go:89] found id: ""
	I0617 12:04:51.279079  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.279092  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:51.279100  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:51.279166  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:51.324923  165698 cri.go:89] found id: ""
	I0617 12:04:51.324957  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.324969  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:51.324976  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:51.325028  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:51.363019  165698 cri.go:89] found id: ""
	I0617 12:04:51.363055  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.363068  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:51.363077  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:51.363142  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:51.399620  165698 cri.go:89] found id: ""
	I0617 12:04:51.399652  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.399661  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:51.399675  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:51.399758  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:51.434789  165698 cri.go:89] found id: ""
	I0617 12:04:51.434824  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.434836  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:51.434844  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:51.434910  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:51.470113  165698 cri.go:89] found id: ""
	I0617 12:04:51.470140  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.470149  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:51.470160  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:51.470176  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:51.526138  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:51.526173  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:51.539451  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:51.539491  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:51.613418  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:51.613437  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:51.613450  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:51.691971  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:51.692010  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:50.724405  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:52.725426  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:50.332363  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:52.332932  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:53.834955  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:56.334584  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:54.234514  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:54.249636  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:54.249724  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:54.283252  165698 cri.go:89] found id: ""
	I0617 12:04:54.283287  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.283300  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:54.283307  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:54.283367  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:54.319153  165698 cri.go:89] found id: ""
	I0617 12:04:54.319207  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.319218  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:54.319226  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:54.319290  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:54.361450  165698 cri.go:89] found id: ""
	I0617 12:04:54.361480  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.361491  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:54.361498  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:54.361562  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:54.397806  165698 cri.go:89] found id: ""
	I0617 12:04:54.397834  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.397843  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:54.397849  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:54.397899  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:54.447119  165698 cri.go:89] found id: ""
	I0617 12:04:54.447147  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.447155  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:54.447161  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:54.447211  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:54.489717  165698 cri.go:89] found id: ""
	I0617 12:04:54.489751  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.489760  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:54.489766  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:54.489830  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:54.532840  165698 cri.go:89] found id: ""
	I0617 12:04:54.532943  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.532975  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:54.532989  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:54.533100  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:54.568227  165698 cri.go:89] found id: ""
	I0617 12:04:54.568369  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.568391  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:54.568403  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:54.568420  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:54.583140  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:54.583174  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:54.661258  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:54.661281  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:54.661296  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:54.750472  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:54.750511  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:54.797438  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:54.797467  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:57.349800  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:57.364820  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:57.364879  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:57.405065  165698 cri.go:89] found id: ""
	I0617 12:04:57.405093  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.405101  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:57.405106  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:57.405153  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:57.445707  165698 cri.go:89] found id: ""
	I0617 12:04:57.445741  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.445752  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:57.445760  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:57.445829  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:57.486911  165698 cri.go:89] found id: ""
	I0617 12:04:57.486940  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.486948  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:57.486955  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:57.487014  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:57.521218  165698 cri.go:89] found id: ""
	I0617 12:04:57.521254  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.521266  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:57.521274  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:57.521342  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:57.555762  165698 cri.go:89] found id: ""
	I0617 12:04:57.555794  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.555803  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:57.555808  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:57.555863  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:57.591914  165698 cri.go:89] found id: ""
	I0617 12:04:57.591945  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.591956  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:57.591971  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:57.592037  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:57.626435  165698 cri.go:89] found id: ""
	I0617 12:04:57.626463  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.626471  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:57.626477  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:57.626527  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:57.665088  165698 cri.go:89] found id: ""
	I0617 12:04:57.665118  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.665126  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:57.665137  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:57.665152  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:57.716284  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:57.716316  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:57.730179  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:57.730204  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:57.808904  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:57.808933  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:57.808954  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:57.894499  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:57.894530  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:55.224507  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:57.224583  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:54.831112  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:56.832477  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:58.334640  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:00.335137  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:00.435957  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:00.450812  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:00.450890  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:00.491404  165698 cri.go:89] found id: ""
	I0617 12:05:00.491432  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.491440  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:00.491446  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:00.491523  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:00.526711  165698 cri.go:89] found id: ""
	I0617 12:05:00.526739  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.526747  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:00.526753  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:00.526817  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:00.562202  165698 cri.go:89] found id: ""
	I0617 12:05:00.562236  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.562246  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:00.562255  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:00.562323  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:00.602754  165698 cri.go:89] found id: ""
	I0617 12:05:00.602790  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.602802  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:00.602811  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:00.602877  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:00.645666  165698 cri.go:89] found id: ""
	I0617 12:05:00.645703  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.645715  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:00.645723  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:00.645788  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:00.684649  165698 cri.go:89] found id: ""
	I0617 12:05:00.684685  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.684694  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:00.684701  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:00.684784  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:00.727139  165698 cri.go:89] found id: ""
	I0617 12:05:00.727160  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.727167  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:00.727173  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:00.727238  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:00.764401  165698 cri.go:89] found id: ""
	I0617 12:05:00.764433  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.764444  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:00.764455  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:00.764474  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:00.777301  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:00.777322  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:00.849752  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:00.849778  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:00.849795  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:00.930220  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:00.930266  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:00.970076  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:00.970116  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:59.226429  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:01.725079  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:59.337081  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:01.834932  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:02.834132  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:05.334066  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:07.335366  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:03.526070  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:03.541150  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:03.541229  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:03.584416  165698 cri.go:89] found id: ""
	I0617 12:05:03.584451  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.584463  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:03.584472  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:03.584535  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:03.623509  165698 cri.go:89] found id: ""
	I0617 12:05:03.623543  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.623552  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:03.623558  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:03.623611  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:03.661729  165698 cri.go:89] found id: ""
	I0617 12:05:03.661765  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.661778  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:03.661787  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:03.661852  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:03.702952  165698 cri.go:89] found id: ""
	I0617 12:05:03.702985  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.703008  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:03.703033  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:03.703100  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:03.746534  165698 cri.go:89] found id: ""
	I0617 12:05:03.746570  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.746578  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:03.746584  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:03.746648  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:03.784472  165698 cri.go:89] found id: ""
	I0617 12:05:03.784506  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.784515  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:03.784522  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:03.784580  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:03.821033  165698 cri.go:89] found id: ""
	I0617 12:05:03.821066  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.821077  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:03.821085  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:03.821146  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:03.859438  165698 cri.go:89] found id: ""
	I0617 12:05:03.859474  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.859487  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:03.859497  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:03.859513  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:03.940723  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:03.940770  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:03.986267  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:03.986303  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:04.037999  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:04.038039  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:04.051382  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:04.051415  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:04.121593  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:06.622475  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:06.636761  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:06.636842  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:06.673954  165698 cri.go:89] found id: ""
	I0617 12:05:06.673995  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.674007  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:06.674015  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:06.674084  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:06.708006  165698 cri.go:89] found id: ""
	I0617 12:05:06.708037  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.708047  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:06.708055  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:06.708124  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:06.743819  165698 cri.go:89] found id: ""
	I0617 12:05:06.743852  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.743864  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:06.743872  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:06.743934  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:06.781429  165698 cri.go:89] found id: ""
	I0617 12:05:06.781457  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.781465  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:06.781473  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:06.781540  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:06.818404  165698 cri.go:89] found id: ""
	I0617 12:05:06.818435  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.818447  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:06.818456  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:06.818516  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:06.857880  165698 cri.go:89] found id: ""
	I0617 12:05:06.857913  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.857924  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:06.857933  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:06.857993  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:06.893010  165698 cri.go:89] found id: ""
	I0617 12:05:06.893050  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.893059  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:06.893065  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:06.893118  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:06.926302  165698 cri.go:89] found id: ""
	I0617 12:05:06.926336  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.926347  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:06.926360  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:06.926378  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:06.997173  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:06.997197  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:06.997215  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:07.082843  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:07.082885  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:07.122542  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:07.122572  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:07.177033  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:07.177070  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:03.725338  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:06.225466  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:04.331639  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:06.331988  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:08.332139  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:09.835119  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:12.333346  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:09.693217  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:09.707043  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:09.707110  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:09.742892  165698 cri.go:89] found id: ""
	I0617 12:05:09.742918  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.742927  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:09.742933  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:09.742982  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:09.776938  165698 cri.go:89] found id: ""
	I0617 12:05:09.776969  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.776976  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:09.776982  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:09.777030  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:09.813613  165698 cri.go:89] found id: ""
	I0617 12:05:09.813643  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.813651  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:09.813658  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:09.813705  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:09.855483  165698 cri.go:89] found id: ""
	I0617 12:05:09.855516  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.855525  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:09.855532  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:09.855596  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:09.890808  165698 cri.go:89] found id: ""
	I0617 12:05:09.890844  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.890854  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:09.890862  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:09.890930  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:09.927656  165698 cri.go:89] found id: ""
	I0617 12:05:09.927684  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.927693  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:09.927703  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:09.927758  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:09.968130  165698 cri.go:89] found id: ""
	I0617 12:05:09.968163  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.968174  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:09.968183  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:09.968239  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:10.010197  165698 cri.go:89] found id: ""
	I0617 12:05:10.010220  165698 logs.go:276] 0 containers: []
	W0617 12:05:10.010228  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:10.010239  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:10.010252  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:10.063999  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:10.064040  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:10.078837  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:10.078873  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:10.155932  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:10.155954  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:10.155967  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:10.232859  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:10.232901  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:12.772943  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:12.787936  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:12.788024  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:12.828457  165698 cri.go:89] found id: ""
	I0617 12:05:12.828483  165698 logs.go:276] 0 containers: []
	W0617 12:05:12.828491  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:12.828498  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:12.828562  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:12.862265  165698 cri.go:89] found id: ""
	I0617 12:05:12.862296  165698 logs.go:276] 0 containers: []
	W0617 12:05:12.862306  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:12.862313  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:12.862372  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:12.899673  165698 cri.go:89] found id: ""
	I0617 12:05:12.899698  165698 logs.go:276] 0 containers: []
	W0617 12:05:12.899706  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:12.899712  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:12.899759  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:12.943132  165698 cri.go:89] found id: ""
	I0617 12:05:12.943161  165698 logs.go:276] 0 containers: []
	W0617 12:05:12.943169  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:12.943175  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:12.943227  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:08.724369  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:10.725166  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:13.224799  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:10.333769  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:12.832493  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:14.336437  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:16.835155  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:12.985651  165698 cri.go:89] found id: ""
	I0617 12:05:12.985677  165698 logs.go:276] 0 containers: []
	W0617 12:05:12.985685  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:12.985691  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:12.985747  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:13.021484  165698 cri.go:89] found id: ""
	I0617 12:05:13.021508  165698 logs.go:276] 0 containers: []
	W0617 12:05:13.021516  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:13.021522  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:13.021569  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:13.060658  165698 cri.go:89] found id: ""
	I0617 12:05:13.060689  165698 logs.go:276] 0 containers: []
	W0617 12:05:13.060705  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:13.060713  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:13.060782  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:13.106008  165698 cri.go:89] found id: ""
	I0617 12:05:13.106041  165698 logs.go:276] 0 containers: []
	W0617 12:05:13.106052  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:13.106066  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:13.106083  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:13.160199  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:13.160231  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:13.173767  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:13.173804  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:13.245358  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:13.245383  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:13.245399  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:13.323046  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:13.323085  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:15.872024  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:15.885550  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:15.885624  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:15.920303  165698 cri.go:89] found id: ""
	I0617 12:05:15.920332  165698 logs.go:276] 0 containers: []
	W0617 12:05:15.920344  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:15.920358  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:15.920423  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:15.955132  165698 cri.go:89] found id: ""
	I0617 12:05:15.955158  165698 logs.go:276] 0 containers: []
	W0617 12:05:15.955166  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:15.955172  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:15.955220  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:15.992995  165698 cri.go:89] found id: ""
	I0617 12:05:15.993034  165698 logs.go:276] 0 containers: []
	W0617 12:05:15.993053  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:15.993060  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:15.993127  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:16.032603  165698 cri.go:89] found id: ""
	I0617 12:05:16.032638  165698 logs.go:276] 0 containers: []
	W0617 12:05:16.032650  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:16.032658  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:16.032716  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:16.071770  165698 cri.go:89] found id: ""
	I0617 12:05:16.071804  165698 logs.go:276] 0 containers: []
	W0617 12:05:16.071816  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:16.071824  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:16.071899  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:16.106172  165698 cri.go:89] found id: ""
	I0617 12:05:16.106206  165698 logs.go:276] 0 containers: []
	W0617 12:05:16.106218  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:16.106226  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:16.106292  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:16.139406  165698 cri.go:89] found id: ""
	I0617 12:05:16.139436  165698 logs.go:276] 0 containers: []
	W0617 12:05:16.139443  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:16.139449  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:16.139517  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:16.174513  165698 cri.go:89] found id: ""
	I0617 12:05:16.174554  165698 logs.go:276] 0 containers: []
	W0617 12:05:16.174565  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:16.174580  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:16.174597  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:16.240912  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:16.240940  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:16.240958  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:16.323853  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:16.323891  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:16.372632  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:16.372659  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:16.428367  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:16.428406  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:15.224918  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:17.725226  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:15.332512  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:17.833710  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:19.334324  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:21.334654  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:18.943551  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:18.957394  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:18.957490  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:18.991967  165698 cri.go:89] found id: ""
	I0617 12:05:18.992006  165698 logs.go:276] 0 containers: []
	W0617 12:05:18.992017  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:18.992027  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:18.992092  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:19.025732  165698 cri.go:89] found id: ""
	I0617 12:05:19.025763  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.025775  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:19.025783  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:19.025856  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:19.061786  165698 cri.go:89] found id: ""
	I0617 12:05:19.061820  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.061830  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:19.061838  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:19.061906  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:19.098819  165698 cri.go:89] found id: ""
	I0617 12:05:19.098856  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.098868  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:19.098876  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:19.098947  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:19.139840  165698 cri.go:89] found id: ""
	I0617 12:05:19.139877  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.139886  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:19.139894  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:19.139965  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:19.176546  165698 cri.go:89] found id: ""
	I0617 12:05:19.176578  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.176590  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:19.176598  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:19.176671  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:19.209948  165698 cri.go:89] found id: ""
	I0617 12:05:19.209985  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.209997  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:19.210005  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:19.210087  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:19.246751  165698 cri.go:89] found id: ""
	I0617 12:05:19.246788  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.246799  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:19.246812  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:19.246830  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:19.322272  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:19.322316  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:19.370147  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:19.370187  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:19.422699  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:19.422749  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:19.437255  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:19.437284  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:19.510077  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
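(Annotation: the block above repeats for the remainder of this log. No control-plane containers are found by crictl, so every "describe nodes" attempt fails with "connection to the server localhost:8443 was refused", and minikube falls back to gathering CRI-O, kubelet, dmesg and container-status output. The same diagnosis could be reproduced by hand on the node using the commands the test itself runs, for example:

	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo journalctl -u kubelet -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

These commands are copied from the log lines above; an empty crictl result confirms the kube-apiserver container never started, which is consistent with the refused connection on localhost:8443.)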
	I0617 12:05:22.010840  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:22.024791  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:22.024879  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:22.060618  165698 cri.go:89] found id: ""
	I0617 12:05:22.060658  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.060667  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:22.060674  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:22.060742  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:22.100228  165698 cri.go:89] found id: ""
	I0617 12:05:22.100259  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.100268  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:22.100274  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:22.100343  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:22.135629  165698 cri.go:89] found id: ""
	I0617 12:05:22.135657  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.135665  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:22.135671  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:22.135730  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:22.186027  165698 cri.go:89] found id: ""
	I0617 12:05:22.186064  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.186076  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:22.186085  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:22.186148  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:22.220991  165698 cri.go:89] found id: ""
	I0617 12:05:22.221019  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.221029  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:22.221037  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:22.221104  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:22.266306  165698 cri.go:89] found id: ""
	I0617 12:05:22.266337  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.266348  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:22.266357  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:22.266414  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:22.303070  165698 cri.go:89] found id: ""
	I0617 12:05:22.303104  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.303116  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:22.303124  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:22.303190  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:22.339792  165698 cri.go:89] found id: ""
	I0617 12:05:22.339819  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.339829  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:22.339840  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:22.339856  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:22.422360  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:22.422397  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:22.465744  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:22.465777  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:22.516199  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:22.516232  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:22.529961  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:22.529983  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:22.601519  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:20.225369  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:22.226699  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:19.834562  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:21.837426  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:23.336540  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:25.835706  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:25.102655  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:25.116893  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:25.116959  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:25.156370  165698 cri.go:89] found id: ""
	I0617 12:05:25.156396  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.156404  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:25.156410  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:25.156468  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:25.193123  165698 cri.go:89] found id: ""
	I0617 12:05:25.193199  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.193221  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:25.193234  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:25.193301  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:25.232182  165698 cri.go:89] found id: ""
	I0617 12:05:25.232209  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.232219  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:25.232227  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:25.232285  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:25.266599  165698 cri.go:89] found id: ""
	I0617 12:05:25.266630  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.266639  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:25.266645  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:25.266701  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:25.308732  165698 cri.go:89] found id: ""
	I0617 12:05:25.308762  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.308770  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:25.308776  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:25.308836  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:25.348817  165698 cri.go:89] found id: ""
	I0617 12:05:25.348858  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.348871  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:25.348879  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:25.348946  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:25.389343  165698 cri.go:89] found id: ""
	I0617 12:05:25.389375  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.389387  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:25.389393  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:25.389452  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:25.427014  165698 cri.go:89] found id: ""
	I0617 12:05:25.427043  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.427055  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:25.427067  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:25.427083  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:25.441361  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:25.441390  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:25.518967  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:25.518993  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:25.519006  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:25.601411  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:25.601450  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:25.651636  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:25.651674  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:24.725515  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:27.223821  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:24.333548  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:26.832428  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:27.836661  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:30.334313  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:32.336489  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:28.202148  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:28.215710  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:28.215792  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:28.254961  165698 cri.go:89] found id: ""
	I0617 12:05:28.254986  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.255000  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:28.255007  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:28.255061  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:28.292574  165698 cri.go:89] found id: ""
	I0617 12:05:28.292606  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.292614  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:28.292620  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:28.292683  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:28.329036  165698 cri.go:89] found id: ""
	I0617 12:05:28.329067  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.329077  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:28.329085  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:28.329152  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:28.366171  165698 cri.go:89] found id: ""
	I0617 12:05:28.366197  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.366206  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:28.366212  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:28.366273  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:28.401380  165698 cri.go:89] found id: ""
	I0617 12:05:28.401407  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.401417  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:28.401424  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:28.401486  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:28.438767  165698 cri.go:89] found id: ""
	I0617 12:05:28.438798  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.438810  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:28.438817  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:28.438876  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:28.472706  165698 cri.go:89] found id: ""
	I0617 12:05:28.472761  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.472772  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:28.472779  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:28.472829  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:28.509525  165698 cri.go:89] found id: ""
	I0617 12:05:28.509548  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.509556  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:28.509565  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:28.509577  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:28.606008  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:28.606059  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:28.665846  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:28.665874  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:28.721599  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:28.721627  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:28.735040  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:28.735062  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:28.811954  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:31.312554  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:31.326825  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:31.326905  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:31.364862  165698 cri.go:89] found id: ""
	I0617 12:05:31.364891  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.364902  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:31.364910  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:31.364976  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:31.396979  165698 cri.go:89] found id: ""
	I0617 12:05:31.397013  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.397027  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:31.397035  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:31.397098  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:31.430617  165698 cri.go:89] found id: ""
	I0617 12:05:31.430647  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.430657  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:31.430665  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:31.430728  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:31.462308  165698 cri.go:89] found id: ""
	I0617 12:05:31.462338  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.462345  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:31.462350  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:31.462399  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:31.495406  165698 cri.go:89] found id: ""
	I0617 12:05:31.495435  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.495444  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:31.495452  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:31.495553  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:31.538702  165698 cri.go:89] found id: ""
	I0617 12:05:31.538729  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.538739  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:31.538750  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:31.538813  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:31.572637  165698 cri.go:89] found id: ""
	I0617 12:05:31.572666  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.572677  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:31.572685  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:31.572745  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:31.609307  165698 cri.go:89] found id: ""
	I0617 12:05:31.609341  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.609352  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:31.609364  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:31.609380  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:31.622445  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:31.622471  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:31.699170  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:31.699191  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:31.699209  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:31.775115  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:31.775156  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:31.815836  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:31.815866  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:29.225028  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:31.727009  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:29.333400  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:31.834599  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:34.836093  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:37.335140  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:34.372097  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:34.393542  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:34.393607  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:34.437265  165698 cri.go:89] found id: ""
	I0617 12:05:34.437294  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.437305  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:34.437314  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:34.437382  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:34.474566  165698 cri.go:89] found id: ""
	I0617 12:05:34.474596  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.474609  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:34.474617  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:34.474680  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:34.510943  165698 cri.go:89] found id: ""
	I0617 12:05:34.510975  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.510986  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:34.511000  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:34.511072  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:34.548124  165698 cri.go:89] found id: ""
	I0617 12:05:34.548160  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.548172  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:34.548179  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:34.548241  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:34.582428  165698 cri.go:89] found id: ""
	I0617 12:05:34.582453  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.582460  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:34.582467  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:34.582514  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:34.616895  165698 cri.go:89] found id: ""
	I0617 12:05:34.616937  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.616950  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:34.616957  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:34.617019  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:34.656116  165698 cri.go:89] found id: ""
	I0617 12:05:34.656144  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.656155  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:34.656162  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:34.656226  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:34.695649  165698 cri.go:89] found id: ""
	I0617 12:05:34.695680  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.695692  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:34.695705  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:34.695722  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:34.747910  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:34.747956  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:34.762177  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:34.762206  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:34.840395  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:34.840423  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:34.840440  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:34.922962  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:34.923002  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:37.464659  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:37.480351  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:37.480416  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:37.521249  165698 cri.go:89] found id: ""
	I0617 12:05:37.521279  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.521286  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:37.521293  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:37.521340  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:37.561053  165698 cri.go:89] found id: ""
	I0617 12:05:37.561079  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.561087  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:37.561094  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:37.561151  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:37.599019  165698 cri.go:89] found id: ""
	I0617 12:05:37.599057  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.599066  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:37.599074  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:37.599134  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:37.638276  165698 cri.go:89] found id: ""
	I0617 12:05:37.638304  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.638315  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:37.638323  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:37.638389  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:37.677819  165698 cri.go:89] found id: ""
	I0617 12:05:37.677845  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.677853  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:37.677859  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:37.677910  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:37.715850  165698 cri.go:89] found id: ""
	I0617 12:05:37.715877  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.715888  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:37.715897  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:37.715962  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:37.755533  165698 cri.go:89] found id: ""
	I0617 12:05:37.755563  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.755570  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:37.755576  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:37.755636  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:37.791826  165698 cri.go:89] found id: ""
	I0617 12:05:37.791850  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.791859  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:37.791872  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:37.791888  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:37.844824  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:37.844853  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:37.860933  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:37.860963  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:37.926497  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:37.926519  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:37.926535  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:34.224078  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:36.224464  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:38.224753  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:34.333888  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:36.832374  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:39.336299  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:41.834494  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:38.003814  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:38.003853  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:40.546386  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:40.560818  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:40.560896  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:40.596737  165698 cri.go:89] found id: ""
	I0617 12:05:40.596777  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.596784  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:40.596791  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:40.596844  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:40.631518  165698 cri.go:89] found id: ""
	I0617 12:05:40.631556  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.631570  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:40.631611  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:40.631683  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:40.674962  165698 cri.go:89] found id: ""
	I0617 12:05:40.674997  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.675006  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:40.675012  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:40.675064  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:40.716181  165698 cri.go:89] found id: ""
	I0617 12:05:40.716210  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.716218  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:40.716226  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:40.716286  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:40.756312  165698 cri.go:89] found id: ""
	I0617 12:05:40.756339  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.756348  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:40.756353  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:40.756406  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:40.791678  165698 cri.go:89] found id: ""
	I0617 12:05:40.791733  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.791750  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:40.791759  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:40.791830  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:40.830717  165698 cri.go:89] found id: ""
	I0617 12:05:40.830754  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.830766  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:40.830774  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:40.830854  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:40.868139  165698 cri.go:89] found id: ""
	I0617 12:05:40.868169  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.868178  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:40.868198  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:40.868224  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:40.920319  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:40.920353  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:40.934948  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:40.934974  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:41.005349  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:41.005371  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:41.005388  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:41.086783  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:41.086842  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:40.724767  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:43.223836  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:38.834031  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:41.331190  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:43.332593  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:44.334114  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:46.334595  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:43.625515  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:43.638942  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:43.639019  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:43.673703  165698 cri.go:89] found id: ""
	I0617 12:05:43.673735  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.673747  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:43.673756  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:43.673822  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:43.709417  165698 cri.go:89] found id: ""
	I0617 12:05:43.709449  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.709460  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:43.709468  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:43.709529  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:43.742335  165698 cri.go:89] found id: ""
	I0617 12:05:43.742368  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.742379  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:43.742389  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:43.742449  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:43.779112  165698 cri.go:89] found id: ""
	I0617 12:05:43.779141  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.779150  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:43.779155  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:43.779219  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:43.813362  165698 cri.go:89] found id: ""
	I0617 12:05:43.813397  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.813406  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:43.813414  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:43.813464  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:43.850456  165698 cri.go:89] found id: ""
	I0617 12:05:43.850484  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.850493  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:43.850499  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:43.850547  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:43.884527  165698 cri.go:89] found id: ""
	I0617 12:05:43.884555  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.884564  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:43.884571  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:43.884632  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:43.921440  165698 cri.go:89] found id: ""
	I0617 12:05:43.921476  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.921488  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:43.921501  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:43.921517  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:43.973687  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:43.973727  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:43.988114  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:43.988143  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:44.055084  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:44.055119  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:44.055138  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:44.134628  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:44.134665  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:46.677852  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:46.690688  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:46.690747  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:46.724055  165698 cri.go:89] found id: ""
	I0617 12:05:46.724090  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.724101  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:46.724110  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:46.724171  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:46.759119  165698 cri.go:89] found id: ""
	I0617 12:05:46.759150  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.759161  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:46.759169  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:46.759227  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:46.796392  165698 cri.go:89] found id: ""
	I0617 12:05:46.796424  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.796435  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:46.796442  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:46.796504  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:46.831727  165698 cri.go:89] found id: ""
	I0617 12:05:46.831761  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.831770  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:46.831777  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:46.831845  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:46.866662  165698 cri.go:89] found id: ""
	I0617 12:05:46.866693  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.866702  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:46.866708  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:46.866757  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:46.905045  165698 cri.go:89] found id: ""
	I0617 12:05:46.905070  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.905078  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:46.905084  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:46.905130  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:46.940879  165698 cri.go:89] found id: ""
	I0617 12:05:46.940907  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.940915  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:46.940926  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:46.940974  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:46.977247  165698 cri.go:89] found id: ""
	I0617 12:05:46.977290  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.977301  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:46.977314  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:46.977331  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:47.046094  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:47.046116  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:47.046133  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:47.122994  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:47.123038  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:47.166273  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:47.166313  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:47.221392  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:47.221429  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:45.228807  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:47.723584  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:45.834805  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:48.333121  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:48.335758  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:50.833989  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:49.739113  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:49.752880  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:49.753004  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:49.791177  165698 cri.go:89] found id: ""
	I0617 12:05:49.791218  165698 logs.go:276] 0 containers: []
	W0617 12:05:49.791242  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:49.791251  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:49.791322  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:49.831602  165698 cri.go:89] found id: ""
	I0617 12:05:49.831633  165698 logs.go:276] 0 containers: []
	W0617 12:05:49.831644  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:49.831652  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:49.831719  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:49.870962  165698 cri.go:89] found id: ""
	I0617 12:05:49.870998  165698 logs.go:276] 0 containers: []
	W0617 12:05:49.871011  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:49.871019  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:49.871092  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:49.917197  165698 cri.go:89] found id: ""
	I0617 12:05:49.917232  165698 logs.go:276] 0 containers: []
	W0617 12:05:49.917243  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:49.917252  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:49.917320  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:49.952997  165698 cri.go:89] found id: ""
	I0617 12:05:49.953034  165698 logs.go:276] 0 containers: []
	W0617 12:05:49.953047  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:49.953056  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:49.953114  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:50.001925  165698 cri.go:89] found id: ""
	I0617 12:05:50.001965  165698 logs.go:276] 0 containers: []
	W0617 12:05:50.001977  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:50.001986  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:50.002059  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:50.043374  165698 cri.go:89] found id: ""
	I0617 12:05:50.043403  165698 logs.go:276] 0 containers: []
	W0617 12:05:50.043412  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:50.043419  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:50.043496  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:50.082974  165698 cri.go:89] found id: ""
	I0617 12:05:50.083009  165698 logs.go:276] 0 containers: []
	W0617 12:05:50.083020  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:50.083029  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:50.083043  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:50.134116  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:50.134159  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:50.148478  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:50.148511  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:50.227254  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:50.227276  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:50.227288  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:50.305920  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:50.305960  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
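	The block above is one pass of the control-plane probe that this process (165698) repeats every few seconds: each expected component is looked up with `sudo crictl ps -a --quiet --name=<component>`, and an empty result produces the "No container was found matching ..." warnings. A minimal Go sketch of that pattern, illustrative only (not minikube source) and assuming `sudo` and `crictl` are available on the host:

```go
// Illustrative sketch of the repeated "listing CRI containers" step.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of containers whose name matches `name`.
// -a includes stopped containers; --quiet prints only container IDs.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}
```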
	I0617 12:05:52.848811  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:52.862612  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:52.862669  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:52.896379  165698 cri.go:89] found id: ""
	I0617 12:05:52.896410  165698 logs.go:276] 0 containers: []
	W0617 12:05:52.896421  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:52.896429  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:52.896488  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:52.933387  165698 cri.go:89] found id: ""
	I0617 12:05:52.933422  165698 logs.go:276] 0 containers: []
	W0617 12:05:52.933432  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:52.933439  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:52.933501  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:52.971055  165698 cri.go:89] found id: ""
	I0617 12:05:52.971091  165698 logs.go:276] 0 containers: []
	W0617 12:05:52.971102  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:52.971110  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:52.971168  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:49.724816  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:52.224660  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:50.334092  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:52.831686  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:52.835473  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:55.334017  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:57.334957  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:53.003815  165698 cri.go:89] found id: ""
	I0617 12:05:53.003846  165698 logs.go:276] 0 containers: []
	W0617 12:05:53.003857  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:53.003864  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:53.003927  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:53.039133  165698 cri.go:89] found id: ""
	I0617 12:05:53.039161  165698 logs.go:276] 0 containers: []
	W0617 12:05:53.039169  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:53.039175  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:53.039229  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:53.077703  165698 cri.go:89] found id: ""
	I0617 12:05:53.077756  165698 logs.go:276] 0 containers: []
	W0617 12:05:53.077773  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:53.077780  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:53.077831  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:53.119187  165698 cri.go:89] found id: ""
	I0617 12:05:53.119216  165698 logs.go:276] 0 containers: []
	W0617 12:05:53.119223  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:53.119230  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:53.119287  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:53.154423  165698 cri.go:89] found id: ""
	I0617 12:05:53.154457  165698 logs.go:276] 0 containers: []
	W0617 12:05:53.154467  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:53.154480  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:53.154496  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:53.202745  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:53.202778  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:53.216510  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:53.216537  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:53.295687  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:53.295712  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:53.295732  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:53.375064  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:53.375095  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:55.915113  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:55.929155  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:55.929239  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:55.964589  165698 cri.go:89] found id: ""
	I0617 12:05:55.964625  165698 logs.go:276] 0 containers: []
	W0617 12:05:55.964634  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:55.964640  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:55.964702  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:56.003659  165698 cri.go:89] found id: ""
	I0617 12:05:56.003691  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.003701  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:56.003709  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:56.003778  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:56.039674  165698 cri.go:89] found id: ""
	I0617 12:05:56.039707  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.039717  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:56.039724  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:56.039786  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:56.077695  165698 cri.go:89] found id: ""
	I0617 12:05:56.077736  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.077748  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:56.077756  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:56.077826  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:56.116397  165698 cri.go:89] found id: ""
	I0617 12:05:56.116430  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.116442  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:56.116451  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:56.116512  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:56.152395  165698 cri.go:89] found id: ""
	I0617 12:05:56.152433  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.152445  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:56.152454  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:56.152513  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:56.189740  165698 cri.go:89] found id: ""
	I0617 12:05:56.189776  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.189788  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:56.189796  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:56.189866  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:56.228017  165698 cri.go:89] found id: ""
	I0617 12:05:56.228047  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.228055  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:56.228063  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:56.228076  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:56.279032  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:56.279079  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:56.294369  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:56.294394  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:56.369507  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:56.369535  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:56.369551  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:56.454797  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:56.454833  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:54.725303  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:56.726247  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:56.726280  165060 pod_ready.go:81] duration metric: took 4m0.008373114s for pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace to be "Ready" ...
	E0617 12:05:56.726291  165060 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0617 12:05:56.726298  165060 pod_ready.go:38] duration metric: took 4m3.608691328s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:05:56.726315  165060 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:05:56.726352  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:56.726411  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:56.784765  165060 cri.go:89] found id: "5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:05:56.784792  165060 cri.go:89] found id: ""
	I0617 12:05:56.784803  165060 logs.go:276] 1 containers: [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3]
	I0617 12:05:56.784865  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:56.791125  165060 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:56.791189  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:56.830691  165060 cri.go:89] found id: "fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:05:56.830715  165060 cri.go:89] found id: ""
	I0617 12:05:56.830725  165060 logs.go:276] 1 containers: [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9]
	I0617 12:05:56.830785  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:56.836214  165060 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:56.836282  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:56.875812  165060 cri.go:89] found id: "c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:05:56.875830  165060 cri.go:89] found id: ""
	I0617 12:05:56.875837  165060 logs.go:276] 1 containers: [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7]
	I0617 12:05:56.875891  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:56.880190  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:56.880247  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:56.925155  165060 cri.go:89] found id: "157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:05:56.925178  165060 cri.go:89] found id: ""
	I0617 12:05:56.925186  165060 logs.go:276] 1 containers: [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d]
	I0617 12:05:56.925231  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:56.930317  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:56.930384  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:56.972479  165060 cri.go:89] found id: "c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:05:56.972503  165060 cri.go:89] found id: ""
	I0617 12:05:56.972512  165060 logs.go:276] 1 containers: [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d]
	I0617 12:05:56.972559  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:56.977635  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:56.977696  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:57.012791  165060 cri.go:89] found id: "2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
	I0617 12:05:57.012816  165060 cri.go:89] found id: ""
	I0617 12:05:57.012826  165060 logs.go:276] 1 containers: [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079]
	I0617 12:05:57.012882  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:57.016856  165060 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:57.016909  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:57.052111  165060 cri.go:89] found id: ""
	I0617 12:05:57.052146  165060 logs.go:276] 0 containers: []
	W0617 12:05:57.052156  165060 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:57.052163  165060 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:05:57.052211  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:05:57.094600  165060 cri.go:89] found id: "02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:05:57.094619  165060 cri.go:89] found id: "7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
	I0617 12:05:57.094622  165060 cri.go:89] found id: ""
	I0617 12:05:57.094630  165060 logs.go:276] 2 containers: [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36]
	I0617 12:05:57.094700  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:57.099250  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:57.104252  165060 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:57.104281  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:57.162000  165060 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:57.162027  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:05:57.285448  165060 logs.go:123] Gathering logs for etcd [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9] ...
	I0617 12:05:57.285490  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:05:57.340781  165060 logs.go:123] Gathering logs for coredns [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7] ...
	I0617 12:05:57.340820  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:05:57.383507  165060 logs.go:123] Gathering logs for kube-scheduler [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d] ...
	I0617 12:05:57.383540  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:05:57.428747  165060 logs.go:123] Gathering logs for kube-proxy [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d] ...
	I0617 12:05:57.428792  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:05:57.468739  165060 logs.go:123] Gathering logs for kube-controller-manager [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079] ...
	I0617 12:05:57.468770  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
	I0617 12:05:57.531317  165060 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:57.531355  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:58.063787  165060 logs.go:123] Gathering logs for container status ...
	I0617 12:05:58.063838  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:58.129384  165060 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:58.129416  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:58.144078  165060 logs.go:123] Gathering logs for kube-apiserver [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3] ...
	I0617 12:05:58.144152  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:05:58.189028  165060 logs.go:123] Gathering logs for storage-provisioner [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92] ...
	I0617 12:05:58.189068  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:05:58.227144  165060 logs.go:123] Gathering logs for storage-provisioner [7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36] ...
	I0617 12:05:58.227178  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
	I0617 12:05:54.838580  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:57.333884  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:59.836198  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:01.837155  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:58.995221  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:59.008481  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:59.008555  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:59.043854  165698 cri.go:89] found id: ""
	I0617 12:05:59.043887  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.043914  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:59.043935  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:59.044003  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:59.081488  165698 cri.go:89] found id: ""
	I0617 12:05:59.081522  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.081530  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:59.081537  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:59.081596  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:59.118193  165698 cri.go:89] found id: ""
	I0617 12:05:59.118222  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.118232  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:59.118240  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:59.118306  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:59.150286  165698 cri.go:89] found id: ""
	I0617 12:05:59.150315  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.150327  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:59.150335  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:59.150381  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:59.191426  165698 cri.go:89] found id: ""
	I0617 12:05:59.191450  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.191485  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:59.191493  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:59.191547  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:59.224933  165698 cri.go:89] found id: ""
	I0617 12:05:59.224965  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.224974  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:59.224998  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:59.225061  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:59.255929  165698 cri.go:89] found id: ""
	I0617 12:05:59.255956  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.255965  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:59.255971  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:59.256025  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:59.293072  165698 cri.go:89] found id: ""
	I0617 12:05:59.293097  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.293104  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:59.293114  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:59.293126  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:59.354240  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:59.354267  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:59.367715  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:59.367744  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:59.446352  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:59.446381  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:59.446396  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:59.528701  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:59.528738  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:02.071616  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:06:02.088050  165698 kubeadm.go:591] duration metric: took 4m3.493743262s to restartPrimaryControlPlane
	W0617 12:06:02.088159  165698 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0617 12:06:02.088194  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0617 12:06:02.552133  165698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:06:02.570136  165698 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:06:02.582299  165698 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:06:02.594775  165698 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:06:02.594809  165698 kubeadm.go:156] found existing configuration files:
	
	I0617 12:06:02.594867  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:06:02.605875  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:06:02.605954  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:06:02.617780  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:06:02.628284  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:06:02.628359  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:06:02.639128  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:06:02.650079  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:06:02.650144  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:06:02.660879  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:06:02.671170  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:06:02.671249  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
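	The grep/rm pairs above implement a stale-kubeconfig check before `kubeadm init` is re-run: each file under /etc/kubernetes is searched for the expected control-plane endpoint and removed when the endpoint (or the file itself) is missing. A hedged sketch of that cleanup loop, reusing the endpoint and file paths shown in the log:

```go
// Illustrative sketch of the stale-config cleanup seen above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the pattern (or the file) is absent.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}
```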
	I0617 12:06:02.682071  165698 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0617 12:06:02.753750  165698 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0617 12:06:02.753913  165698 kubeadm.go:309] [preflight] Running pre-flight checks
	I0617 12:06:02.897384  165698 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0617 12:06:02.897530  165698 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0617 12:06:02.897685  165698 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0617 12:06:03.079116  165698 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0617 12:06:00.764533  165060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:06:00.781564  165060 api_server.go:72] duration metric: took 4m14.875617542s to wait for apiserver process to appear ...
	I0617 12:06:00.781593  165060 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:06:00.781642  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:06:00.781706  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:06:00.817980  165060 cri.go:89] found id: "5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:06:00.818013  165060 cri.go:89] found id: ""
	I0617 12:06:00.818024  165060 logs.go:276] 1 containers: [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3]
	I0617 12:06:00.818080  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:00.822664  165060 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:06:00.822759  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:06:00.861518  165060 cri.go:89] found id: "fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:06:00.861545  165060 cri.go:89] found id: ""
	I0617 12:06:00.861556  165060 logs.go:276] 1 containers: [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9]
	I0617 12:06:00.861614  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:00.865885  165060 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:06:00.865973  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:06:00.900844  165060 cri.go:89] found id: "c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:06:00.900864  165060 cri.go:89] found id: ""
	I0617 12:06:00.900875  165060 logs.go:276] 1 containers: [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7]
	I0617 12:06:00.900930  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:00.905253  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:06:00.905317  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:06:00.938998  165060 cri.go:89] found id: "157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:06:00.939036  165060 cri.go:89] found id: ""
	I0617 12:06:00.939046  165060 logs.go:276] 1 containers: [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d]
	I0617 12:06:00.939114  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:00.943170  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:06:00.943234  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:06:00.982923  165060 cri.go:89] found id: "c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:06:00.982953  165060 cri.go:89] found id: ""
	I0617 12:06:00.982964  165060 logs.go:276] 1 containers: [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d]
	I0617 12:06:00.983034  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:00.987696  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:06:00.987769  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:06:01.033789  165060 cri.go:89] found id: "2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
	I0617 12:06:01.033825  165060 cri.go:89] found id: ""
	I0617 12:06:01.033837  165060 logs.go:276] 1 containers: [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079]
	I0617 12:06:01.033901  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:01.038800  165060 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:06:01.038861  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:06:01.077797  165060 cri.go:89] found id: ""
	I0617 12:06:01.077834  165060 logs.go:276] 0 containers: []
	W0617 12:06:01.077846  165060 logs.go:278] No container was found matching "kindnet"
	I0617 12:06:01.077855  165060 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:06:01.077916  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:06:01.116275  165060 cri.go:89] found id: "02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:06:01.116296  165060 cri.go:89] found id: "7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
	I0617 12:06:01.116303  165060 cri.go:89] found id: ""
	I0617 12:06:01.116311  165060 logs.go:276] 2 containers: [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36]
	I0617 12:06:01.116365  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:01.121088  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:01.125393  165060 logs.go:123] Gathering logs for container status ...
	I0617 12:06:01.125417  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:01.170817  165060 logs.go:123] Gathering logs for kubelet ...
	I0617 12:06:01.170844  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:06:01.223072  165060 logs.go:123] Gathering logs for kube-apiserver [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3] ...
	I0617 12:06:01.223114  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:06:01.269212  165060 logs.go:123] Gathering logs for kube-scheduler [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d] ...
	I0617 12:06:01.269245  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:06:01.313518  165060 logs.go:123] Gathering logs for kube-proxy [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d] ...
	I0617 12:06:01.313557  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:06:01.357935  165060 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:06:01.357965  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:06:01.784493  165060 logs.go:123] Gathering logs for storage-provisioner [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92] ...
	I0617 12:06:01.784542  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:06:01.825824  165060 logs.go:123] Gathering logs for storage-provisioner [7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36] ...
	I0617 12:06:01.825851  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
	I0617 12:06:01.866216  165060 logs.go:123] Gathering logs for dmesg ...
	I0617 12:06:01.866252  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:06:01.881292  165060 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:06:01.881316  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:06:02.000026  165060 logs.go:123] Gathering logs for etcd [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9] ...
	I0617 12:06:02.000063  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:06:02.043491  165060 logs.go:123] Gathering logs for coredns [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7] ...
	I0617 12:06:02.043524  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:06:02.081957  165060 logs.go:123] Gathering logs for kube-controller-manager [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079] ...
	I0617 12:06:02.081984  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
	I0617 12:05:59.835769  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:02.332739  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:03.080903  165698 out.go:204]   - Generating certificates and keys ...
	I0617 12:06:03.081006  165698 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0617 12:06:03.081080  165698 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0617 12:06:03.081168  165698 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0617 12:06:03.081250  165698 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0617 12:06:03.081377  165698 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0617 12:06:03.081457  165698 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0617 12:06:03.082418  165698 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0617 12:06:03.083003  165698 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0617 12:06:03.083917  165698 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0617 12:06:03.084820  165698 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0617 12:06:03.085224  165698 kubeadm.go:309] [certs] Using the existing "sa" key
	I0617 12:06:03.085307  165698 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0617 12:06:03.203342  165698 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0617 12:06:03.430428  165698 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0617 12:06:03.570422  165698 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0617 12:06:03.772092  165698 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0617 12:06:03.793105  165698 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0617 12:06:03.793206  165698 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0617 12:06:03.793261  165698 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0617 12:06:03.919738  165698 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0617 12:06:04.333408  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:06.333963  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:03.921593  165698 out.go:204]   - Booting up control plane ...
	I0617 12:06:03.921708  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0617 12:06:03.928168  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0617 12:06:03.928279  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0617 12:06:03.937197  165698 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0617 12:06:03.939967  165698 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0617 12:06:04.644102  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:06:04.648733  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 200:
	ok
	I0617 12:06:04.649862  165060 api_server.go:141] control plane version: v1.30.1
	I0617 12:06:04.649894  165060 api_server.go:131] duration metric: took 3.86829173s to wait for apiserver health ...
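	At this point process 165060 has moved from watching pods to polling the apiserver's /healthz endpoint until it answers 200 "ok". A small illustrative sketch of such a poll, assuming the cluster address from the log (192.168.72.199:8443) and skipping TLS verification purely for brevity:

```go
// Illustrative healthz poll; a real client would trust the cluster CA
// instead of disabling certificate verification.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.72.199:8443/healthz"
	for i := 0; i < 60; i++ {
		resp, err := client.Get(url)
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("apiserver healthz returned 200: ok")
			return
		}
		if resp != nil {
			resp.Body.Close()
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}
```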
	I0617 12:06:04.649905  165060 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:06:04.649936  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:06:04.649997  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:06:04.688904  165060 cri.go:89] found id: "5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:06:04.688923  165060 cri.go:89] found id: ""
	I0617 12:06:04.688931  165060 logs.go:276] 1 containers: [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3]
	I0617 12:06:04.688975  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.695049  165060 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:06:04.695110  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:06:04.730292  165060 cri.go:89] found id: "fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:06:04.730314  165060 cri.go:89] found id: ""
	I0617 12:06:04.730322  165060 logs.go:276] 1 containers: [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9]
	I0617 12:06:04.730373  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.734432  165060 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:06:04.734486  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:06:04.771401  165060 cri.go:89] found id: "c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:06:04.771418  165060 cri.go:89] found id: ""
	I0617 12:06:04.771426  165060 logs.go:276] 1 containers: [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7]
	I0617 12:06:04.771496  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.775822  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:06:04.775876  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:06:04.816111  165060 cri.go:89] found id: "157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:06:04.816131  165060 cri.go:89] found id: ""
	I0617 12:06:04.816139  165060 logs.go:276] 1 containers: [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d]
	I0617 12:06:04.816185  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.820614  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:06:04.820672  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:06:04.865387  165060 cri.go:89] found id: "c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:06:04.865411  165060 cri.go:89] found id: ""
	I0617 12:06:04.865421  165060 logs.go:276] 1 containers: [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d]
	I0617 12:06:04.865479  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.870192  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:06:04.870263  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:06:04.912698  165060 cri.go:89] found id: "2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
	I0617 12:06:04.912723  165060 cri.go:89] found id: ""
	I0617 12:06:04.912734  165060 logs.go:276] 1 containers: [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079]
	I0617 12:06:04.912796  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.917484  165060 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:06:04.917563  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:06:04.954076  165060 cri.go:89] found id: ""
	I0617 12:06:04.954109  165060 logs.go:276] 0 containers: []
	W0617 12:06:04.954120  165060 logs.go:278] No container was found matching "kindnet"
	I0617 12:06:04.954129  165060 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:06:04.954196  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:06:04.995832  165060 cri.go:89] found id: "02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:06:04.995858  165060 cri.go:89] found id: "7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
	I0617 12:06:04.995862  165060 cri.go:89] found id: ""
	I0617 12:06:04.995869  165060 logs.go:276] 2 containers: [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36]
	I0617 12:06:04.995928  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:05.000741  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:05.004995  165060 logs.go:123] Gathering logs for storage-provisioner [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92] ...
	I0617 12:06:05.005026  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:06:05.040651  165060 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:06:05.040692  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:06:05.461644  165060 logs.go:123] Gathering logs for container status ...
	I0617 12:06:05.461685  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:05.508706  165060 logs.go:123] Gathering logs for kubelet ...
	I0617 12:06:05.508733  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:06:05.562418  165060 logs.go:123] Gathering logs for kube-apiserver [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3] ...
	I0617 12:06:05.562461  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:06:05.606489  165060 logs.go:123] Gathering logs for etcd [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9] ...
	I0617 12:06:05.606527  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:06:05.651719  165060 logs.go:123] Gathering logs for coredns [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7] ...
	I0617 12:06:05.651753  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:06:05.688736  165060 logs.go:123] Gathering logs for kube-proxy [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d] ...
	I0617 12:06:05.688772  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:06:05.730649  165060 logs.go:123] Gathering logs for dmesg ...
	I0617 12:06:05.730679  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:06:05.745482  165060 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:06:05.745511  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:06:05.849002  165060 logs.go:123] Gathering logs for kube-scheduler [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d] ...
	I0617 12:06:05.849025  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:06:05.890802  165060 logs.go:123] Gathering logs for kube-controller-manager [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079] ...
	I0617 12:06:05.890836  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
	I0617 12:06:05.946444  165060 logs.go:123] Gathering logs for storage-provisioner [7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36] ...
	I0617 12:06:05.946474  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
	I0617 12:06:04.332977  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:06.834683  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:08.489561  165060 system_pods.go:59] 8 kube-system pods found
	I0617 12:06:08.489593  165060 system_pods.go:61] "coredns-7db6d8ff4d-9bbjg" [1ba0eee5-436e-4c83-b5ce-3c907d66b641] Running
	I0617 12:06:08.489597  165060 system_pods.go:61] "etcd-embed-certs-136195" [6dc81a80-c56b-4517-af82-c450cf9578f5] Running
	I0617 12:06:08.489601  165060 system_pods.go:61] "kube-apiserver-embed-certs-136195" [bd61a715-2471-4dca-aa48-a157531ebd6b] Running
	I0617 12:06:08.489605  165060 system_pods.go:61] "kube-controller-manager-embed-certs-136195" [194db4b0-75c2-4905-8e4d-813185497b51] Running
	I0617 12:06:08.489607  165060 system_pods.go:61] "kube-proxy-25d5n" [52b6d09a-899f-40c4-b1f3-7842ae755165] Running
	I0617 12:06:08.489610  165060 system_pods.go:61] "kube-scheduler-embed-certs-136195" [b04d3798-f465-4f82-9ec7-777ea62d5b94] Running
	I0617 12:06:08.489616  165060 system_pods.go:61] "metrics-server-569cc877fc-dmhfs" [31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:06:08.489620  165060 system_pods.go:61] "storage-provisioner" [4b04a38a-5006-4496-a24d-0940029193de] Running
	I0617 12:06:08.489626  165060 system_pods.go:74] duration metric: took 3.839715717s to wait for pod list to return data ...
	I0617 12:06:08.489633  165060 default_sa.go:34] waiting for default service account to be created ...
	I0617 12:06:08.491984  165060 default_sa.go:45] found service account: "default"
	I0617 12:06:08.492007  165060 default_sa.go:55] duration metric: took 2.365306ms for default service account to be created ...
	I0617 12:06:08.492014  165060 system_pods.go:116] waiting for k8s-apps to be running ...
	I0617 12:06:08.497834  165060 system_pods.go:86] 8 kube-system pods found
	I0617 12:06:08.497865  165060 system_pods.go:89] "coredns-7db6d8ff4d-9bbjg" [1ba0eee5-436e-4c83-b5ce-3c907d66b641] Running
	I0617 12:06:08.497873  165060 system_pods.go:89] "etcd-embed-certs-136195" [6dc81a80-c56b-4517-af82-c450cf9578f5] Running
	I0617 12:06:08.497880  165060 system_pods.go:89] "kube-apiserver-embed-certs-136195" [bd61a715-2471-4dca-aa48-a157531ebd6b] Running
	I0617 12:06:08.497887  165060 system_pods.go:89] "kube-controller-manager-embed-certs-136195" [194db4b0-75c2-4905-8e4d-813185497b51] Running
	I0617 12:06:08.497891  165060 system_pods.go:89] "kube-proxy-25d5n" [52b6d09a-899f-40c4-b1f3-7842ae755165] Running
	I0617 12:06:08.497899  165060 system_pods.go:89] "kube-scheduler-embed-certs-136195" [b04d3798-f465-4f82-9ec7-777ea62d5b94] Running
	I0617 12:06:08.497905  165060 system_pods.go:89] "metrics-server-569cc877fc-dmhfs" [31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:06:08.497914  165060 system_pods.go:89] "storage-provisioner" [4b04a38a-5006-4496-a24d-0940029193de] Running
	I0617 12:06:08.497921  165060 system_pods.go:126] duration metric: took 5.901391ms to wait for k8s-apps to be running ...
	I0617 12:06:08.497927  165060 system_svc.go:44] waiting for kubelet service to be running ....
	I0617 12:06:08.497970  165060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:06:08.520136  165060 system_svc.go:56] duration metric: took 22.203601ms WaitForService to wait for kubelet
	I0617 12:06:08.520159  165060 kubeadm.go:576] duration metric: took 4m22.614222011s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 12:06:08.520178  165060 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:06:08.522704  165060 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:06:08.522741  165060 node_conditions.go:123] node cpu capacity is 2
	I0617 12:06:08.522758  165060 node_conditions.go:105] duration metric: took 2.57391ms to run NodePressure ...
	I0617 12:06:08.522773  165060 start.go:240] waiting for startup goroutines ...
	I0617 12:06:08.522787  165060 start.go:245] waiting for cluster config update ...
	I0617 12:06:08.522803  165060 start.go:254] writing updated cluster config ...
	I0617 12:06:08.523139  165060 ssh_runner.go:195] Run: rm -f paused
	I0617 12:06:08.577942  165060 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0617 12:06:08.579946  165060 out.go:177] * Done! kubectl is now configured to use "embed-certs-136195" cluster and "default" namespace by default
	I0617 12:06:08.334463  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:10.335642  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:09.331628  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:11.332586  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:13.332703  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:12.834827  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:15.334721  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:15.333004  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:17.834357  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:17.833756  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:19.835364  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:22.333742  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:20.332127  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:22.832111  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:24.333945  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:26.335021  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:25.332366  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:27.835364  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:28.833758  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:31.334155  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:29.835500  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:32.332236  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:33.833599  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:35.834190  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:34.831122  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:36.833202  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:38.334352  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:40.335399  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:40.335423  166103 pod_ready.go:81] duration metric: took 4m0.008367222s for pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace to be "Ready" ...
	E0617 12:06:40.335433  166103 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0617 12:06:40.335441  166103 pod_ready.go:38] duration metric: took 4m7.419505963s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:06:40.335475  166103 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:06:40.335505  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:06:40.335556  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:06:40.400354  166103 cri.go:89] found id: "5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:40.400384  166103 cri.go:89] found id: ""
	I0617 12:06:40.400394  166103 logs.go:276] 1 containers: [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b]
	I0617 12:06:40.400453  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.405124  166103 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:06:40.405186  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:06:40.440583  166103 cri.go:89] found id: "8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:40.440610  166103 cri.go:89] found id: ""
	I0617 12:06:40.440619  166103 logs.go:276] 1 containers: [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862]
	I0617 12:06:40.440665  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.445086  166103 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:06:40.445141  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:06:40.489676  166103 cri.go:89] found id: "26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:40.489698  166103 cri.go:89] found id: ""
	I0617 12:06:40.489706  166103 logs.go:276] 1 containers: [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323]
	I0617 12:06:40.489752  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.494402  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:06:40.494514  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:06:40.535486  166103 cri.go:89] found id: "2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
	I0617 12:06:40.535517  166103 cri.go:89] found id: ""
	I0617 12:06:40.535527  166103 logs.go:276] 1 containers: [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b]
	I0617 12:06:40.535589  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.543265  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:06:40.543330  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:06:40.579564  166103 cri.go:89] found id: "63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:40.579588  166103 cri.go:89] found id: ""
	I0617 12:06:40.579598  166103 logs.go:276] 1 containers: [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da]
	I0617 12:06:40.579658  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.583865  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:06:40.583928  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:06:40.642408  166103 cri.go:89] found id: "36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:40.642435  166103 cri.go:89] found id: ""
	I0617 12:06:40.642445  166103 logs.go:276] 1 containers: [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685]
	I0617 12:06:40.642509  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.647892  166103 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:06:40.647959  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:06:40.698654  166103 cri.go:89] found id: ""
	I0617 12:06:40.698686  166103 logs.go:276] 0 containers: []
	W0617 12:06:40.698696  166103 logs.go:278] No container was found matching "kindnet"
	I0617 12:06:40.698704  166103 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:06:40.698768  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:06:40.749641  166103 cri.go:89] found id: "adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:40.749663  166103 cri.go:89] found id: "e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:40.749668  166103 cri.go:89] found id: ""
	I0617 12:06:40.749678  166103 logs.go:276] 2 containers: [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc]
	I0617 12:06:40.749742  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.754926  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.760126  166103 logs.go:123] Gathering logs for container status ...
	I0617 12:06:40.760152  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:40.804119  166103 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:06:40.804159  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:06:40.942459  166103 logs.go:123] Gathering logs for etcd [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862] ...
	I0617 12:06:40.942495  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:40.994721  166103 logs.go:123] Gathering logs for coredns [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323] ...
	I0617 12:06:40.994761  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:41.037005  166103 logs.go:123] Gathering logs for kube-scheduler [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b] ...
	I0617 12:06:41.037040  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
	I0617 12:06:41.080715  166103 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:06:41.080751  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:06:41.606478  166103 logs.go:123] Gathering logs for storage-provisioner [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195] ...
	I0617 12:06:41.606516  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:41.643963  166103 logs.go:123] Gathering logs for storage-provisioner [e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc] ...
	I0617 12:06:41.644003  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:41.683405  166103 logs.go:123] Gathering logs for kubelet ...
	I0617 12:06:41.683443  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:06:41.737365  166103 logs.go:123] Gathering logs for dmesg ...
	I0617 12:06:41.737400  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:06:41.752552  166103 logs.go:123] Gathering logs for kube-apiserver [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b] ...
	I0617 12:06:41.752582  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:41.804447  166103 logs.go:123] Gathering logs for kube-proxy [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da] ...
	I0617 12:06:41.804480  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:41.847266  166103 logs.go:123] Gathering logs for kube-controller-manager [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685] ...
	I0617 12:06:41.847302  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:39.333111  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:41.836327  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:44.408776  166103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:06:44.427500  166103 api_server.go:72] duration metric: took 4m19.25316479s to wait for apiserver process to appear ...
	I0617 12:06:44.427531  166103 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:06:44.427577  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:06:44.427634  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:06:44.466379  166103 cri.go:89] found id: "5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:44.466408  166103 cri.go:89] found id: ""
	I0617 12:06:44.466418  166103 logs.go:276] 1 containers: [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b]
	I0617 12:06:44.466481  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.470832  166103 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:06:44.470901  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:06:44.511689  166103 cri.go:89] found id: "8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:44.511713  166103 cri.go:89] found id: ""
	I0617 12:06:44.511722  166103 logs.go:276] 1 containers: [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862]
	I0617 12:06:44.511769  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.516221  166103 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:06:44.516303  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:06:44.560612  166103 cri.go:89] found id: "26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:44.560634  166103 cri.go:89] found id: ""
	I0617 12:06:44.560642  166103 logs.go:276] 1 containers: [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323]
	I0617 12:06:44.560695  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.564998  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:06:44.565068  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:06:44.600133  166103 cri.go:89] found id: "2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
	I0617 12:06:44.600155  166103 cri.go:89] found id: ""
	I0617 12:06:44.600164  166103 logs.go:276] 1 containers: [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b]
	I0617 12:06:44.600220  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.605431  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:06:44.605494  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:06:44.648647  166103 cri.go:89] found id: "63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:44.648678  166103 cri.go:89] found id: ""
	I0617 12:06:44.648688  166103 logs.go:276] 1 containers: [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da]
	I0617 12:06:44.648758  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.653226  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:06:44.653307  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:06:44.701484  166103 cri.go:89] found id: "36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:44.701508  166103 cri.go:89] found id: ""
	I0617 12:06:44.701516  166103 logs.go:276] 1 containers: [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685]
	I0617 12:06:44.701572  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.707827  166103 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:06:44.707890  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:06:44.752362  166103 cri.go:89] found id: ""
	I0617 12:06:44.752391  166103 logs.go:276] 0 containers: []
	W0617 12:06:44.752402  166103 logs.go:278] No container was found matching "kindnet"
	I0617 12:06:44.752410  166103 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:06:44.752473  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:06:44.798926  166103 cri.go:89] found id: "adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:44.798955  166103 cri.go:89] found id: "e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:44.798961  166103 cri.go:89] found id: ""
	I0617 12:06:44.798970  166103 logs.go:276] 2 containers: [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc]
	I0617 12:06:44.799038  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.804702  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.810673  166103 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:06:44.810702  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:06:44.939596  166103 logs.go:123] Gathering logs for etcd [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862] ...
	I0617 12:06:44.939627  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:44.987902  166103 logs.go:123] Gathering logs for coredns [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323] ...
	I0617 12:06:44.987936  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:45.023931  166103 logs.go:123] Gathering logs for kube-proxy [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da] ...
	I0617 12:06:45.023962  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:45.060432  166103 logs.go:123] Gathering logs for storage-provisioner [e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc] ...
	I0617 12:06:45.060468  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:45.095643  166103 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:06:45.095679  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:06:45.553973  166103 logs.go:123] Gathering logs for kubelet ...
	I0617 12:06:45.554018  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:06:45.611997  166103 logs.go:123] Gathering logs for dmesg ...
	I0617 12:06:45.612036  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:06:45.626973  166103 logs.go:123] Gathering logs for container status ...
	I0617 12:06:45.627002  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:45.671119  166103 logs.go:123] Gathering logs for kube-controller-manager [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685] ...
	I0617 12:06:45.671151  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:45.728097  166103 logs.go:123] Gathering logs for storage-provisioner [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195] ...
	I0617 12:06:45.728133  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:45.765586  166103 logs.go:123] Gathering logs for kube-apiserver [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b] ...
	I0617 12:06:45.765615  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:45.818347  166103 logs.go:123] Gathering logs for kube-scheduler [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b] ...
	I0617 12:06:45.818387  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
	I0617 12:06:43.941225  165698 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0617 12:06:43.941341  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:06:43.941612  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:06:44.331481  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:46.831820  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:48.362826  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:06:48.366936  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 200:
	ok
	I0617 12:06:48.367973  166103 api_server.go:141] control plane version: v1.30.1
	I0617 12:06:48.367992  166103 api_server.go:131] duration metric: took 3.940452539s to wait for apiserver health ...
	I0617 12:06:48.367999  166103 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:06:48.368021  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:06:48.368066  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:06:48.404797  166103 cri.go:89] found id: "5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:48.404819  166103 cri.go:89] found id: ""
	I0617 12:06:48.404828  166103 logs.go:276] 1 containers: [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b]
	I0617 12:06:48.404887  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.409105  166103 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:06:48.409162  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:06:48.456233  166103 cri.go:89] found id: "8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:48.456266  166103 cri.go:89] found id: ""
	I0617 12:06:48.456277  166103 logs.go:276] 1 containers: [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862]
	I0617 12:06:48.456336  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.460550  166103 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:06:48.460625  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:06:48.498447  166103 cri.go:89] found id: "26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:48.498472  166103 cri.go:89] found id: ""
	I0617 12:06:48.498481  166103 logs.go:276] 1 containers: [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323]
	I0617 12:06:48.498564  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.503826  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:06:48.503906  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:06:48.554405  166103 cri.go:89] found id: "2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
	I0617 12:06:48.554435  166103 cri.go:89] found id: ""
	I0617 12:06:48.554446  166103 logs.go:276] 1 containers: [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b]
	I0617 12:06:48.554504  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.559175  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:06:48.559240  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:06:48.596764  166103 cri.go:89] found id: "63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:48.596791  166103 cri.go:89] found id: ""
	I0617 12:06:48.596801  166103 logs.go:276] 1 containers: [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da]
	I0617 12:06:48.596863  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.601197  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:06:48.601260  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:06:48.654027  166103 cri.go:89] found id: "36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:48.654053  166103 cri.go:89] found id: ""
	I0617 12:06:48.654061  166103 logs.go:276] 1 containers: [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685]
	I0617 12:06:48.654113  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.659492  166103 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:06:48.659557  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:06:48.706749  166103 cri.go:89] found id: ""
	I0617 12:06:48.706777  166103 logs.go:276] 0 containers: []
	W0617 12:06:48.706786  166103 logs.go:278] No container was found matching "kindnet"
	I0617 12:06:48.706794  166103 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:06:48.706859  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:06:48.750556  166103 cri.go:89] found id: "adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:48.750588  166103 cri.go:89] found id: "e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:48.750594  166103 cri.go:89] found id: ""
	I0617 12:06:48.750607  166103 logs.go:276] 2 containers: [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc]
	I0617 12:06:48.750671  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.755368  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.760128  166103 logs.go:123] Gathering logs for kube-apiserver [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b] ...
	I0617 12:06:48.760154  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:48.802187  166103 logs.go:123] Gathering logs for etcd [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862] ...
	I0617 12:06:48.802224  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:48.861041  166103 logs.go:123] Gathering logs for kube-controller-manager [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685] ...
	I0617 12:06:48.861076  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:48.917864  166103 logs.go:123] Gathering logs for storage-provisioner [e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc] ...
	I0617 12:06:48.917902  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:48.963069  166103 logs.go:123] Gathering logs for container status ...
	I0617 12:06:48.963099  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:49.012109  166103 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:06:49.012149  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:06:49.119880  166103 logs.go:123] Gathering logs for dmesg ...
	I0617 12:06:49.119915  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:06:49.136461  166103 logs.go:123] Gathering logs for coredns [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323] ...
	I0617 12:06:49.136497  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:49.177339  166103 logs.go:123] Gathering logs for kube-scheduler [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b] ...
	I0617 12:06:49.177377  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
	I0617 12:06:49.219101  166103 logs.go:123] Gathering logs for kube-proxy [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da] ...
	I0617 12:06:49.219135  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:49.256646  166103 logs.go:123] Gathering logs for storage-provisioner [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195] ...
	I0617 12:06:49.256687  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:49.302208  166103 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:06:49.302243  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:06:49.653713  166103 logs.go:123] Gathering logs for kubelet ...
	I0617 12:06:49.653758  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:06:52.217069  166103 system_pods.go:59] 8 kube-system pods found
	I0617 12:06:52.217102  166103 system_pods.go:61] "coredns-7db6d8ff4d-mnw24" [1e6c4ff3-f0dc-43da-abd8-baaed7dca40c] Running
	I0617 12:06:52.217107  166103 system_pods.go:61] "etcd-default-k8s-diff-port-991309" [820a4f27-cf83-4edb-a2ea-edba6673d851] Running
	I0617 12:06:52.217111  166103 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-991309" [26e6c19d-6f70-4924-83f5-563c8508c9e3] Running
	I0617 12:06:52.217115  166103 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-991309" [01e7c468-98a6-48f3-a158-59e97fa8279c] Running
	I0617 12:06:52.217119  166103 system_pods.go:61] "kube-proxy-jn5kp" [d6935148-7ee8-4655-8327-9f1ee4c933de] Running
	I0617 12:06:52.217122  166103 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-991309" [53ecd22c-05cf-48a5-b7e5-925392085f7a] Running
	I0617 12:06:52.217128  166103 system_pods.go:61] "metrics-server-569cc877fc-n2svp" [5b637d97-3183-4324-98cf-dd69a2968578] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:06:52.217134  166103 system_pods.go:61] "storage-provisioner" [92b20aec-29c2-4256-86be-7f58f66585dd] Running
	I0617 12:06:52.217145  166103 system_pods.go:74] duration metric: took 3.849140024s to wait for pod list to return data ...
	I0617 12:06:52.217152  166103 default_sa.go:34] waiting for default service account to be created ...
	I0617 12:06:52.219308  166103 default_sa.go:45] found service account: "default"
	I0617 12:06:52.219330  166103 default_sa.go:55] duration metric: took 2.172323ms for default service account to be created ...
	I0617 12:06:52.219339  166103 system_pods.go:116] waiting for k8s-apps to be running ...
	I0617 12:06:52.224239  166103 system_pods.go:86] 8 kube-system pods found
	I0617 12:06:52.224265  166103 system_pods.go:89] "coredns-7db6d8ff4d-mnw24" [1e6c4ff3-f0dc-43da-abd8-baaed7dca40c] Running
	I0617 12:06:52.224270  166103 system_pods.go:89] "etcd-default-k8s-diff-port-991309" [820a4f27-cf83-4edb-a2ea-edba6673d851] Running
	I0617 12:06:52.224276  166103 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-991309" [26e6c19d-6f70-4924-83f5-563c8508c9e3] Running
	I0617 12:06:52.224280  166103 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-991309" [01e7c468-98a6-48f3-a158-59e97fa8279c] Running
	I0617 12:06:52.224284  166103 system_pods.go:89] "kube-proxy-jn5kp" [d6935148-7ee8-4655-8327-9f1ee4c933de] Running
	I0617 12:06:52.224288  166103 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-991309" [53ecd22c-05cf-48a5-b7e5-925392085f7a] Running
	I0617 12:06:52.224299  166103 system_pods.go:89] "metrics-server-569cc877fc-n2svp" [5b637d97-3183-4324-98cf-dd69a2968578] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:06:52.224305  166103 system_pods.go:89] "storage-provisioner" [92b20aec-29c2-4256-86be-7f58f66585dd] Running
	I0617 12:06:52.224319  166103 system_pods.go:126] duration metric: took 4.973603ms to wait for k8s-apps to be running ...
	I0617 12:06:52.224332  166103 system_svc.go:44] waiting for kubelet service to be running ....
	I0617 12:06:52.224380  166103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:06:52.241121  166103 system_svc.go:56] duration metric: took 16.776061ms WaitForService to wait for kubelet
	I0617 12:06:52.241156  166103 kubeadm.go:576] duration metric: took 4m27.066827271s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 12:06:52.241181  166103 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:06:52.245359  166103 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:06:52.245407  166103 node_conditions.go:123] node cpu capacity is 2
	I0617 12:06:52.245423  166103 node_conditions.go:105] duration metric: took 4.235898ms to run NodePressure ...
	I0617 12:06:52.245440  166103 start.go:240] waiting for startup goroutines ...
	I0617 12:06:52.245449  166103 start.go:245] waiting for cluster config update ...
	I0617 12:06:52.245462  166103 start.go:254] writing updated cluster config ...
	I0617 12:06:52.245969  166103 ssh_runner.go:195] Run: rm -f paused
	I0617 12:06:52.299326  166103 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0617 12:06:52.301413  166103 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-991309" cluster and "default" namespace by default
	I0617 12:06:48.942159  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:06:48.942434  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:06:48.835113  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:51.331395  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:53.331551  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:55.332455  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:57.835143  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:58.942977  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:06:58.943290  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:07:00.331823  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:07:02.332214  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:07:04.831284  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:07:06.832082  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:07:07.325414  164809 pod_ready.go:81] duration metric: took 4m0.000322555s for pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace to be "Ready" ...
	E0617 12:07:07.325446  164809 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0617 12:07:07.325464  164809 pod_ready.go:38] duration metric: took 4m12.035995337s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:07:07.325494  164809 kubeadm.go:591] duration metric: took 4m19.041266463s to restartPrimaryControlPlane
	W0617 12:07:07.325556  164809 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0617 12:07:07.325587  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0617 12:07:18.944149  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:07:18.944368  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:07:38.980378  164809 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.654762508s)
	I0617 12:07:38.980451  164809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:07:38.997845  164809 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:07:39.009456  164809 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:07:39.020407  164809 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:07:39.020430  164809 kubeadm.go:156] found existing configuration files:
	
	I0617 12:07:39.020472  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:07:39.030323  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:07:39.030376  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:07:39.040298  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:07:39.049715  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:07:39.049757  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:07:39.060493  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:07:39.069921  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:07:39.069973  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:07:39.080049  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:07:39.089524  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:07:39.089569  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 12:07:39.099082  164809 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0617 12:07:39.154963  164809 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0617 12:07:39.155083  164809 kubeadm.go:309] [preflight] Running pre-flight checks
	I0617 12:07:39.286616  164809 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0617 12:07:39.286809  164809 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0617 12:07:39.286977  164809 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0617 12:07:39.487542  164809 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0617 12:07:39.489554  164809 out.go:204]   - Generating certificates and keys ...
	I0617 12:07:39.489665  164809 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0617 12:07:39.489732  164809 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0617 12:07:39.489855  164809 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0617 12:07:39.489969  164809 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0617 12:07:39.490088  164809 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0617 12:07:39.490187  164809 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0617 12:07:39.490274  164809 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0617 12:07:39.490386  164809 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0617 12:07:39.490508  164809 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0617 12:07:39.490643  164809 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0617 12:07:39.490750  164809 kubeadm.go:309] [certs] Using the existing "sa" key
	I0617 12:07:39.490849  164809 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0617 12:07:39.565788  164809 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0617 12:07:39.643443  164809 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0617 12:07:39.765615  164809 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0617 12:07:39.851182  164809 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0617 12:07:40.041938  164809 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0617 12:07:40.042576  164809 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0617 12:07:40.045112  164809 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0617 12:07:40.047144  164809 out.go:204]   - Booting up control plane ...
	I0617 12:07:40.047265  164809 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0617 12:07:40.047374  164809 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0617 12:07:40.047995  164809 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0617 12:07:40.070163  164809 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0617 12:07:40.071308  164809 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0617 12:07:40.071415  164809 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0617 12:07:40.204578  164809 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0617 12:07:40.204698  164809 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0617 12:07:41.210782  164809 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.0065421s
	I0617 12:07:41.210902  164809 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0617 12:07:45.713194  164809 kubeadm.go:309] [api-check] The API server is healthy after 4.501871798s
	I0617 12:07:45.735311  164809 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0617 12:07:45.760405  164809 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0617 12:07:45.795429  164809 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0617 12:07:45.795770  164809 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-152830 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0617 12:07:45.816446  164809 kubeadm.go:309] [bootstrap-token] Using token: ryfqxd.olkegn8a1unpvnbq
	I0617 12:07:45.817715  164809 out.go:204]   - Configuring RBAC rules ...
	I0617 12:07:45.817890  164809 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0617 12:07:45.826422  164809 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0617 12:07:45.852291  164809 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0617 12:07:45.867538  164809 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0617 12:07:45.880697  164809 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0617 12:07:45.887707  164809 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0617 12:07:46.120211  164809 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0617 12:07:46.593168  164809 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0617 12:07:47.119377  164809 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0617 12:07:47.120840  164809 kubeadm.go:309] 
	I0617 12:07:47.120933  164809 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0617 12:07:47.120947  164809 kubeadm.go:309] 
	I0617 12:07:47.121057  164809 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0617 12:07:47.121069  164809 kubeadm.go:309] 
	I0617 12:07:47.121123  164809 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0617 12:07:47.124361  164809 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0617 12:07:47.124443  164809 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0617 12:07:47.124464  164809 kubeadm.go:309] 
	I0617 12:07:47.124538  164809 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0617 12:07:47.124550  164809 kubeadm.go:309] 
	I0617 12:07:47.124607  164809 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0617 12:07:47.124617  164809 kubeadm.go:309] 
	I0617 12:07:47.124724  164809 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0617 12:07:47.124838  164809 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0617 12:07:47.124938  164809 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0617 12:07:47.124949  164809 kubeadm.go:309] 
	I0617 12:07:47.125085  164809 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0617 12:07:47.125191  164809 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0617 12:07:47.125203  164809 kubeadm.go:309] 
	I0617 12:07:47.125343  164809 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ryfqxd.olkegn8a1unpvnbq \
	I0617 12:07:47.125479  164809 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a750c130b3df91ed6d57229f5a5d5a2ee0acd56a757f499599f368bc07dbf207 \
	I0617 12:07:47.125510  164809 kubeadm.go:309] 	--control-plane 
	I0617 12:07:47.125518  164809 kubeadm.go:309] 
	I0617 12:07:47.125616  164809 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0617 12:07:47.125627  164809 kubeadm.go:309] 
	I0617 12:07:47.125724  164809 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ryfqxd.olkegn8a1unpvnbq \
	I0617 12:07:47.125852  164809 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a750c130b3df91ed6d57229f5a5d5a2ee0acd56a757f499599f368bc07dbf207 
	I0617 12:07:47.126915  164809 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0617 12:07:47.126966  164809 cni.go:84] Creating CNI manager for ""
	I0617 12:07:47.126983  164809 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:07:47.128899  164809 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0617 12:07:47.130229  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0617 12:07:47.142301  164809 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0617 12:07:47.163380  164809 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0617 12:07:47.163500  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:47.163503  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-152830 minikube.k8s.io/updated_at=2024_06_17T12_07_47_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6 minikube.k8s.io/name=no-preload-152830 minikube.k8s.io/primary=true
	I0617 12:07:47.375089  164809 ops.go:34] apiserver oom_adj: -16
	I0617 12:07:47.375266  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:47.875477  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:48.375626  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:48.876185  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:49.375621  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:49.875597  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:50.376188  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:50.875983  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:51.375537  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:51.876321  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:52.375920  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:52.876348  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:53.375623  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:53.875369  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:54.375747  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:54.875581  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:55.376244  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:55.875866  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:56.376285  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:56.876228  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:57.375990  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:57.875392  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:58.946943  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:07:58.947220  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:07:58.947233  165698 kubeadm.go:309] 
	I0617 12:07:58.947316  165698 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0617 12:07:58.947393  165698 kubeadm.go:309] 		timed out waiting for the condition
	I0617 12:07:58.947406  165698 kubeadm.go:309] 
	I0617 12:07:58.947449  165698 kubeadm.go:309] 	This error is likely caused by:
	I0617 12:07:58.947528  165698 kubeadm.go:309] 		- The kubelet is not running
	I0617 12:07:58.947690  165698 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0617 12:07:58.947699  165698 kubeadm.go:309] 
	I0617 12:07:58.947860  165698 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0617 12:07:58.947924  165698 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0617 12:07:58.947976  165698 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0617 12:07:58.947991  165698 kubeadm.go:309] 
	I0617 12:07:58.948132  165698 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0617 12:07:58.948247  165698 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0617 12:07:58.948260  165698 kubeadm.go:309] 
	I0617 12:07:58.948406  165698 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0617 12:07:58.948539  165698 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0617 12:07:58.948639  165698 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0617 12:07:58.948740  165698 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0617 12:07:58.948750  165698 kubeadm.go:309] 
	I0617 12:07:58.949270  165698 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0617 12:07:58.949403  165698 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0617 12:07:58.949508  165698 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0617 12:07:58.949630  165698 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
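The repeated [kubelet-check] failures above boil down to one probe: an HTTP GET against the kubelet's health endpoint on localhost:10248, which is refused because nothing is listening. A minimal Go sketch of the equivalent check (this is not kubeadm's own code):

```go
// Probe the kubelet healthz endpoint the same way the [kubelet-check]
// lines describe: GET http://localhost:10248/healthz.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		// "connection refused" here matches the log output: the kubelet
		// is not running (or not yet serving) on port 10248.
		fmt.Println("kubelet not healthy:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("kubelet healthz status:", resp.Status)
}
```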
	
	I0617 12:07:58.949694  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0617 12:07:59.418622  165698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:07:59.435367  165698 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:07:59.449365  165698 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:07:59.449384  165698 kubeadm.go:156] found existing configuration files:
	
	I0617 12:07:59.449430  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:07:59.461411  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:07:59.461478  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:07:59.471262  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:07:59.480591  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:07:59.480640  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:07:59.490152  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:07:59.499248  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:07:59.499300  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:07:59.508891  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:07:59.518114  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:07:59.518152  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
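The cleanup pass above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that is missing or does not contain it, so the retried kubeadm init can regenerate them. A rough local sketch of that logic (minikube actually drives it over SSH with grep and rm):

```go
// Illustrative stale-kubeconfig cleanup, mirroring the grep/rm sequence above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: treat as stale and remove,
			// ignoring the error if the file was never there.
			_ = os.Remove(f)
			fmt.Println("removed stale config:", f)
		}
	}
}
```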
	I0617 12:07:59.528190  165698 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0617 12:07:59.592831  165698 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0617 12:07:59.592949  165698 kubeadm.go:309] [preflight] Running pre-flight checks
	I0617 12:07:59.752802  165698 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0617 12:07:59.752947  165698 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0617 12:07:59.753079  165698 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0617 12:07:59.984221  165698 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0617 12:07:58.375522  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:58.876221  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:59.375941  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:59.875924  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:08:00.063788  164809 kubeadm.go:1107] duration metric: took 12.900376954s to wait for elevateKubeSystemPrivileges
	W0617 12:08:00.063860  164809 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0617 12:08:00.063871  164809 kubeadm.go:393] duration metric: took 5m11.831587226s to StartCluster
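The block of `kubectl get sa default` commands above is the elevateKubeSystemPrivileges wait summarized here: minikube retries the command roughly every 500ms until the default service account exists, after which the minikube-rbac clusterrolebinding can be applied. A minimal sketch of that polling pattern (not minikube's actual implementation; paths mirror the log but are illustrative):

```go
// Poll `kubectl get sa default` until it succeeds or the context expires.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultSA(ctx context.Context, kubectl, kubeconfig string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		// The command exits non-zero until the controller manager has
		// created the default service account in the default namespace.
		cmd := exec.CommandContext(ctx, kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for default service account: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
	defer cancel()
	err := waitForDefaultSA(ctx, "/var/lib/minikube/binaries/v1.30.1/kubectl",
		"/var/lib/minikube/kubeconfig")
	fmt.Println("done:", err)
}
```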
	I0617 12:08:00.063895  164809 settings.go:142] acquiring lock: {Name:mkf6da6d5dcdf32cef469c2b75da17d11fa1e39e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:08:00.063996  164809 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 12:08:00.066593  164809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/kubeconfig: {Name:mkf81bd1831c0194f784e5c176b265c5061bea5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:08:00.066922  164809 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 12:08:00.068556  164809 out.go:177] * Verifying Kubernetes components...
	I0617 12:08:00.067029  164809 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0617 12:08:00.067131  164809 config.go:182] Loaded profile config "no-preload-152830": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:08:00.069969  164809 addons.go:69] Setting storage-provisioner=true in profile "no-preload-152830"
	I0617 12:08:00.069983  164809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:08:00.069992  164809 addons.go:69] Setting metrics-server=true in profile "no-preload-152830"
	I0617 12:08:00.070015  164809 addons.go:234] Setting addon metrics-server=true in "no-preload-152830"
	I0617 12:08:00.070014  164809 addons.go:234] Setting addon storage-provisioner=true in "no-preload-152830"
	W0617 12:08:00.070021  164809 addons.go:243] addon metrics-server should already be in state true
	W0617 12:08:00.070024  164809 addons.go:243] addon storage-provisioner should already be in state true
	I0617 12:08:00.070055  164809 host.go:66] Checking if "no-preload-152830" exists ...
	I0617 12:08:00.070057  164809 host.go:66] Checking if "no-preload-152830" exists ...
	I0617 12:08:00.069984  164809 addons.go:69] Setting default-storageclass=true in profile "no-preload-152830"
	I0617 12:08:00.070116  164809 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-152830"
	I0617 12:08:00.070426  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.070428  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.070443  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.070451  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.070475  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.070494  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.088451  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36453
	I0617 12:08:00.089105  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.089673  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.089700  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.090074  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.090673  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.090723  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.091118  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33445
	I0617 12:08:00.091150  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44157
	I0617 12:08:00.091756  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.091880  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.092306  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.092327  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.092470  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.092487  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.093006  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.093081  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.093169  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetState
	I0617 12:08:00.093683  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.093722  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.096819  164809 addons.go:234] Setting addon default-storageclass=true in "no-preload-152830"
	W0617 12:08:00.096839  164809 addons.go:243] addon default-storageclass should already be in state true
	I0617 12:08:00.096868  164809 host.go:66] Checking if "no-preload-152830" exists ...
	I0617 12:08:00.097223  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.097252  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.110063  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33623
	I0617 12:08:00.110843  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.111489  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.111509  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.112419  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.112633  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetState
	I0617 12:08:00.112859  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39555
	I0617 12:08:00.113245  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.113927  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.113946  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.114470  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.114758  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:08:00.116377  164809 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0617 12:08:00.115146  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.117266  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37965
	I0617 12:08:00.117647  164809 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0617 12:08:00.117663  164809 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0617 12:08:00.117674  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.117681  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:08:00.118504  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.119076  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.119091  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.119440  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.119755  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetState
	I0617 12:08:00.121396  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.121620  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:08:00.123146  164809 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:07:59.986165  165698 out.go:204]   - Generating certificates and keys ...
	I0617 12:07:59.986270  165698 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0617 12:07:59.986391  165698 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0617 12:07:59.986522  165698 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0617 12:07:59.986606  165698 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0617 12:07:59.986717  165698 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0617 12:07:59.986795  165698 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0617 12:07:59.986887  165698 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0617 12:07:59.986972  165698 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0617 12:07:59.987081  165698 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0617 12:07:59.987191  165698 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0617 12:07:59.987250  165698 kubeadm.go:309] [certs] Using the existing "sa" key
	I0617 12:07:59.987331  165698 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0617 12:08:00.155668  165698 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0617 12:08:00.303780  165698 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0617 12:08:00.369907  165698 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0617 12:08:00.506550  165698 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0617 12:08:00.529943  165698 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0617 12:08:00.531684  165698 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0617 12:08:00.531756  165698 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0617 12:08:00.667972  165698 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0617 12:08:00.122003  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:08:00.122146  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:08:00.124748  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.124895  164809 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:08:00.124914  164809 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0617 12:08:00.124934  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:08:00.124957  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:08:00.125142  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:08:00.125446  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:08:00.128559  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.128991  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:08:00.129011  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.129239  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:08:00.129434  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:08:00.129537  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:08:00.129640  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:08:00.142435  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39073
	I0617 12:08:00.142915  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.143550  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.143583  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.143946  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.144168  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetState
	I0617 12:08:00.145972  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:08:00.146165  164809 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0617 12:08:00.146178  164809 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0617 12:08:00.146196  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:08:00.149316  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.149720  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:08:00.149743  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.149926  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:08:00.150106  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:08:00.150273  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:08:00.150434  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:08:00.294731  164809 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:08:00.317727  164809 node_ready.go:35] waiting up to 6m0s for node "no-preload-152830" to be "Ready" ...
	I0617 12:08:00.346507  164809 node_ready.go:49] node "no-preload-152830" has status "Ready":"True"
	I0617 12:08:00.346533  164809 node_ready.go:38] duration metric: took 28.776898ms for node "no-preload-152830" to be "Ready" ...
	I0617 12:08:00.346544  164809 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:08:00.404097  164809 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gjt84" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:00.412303  164809 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0617 12:08:00.412325  164809 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0617 12:08:00.415269  164809 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:08:00.438024  164809 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0617 12:08:00.514528  164809 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0617 12:08:00.514561  164809 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0617 12:08:00.629109  164809 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:08:00.629141  164809 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0617 12:08:00.677084  164809 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:08:01.113979  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.114007  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.114432  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.114445  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.114507  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.114526  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.114536  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.114846  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.114866  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.117124  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.117141  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.117437  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.117457  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.117478  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.117496  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.117508  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.117821  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.117858  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.117882  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.125648  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.125668  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.125998  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.126020  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.126030  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.325217  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.325242  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.325579  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.325633  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.325669  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.325669  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.325682  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.325960  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.325977  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.326007  164809 addons.go:475] Verifying addon metrics-server=true in "no-preload-152830"
	I0617 12:08:01.326037  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.327744  164809 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0617 12:08:00.671036  165698 out.go:204]   - Booting up control plane ...
	I0617 12:08:00.671171  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0617 12:08:00.677241  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0617 12:08:00.678999  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0617 12:08:00.681119  165698 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0617 12:08:00.684535  165698 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0617 12:08:01.329155  164809 addons.go:510] duration metric: took 1.262127108s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0617 12:08:02.425731  164809 pod_ready.go:102] pod "coredns-7db6d8ff4d-gjt84" in "kube-system" namespace has status "Ready":"False"
	I0617 12:08:03.910467  164809 pod_ready.go:92] pod "coredns-7db6d8ff4d-gjt84" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:03.910494  164809 pod_ready.go:81] duration metric: took 3.506370946s for pod "coredns-7db6d8ff4d-gjt84" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.910508  164809 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vz7dg" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.916309  164809 pod_ready.go:92] pod "coredns-7db6d8ff4d-vz7dg" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:03.916331  164809 pod_ready.go:81] duration metric: took 5.814812ms for pod "coredns-7db6d8ff4d-vz7dg" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.916340  164809 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.920834  164809 pod_ready.go:92] pod "etcd-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:03.920862  164809 pod_ready.go:81] duration metric: took 4.51438ms for pod "etcd-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.920874  164809 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.924955  164809 pod_ready.go:92] pod "kube-apiserver-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:03.924973  164809 pod_ready.go:81] duration metric: took 4.09301ms for pod "kube-apiserver-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.924982  164809 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.929301  164809 pod_ready.go:92] pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:03.929318  164809 pod_ready.go:81] duration metric: took 4.33061ms for pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.929326  164809 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:04.308546  164809 pod_ready.go:92] pod "kube-scheduler-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:04.308570  164809 pod_ready.go:81] duration metric: took 379.237147ms for pod "kube-scheduler-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:04.308578  164809 pod_ready.go:38] duration metric: took 3.962022714s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
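The pod_ready lines above amount to checking each system-critical pod for a Ready condition with status True. A compact client-go sketch of that check; the kubeconfig path, namespace, and pod name are taken from the log for illustration only:

```go
// Fetch one pod and report whether its Ready condition is True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podIsReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
		"etcd-no-preload-152830", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", podIsReady(pod))
}
```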
	I0617 12:08:04.308594  164809 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:08:04.308644  164809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:08:04.327383  164809 api_server.go:72] duration metric: took 4.260420928s to wait for apiserver process to appear ...
	I0617 12:08:04.327408  164809 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:08:04.327426  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:08:04.332321  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 200:
	ok
	I0617 12:08:04.333390  164809 api_server.go:141] control plane version: v1.30.1
	I0617 12:08:04.333412  164809 api_server.go:131] duration metric: took 5.998312ms to wait for apiserver health ...
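The healthz wait above is a GET against https://192.168.39.173:8443/healthz that succeeds once the apiserver answers 200. The sketch below performs the same request; it skips TLS verification only to stay self-contained, whereas the real check in the log goes through minikube's own client configuration for the cluster:

```go
// Probe the apiserver healthz endpoint shown in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Self-signed cluster certificate; verification skipped for
			// this sketch only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.173:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body)) // a healthy apiserver returns 200 and "ok"
}
```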
	I0617 12:08:04.333420  164809 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:08:04.512267  164809 system_pods.go:59] 9 kube-system pods found
	I0617 12:08:04.512298  164809 system_pods.go:61] "coredns-7db6d8ff4d-gjt84" [979c7339-3a4c-4bc8-8586-4d9da42339ae] Running
	I0617 12:08:04.512302  164809 system_pods.go:61] "coredns-7db6d8ff4d-vz7dg" [53c5188e-bc44-4aed-a989-ef3e2379c27b] Running
	I0617 12:08:04.512306  164809 system_pods.go:61] "etcd-no-preload-152830" [2b82d709-0776-470a-a538-f132b84be2e0] Running
	I0617 12:08:04.512310  164809 system_pods.go:61] "kube-apiserver-no-preload-152830" [e40c7c7b-b029-4f65-ac36-f4ff95eabc23] Running
	I0617 12:08:04.512313  164809 system_pods.go:61] "kube-controller-manager-no-preload-152830" [c2adec58-05a4-4993-b9a3-28f9ef519a63] Running
	I0617 12:08:04.512317  164809 system_pods.go:61] "kube-proxy-6c4hm" [a9830236-af96-437f-ad07-494b25f1a90e] Running
	I0617 12:08:04.512319  164809 system_pods.go:61] "kube-scheduler-no-preload-152830" [876671da-097b-43c1-9055-95c2ed7620aa] Running
	I0617 12:08:04.512325  164809 system_pods.go:61] "metrics-server-569cc877fc-zllzk" [e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:08:04.512329  164809 system_pods.go:61] "storage-provisioner" [b6cc7cdc-43f4-40c4-a202-5674fcdcedd0] Running
	I0617 12:08:04.512340  164809 system_pods.go:74] duration metric: took 178.914377ms to wait for pod list to return data ...
	I0617 12:08:04.512347  164809 default_sa.go:34] waiting for default service account to be created ...
	I0617 12:08:04.707834  164809 default_sa.go:45] found service account: "default"
	I0617 12:08:04.707874  164809 default_sa.go:55] duration metric: took 195.518331ms for default service account to be created ...
	I0617 12:08:04.707886  164809 system_pods.go:116] waiting for k8s-apps to be running ...
	I0617 12:08:04.916143  164809 system_pods.go:86] 9 kube-system pods found
	I0617 12:08:04.916173  164809 system_pods.go:89] "coredns-7db6d8ff4d-gjt84" [979c7339-3a4c-4bc8-8586-4d9da42339ae] Running
	I0617 12:08:04.916178  164809 system_pods.go:89] "coredns-7db6d8ff4d-vz7dg" [53c5188e-bc44-4aed-a989-ef3e2379c27b] Running
	I0617 12:08:04.916183  164809 system_pods.go:89] "etcd-no-preload-152830" [2b82d709-0776-470a-a538-f132b84be2e0] Running
	I0617 12:08:04.916187  164809 system_pods.go:89] "kube-apiserver-no-preload-152830" [e40c7c7b-b029-4f65-ac36-f4ff95eabc23] Running
	I0617 12:08:04.916191  164809 system_pods.go:89] "kube-controller-manager-no-preload-152830" [c2adec58-05a4-4993-b9a3-28f9ef519a63] Running
	I0617 12:08:04.916195  164809 system_pods.go:89] "kube-proxy-6c4hm" [a9830236-af96-437f-ad07-494b25f1a90e] Running
	I0617 12:08:04.916199  164809 system_pods.go:89] "kube-scheduler-no-preload-152830" [876671da-097b-43c1-9055-95c2ed7620aa] Running
	I0617 12:08:04.916211  164809 system_pods.go:89] "metrics-server-569cc877fc-zllzk" [e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:08:04.916219  164809 system_pods.go:89] "storage-provisioner" [b6cc7cdc-43f4-40c4-a202-5674fcdcedd0] Running
	I0617 12:08:04.916231  164809 system_pods.go:126] duration metric: took 208.336851ms to wait for k8s-apps to be running ...
	I0617 12:08:04.916245  164809 system_svc.go:44] waiting for kubelet service to be running ....
	I0617 12:08:04.916306  164809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:08:04.933106  164809 system_svc.go:56] duration metric: took 16.850122ms WaitForService to wait for kubelet
	I0617 12:08:04.933135  164809 kubeadm.go:576] duration metric: took 4.866178671s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 12:08:04.933159  164809 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:08:05.108094  164809 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:08:05.108120  164809 node_conditions.go:123] node cpu capacity is 2
	I0617 12:08:05.108133  164809 node_conditions.go:105] duration metric: took 174.968414ms to run NodePressure ...
	I0617 12:08:05.108148  164809 start.go:240] waiting for startup goroutines ...
	I0617 12:08:05.108160  164809 start.go:245] waiting for cluster config update ...
	I0617 12:08:05.108173  164809 start.go:254] writing updated cluster config ...
	I0617 12:08:05.108496  164809 ssh_runner.go:195] Run: rm -f paused
	I0617 12:08:05.160610  164809 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0617 12:08:05.162777  164809 out.go:177] * Done! kubectl is now configured to use "no-preload-152830" cluster and "default" namespace by default
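The "minor skew: 0" note above compares the kubectl client version with the cluster version on the minor component only. A toy sketch of that comparison, with deliberately simplified version parsing:

```go
// Report the minor-version skew between a kubectl client and a cluster.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	m, _ := strconv.Atoi(parts[1]) // assumes "major.minor.patch"
	return m
}

func main() {
	client, cluster := "1.30.2", "1.30.1"
	skew := minor(client) - minor(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
}
```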
	I0617 12:08:40.686610  165698 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0617 12:08:40.686950  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:08:40.687194  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:08:45.687594  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:08:45.687820  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:08:55.688285  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:08:55.688516  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:09:15.689306  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:09:15.689556  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:09:55.688872  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:09:55.689162  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:09:55.689206  165698 kubeadm.go:309] 
	I0617 12:09:55.689284  165698 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0617 12:09:55.689342  165698 kubeadm.go:309] 		timed out waiting for the condition
	I0617 12:09:55.689354  165698 kubeadm.go:309] 
	I0617 12:09:55.689418  165698 kubeadm.go:309] 	This error is likely caused by:
	I0617 12:09:55.689480  165698 kubeadm.go:309] 		- The kubelet is not running
	I0617 12:09:55.689632  165698 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0617 12:09:55.689657  165698 kubeadm.go:309] 
	I0617 12:09:55.689791  165698 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0617 12:09:55.689844  165698 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0617 12:09:55.689916  165698 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0617 12:09:55.689926  165698 kubeadm.go:309] 
	I0617 12:09:55.690059  165698 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0617 12:09:55.690140  165698 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0617 12:09:55.690159  165698 kubeadm.go:309] 
	I0617 12:09:55.690258  165698 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0617 12:09:55.690343  165698 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0617 12:09:55.690434  165698 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0617 12:09:55.690530  165698 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0617 12:09:55.690546  165698 kubeadm.go:309] 
	I0617 12:09:55.691495  165698 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0617 12:09:55.691595  165698 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0617 12:09:55.691708  165698 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0617 12:09:55.691787  165698 kubeadm.go:393] duration metric: took 7m57.151326537s to StartCluster
	I0617 12:09:55.691844  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:09:55.691904  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:09:55.746514  165698 cri.go:89] found id: ""
	I0617 12:09:55.746550  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.746563  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:09:55.746572  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:09:55.746636  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:09:55.789045  165698 cri.go:89] found id: ""
	I0617 12:09:55.789083  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.789095  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:09:55.789103  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:09:55.789169  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:09:55.829492  165698 cri.go:89] found id: ""
	I0617 12:09:55.829533  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.829542  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:09:55.829547  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:09:55.829614  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:09:55.865213  165698 cri.go:89] found id: ""
	I0617 12:09:55.865246  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.865262  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:09:55.865267  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:09:55.865318  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:09:55.904067  165698 cri.go:89] found id: ""
	I0617 12:09:55.904102  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.904113  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:09:55.904122  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:09:55.904187  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:09:55.938441  165698 cri.go:89] found id: ""
	I0617 12:09:55.938471  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.938478  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:09:55.938487  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:09:55.938538  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:09:55.975669  165698 cri.go:89] found id: ""
	I0617 12:09:55.975710  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.975723  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:09:55.975731  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:09:55.975804  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:09:56.015794  165698 cri.go:89] found id: ""
	I0617 12:09:56.015826  165698 logs.go:276] 0 containers: []
	W0617 12:09:56.015837  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:09:56.015851  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:09:56.015868  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:09:56.095533  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:09:56.095557  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:09:56.095573  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:09:56.220817  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:09:56.220857  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:09:56.261470  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:09:56.261507  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:09:56.325626  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:09:56.325673  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
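The five "Gathering logs for ..." steps above are the diagnostics minikube collects over SSH after a failed start. A sketch of replaying the same collection by hand, assuming SSH access through minikube and with <profile> standing in for the cluster profile under test (not shown at this point in the log):

	# journals for the kubelet and the CRI-O runtime
	minikube -p <profile> ssh "sudo journalctl -u kubelet -n 400"
	minikube -p <profile> ssh "sudo journalctl -u crio -n 400"
	# container status and kernel warnings, matching the commands in the log above
	minikube -p <profile> ssh "sudo crictl ps -a"
	minikube -p <profile> ssh "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"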
	W0617 12:09:56.345438  165698 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0617 12:09:56.345491  165698 out.go:239] * 
	W0617 12:09:56.345606  165698 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0617 12:09:56.345635  165698 out.go:239] * 
	W0617 12:09:56.346583  165698 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 12:09:56.349928  165698 out.go:177] 
	W0617 12:09:56.351067  165698 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0617 12:09:56.351127  165698 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0617 12:09:56.351157  165698 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0617 12:09:56.352487  165698 out.go:177] 
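The K8S_KUBELET_NOT_RUNNING exit reason and the suggestion above point at the kubelet cgroup driver. A minimal sketch of the suggested retry, with <profile> as a placeholder for the affected cluster profile:

	# retry the start with the kubelet cgroup driver pinned to systemd, as suggested in the log
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
	# if the kubelet still fails, capture a full log bundle for the GitHub issue referenced above
	minikube -p <profile> logs --file=logs.txt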
	
	
	==> CRI-O <==
	Jun 17 12:15:54 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:15:54.311327251Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718626554311296735,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3872547e-73ea-490f-a2ca-7673486e250b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:15:54 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:15:54.311834551Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=964a1b0a-3ec4-4476-ae06-3ba24182d195 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:15:54 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:15:54.311901647Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=964a1b0a-3ec4-4476-ae06-3ba24182d195 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:15:54 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:15:54.312076892Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dfd335e5e905ceb4a84958b887f1f87c485fa58b5c2410528667b4584437377d,PodSandboxId:4ec5e51e33e3cedc6aefb9c3ee5d6391210baed29b05fc84acc385a62d4ad61f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1718625759784458039,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30d10d01-c1de-435f-902e-5e90c86ab3f2,},Annotations:map[string]string{io.kubernetes.container.hash: 6d8e5583,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323,PodSandboxId:8a1b06c7196ef98910e1fd1444bc7cfe4dc58d4a078332029874c3879df5045b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718625758616393839,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mnw24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e6c4ff3-f0dc-43da-abd8-baaed7dca40c,},Annotations:map[string]string{io.kubernetes.container.hash: a431f7a2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195,PodSandboxId:e961aee4065637077a8ce4e59e5627f0c51458c18464ffd5d60b15f46a7b95aa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718625743613384802,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 92b20aec-29c2-4256-86be-7f58f66585dd,},Annotations:map[string]string{io.kubernetes.container.hash: 5155bfb6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da,PodSandboxId:0a7b4f113755c29d14cf67df0a593ef5c83b50b92ed3fa26a93a3fe94024b925,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718625742911563496,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jn5kp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6935148-7
ee8-4655-8327-9f1ee4c933de,},Annotations:map[string]string{io.kubernetes.container.hash: ebf4cc3f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc,PodSandboxId:e961aee4065637077a8ce4e59e5627f0c51458c18464ffd5d60b15f46a7b95aa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718625742905247251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92b20aec-29c2-4256-86be
-7f58f66585dd,},Annotations:map[string]string{io.kubernetes.container.hash: 5155bfb6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b,PodSandboxId:a3cec7d877da2c73dcc9614f367bf8f5a3f7d0a1d73be53db582ceb404b2d8d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718625739247357549,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-991309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c21bea80d5b9dcade35da
7b7545e61c7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685,PodSandboxId:8753042e3940c09ad40880a7040acf9ff18b04ea81902bfc864efb03cc277e8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718625739221340680,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-991309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: aef2b9c920bd8998bd8f0b63747752dd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b,PodSandboxId:37d220d03ff98c32e8150017bc155aae33fc8cc0a551400e287958d263b84f70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718625739177321487,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-991309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 85585af84dc6cf60f33336c0a1c5a11f,},Annotations:map[string]string{io.kubernetes.container.hash: 90b31d22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862,PodSandboxId:1835d921c3e05def4cdc131d68f2cbdd34f27229844719a02a01ea4f9bd5cbee,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718625739152392152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-991309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e049b2796061913144bf89c1454f5
f9,},Annotations:map[string]string{io.kubernetes.container.hash: fafef5fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=964a1b0a-3ec4-4476-ae06-3ba24182d195 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:15:54 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:15:54.350435292Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=842dd8bd-6176-4441-85cf-f78b9088aa75 name=/runtime.v1.RuntimeService/Version
	Jun 17 12:15:54 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:15:54.350520869Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=842dd8bd-6176-4441-85cf-f78b9088aa75 name=/runtime.v1.RuntimeService/Version
	Jun 17 12:15:54 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:15:54.351800157Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b2579eb1-e85d-4ac9-8c51-57fd5c98ef1c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:15:54 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:15:54.352320893Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718626554352294177,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b2579eb1-e85d-4ac9-8c51-57fd5c98ef1c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:15:54 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:15:54.352889221Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b77662b7-142c-452e-938a-9948dbe2e8e3 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:15:54 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:15:54.352959239Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b77662b7-142c-452e-938a-9948dbe2e8e3 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:15:54 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:15:54.353233586Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dfd335e5e905ceb4a84958b887f1f87c485fa58b5c2410528667b4584437377d,PodSandboxId:4ec5e51e33e3cedc6aefb9c3ee5d6391210baed29b05fc84acc385a62d4ad61f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1718625759784458039,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30d10d01-c1de-435f-902e-5e90c86ab3f2,},Annotations:map[string]string{io.kubernetes.container.hash: 6d8e5583,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323,PodSandboxId:8a1b06c7196ef98910e1fd1444bc7cfe4dc58d4a078332029874c3879df5045b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718625758616393839,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mnw24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e6c4ff3-f0dc-43da-abd8-baaed7dca40c,},Annotations:map[string]string{io.kubernetes.container.hash: a431f7a2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195,PodSandboxId:e961aee4065637077a8ce4e59e5627f0c51458c18464ffd5d60b15f46a7b95aa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718625743613384802,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 92b20aec-29c2-4256-86be-7f58f66585dd,},Annotations:map[string]string{io.kubernetes.container.hash: 5155bfb6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da,PodSandboxId:0a7b4f113755c29d14cf67df0a593ef5c83b50b92ed3fa26a93a3fe94024b925,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718625742911563496,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jn5kp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6935148-7
ee8-4655-8327-9f1ee4c933de,},Annotations:map[string]string{io.kubernetes.container.hash: ebf4cc3f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc,PodSandboxId:e961aee4065637077a8ce4e59e5627f0c51458c18464ffd5d60b15f46a7b95aa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718625742905247251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92b20aec-29c2-4256-86be
-7f58f66585dd,},Annotations:map[string]string{io.kubernetes.container.hash: 5155bfb6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b,PodSandboxId:a3cec7d877da2c73dcc9614f367bf8f5a3f7d0a1d73be53db582ceb404b2d8d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718625739247357549,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-991309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c21bea80d5b9dcade35da
7b7545e61c7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685,PodSandboxId:8753042e3940c09ad40880a7040acf9ff18b04ea81902bfc864efb03cc277e8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718625739221340680,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-991309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: aef2b9c920bd8998bd8f0b63747752dd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b,PodSandboxId:37d220d03ff98c32e8150017bc155aae33fc8cc0a551400e287958d263b84f70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718625739177321487,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-991309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 85585af84dc6cf60f33336c0a1c5a11f,},Annotations:map[string]string{io.kubernetes.container.hash: 90b31d22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862,PodSandboxId:1835d921c3e05def4cdc131d68f2cbdd34f27229844719a02a01ea4f9bd5cbee,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718625739152392152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-991309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e049b2796061913144bf89c1454f5
f9,},Annotations:map[string]string{io.kubernetes.container.hash: fafef5fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b77662b7-142c-452e-938a-9948dbe2e8e3 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:15:54 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:15:54.393309497Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=61f49ebc-a2da-4182-b476-19b84d640693 name=/runtime.v1.RuntimeService/Version
	Jun 17 12:15:54 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:15:54.393468352Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=61f49ebc-a2da-4182-b476-19b84d640693 name=/runtime.v1.RuntimeService/Version
	Jun 17 12:15:54 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:15:54.394562266Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=85809299-545a-48fe-ac44-f3d5386a0939 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:15:54 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:15:54.395045381Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718626554395019725,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=85809299-545a-48fe-ac44-f3d5386a0939 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:15:54 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:15:54.395617204Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5b025f77-c898-451a-8e8e-377ab482b5cd name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:15:54 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:15:54.395726502Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5b025f77-c898-451a-8e8e-377ab482b5cd name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:15:54 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:15:54.395971721Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dfd335e5e905ceb4a84958b887f1f87c485fa58b5c2410528667b4584437377d,PodSandboxId:4ec5e51e33e3cedc6aefb9c3ee5d6391210baed29b05fc84acc385a62d4ad61f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1718625759784458039,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30d10d01-c1de-435f-902e-5e90c86ab3f2,},Annotations:map[string]string{io.kubernetes.container.hash: 6d8e5583,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323,PodSandboxId:8a1b06c7196ef98910e1fd1444bc7cfe4dc58d4a078332029874c3879df5045b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718625758616393839,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mnw24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e6c4ff3-f0dc-43da-abd8-baaed7dca40c,},Annotations:map[string]string{io.kubernetes.container.hash: a431f7a2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195,PodSandboxId:e961aee4065637077a8ce4e59e5627f0c51458c18464ffd5d60b15f46a7b95aa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718625743613384802,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 92b20aec-29c2-4256-86be-7f58f66585dd,},Annotations:map[string]string{io.kubernetes.container.hash: 5155bfb6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da,PodSandboxId:0a7b4f113755c29d14cf67df0a593ef5c83b50b92ed3fa26a93a3fe94024b925,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718625742911563496,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jn5kp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6935148-7
ee8-4655-8327-9f1ee4c933de,},Annotations:map[string]string{io.kubernetes.container.hash: ebf4cc3f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc,PodSandboxId:e961aee4065637077a8ce4e59e5627f0c51458c18464ffd5d60b15f46a7b95aa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718625742905247251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92b20aec-29c2-4256-86be
-7f58f66585dd,},Annotations:map[string]string{io.kubernetes.container.hash: 5155bfb6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b,PodSandboxId:a3cec7d877da2c73dcc9614f367bf8f5a3f7d0a1d73be53db582ceb404b2d8d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718625739247357549,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-991309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c21bea80d5b9dcade35da
7b7545e61c7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685,PodSandboxId:8753042e3940c09ad40880a7040acf9ff18b04ea81902bfc864efb03cc277e8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718625739221340680,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-991309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: aef2b9c920bd8998bd8f0b63747752dd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b,PodSandboxId:37d220d03ff98c32e8150017bc155aae33fc8cc0a551400e287958d263b84f70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718625739177321487,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-991309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 85585af84dc6cf60f33336c0a1c5a11f,},Annotations:map[string]string{io.kubernetes.container.hash: 90b31d22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862,PodSandboxId:1835d921c3e05def4cdc131d68f2cbdd34f27229844719a02a01ea4f9bd5cbee,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718625739152392152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-991309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e049b2796061913144bf89c1454f5
f9,},Annotations:map[string]string{io.kubernetes.container.hash: fafef5fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5b025f77-c898-451a-8e8e-377ab482b5cd name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:15:54 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:15:54.433829138Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a384fb6a-2e33-4032-91cf-699ba6436c40 name=/runtime.v1.RuntimeService/Version
	Jun 17 12:15:54 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:15:54.433917744Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a384fb6a-2e33-4032-91cf-699ba6436c40 name=/runtime.v1.RuntimeService/Version
	Jun 17 12:15:54 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:15:54.435082370Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3b81d728-c440-4481-be87-5089024ed25f name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:15:54 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:15:54.435629168Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718626554435608556,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3b81d728-c440-4481-be87-5089024ed25f name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:15:54 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:15:54.436426352Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e9dd60b9-527b-49e5-9db6-8b12de76101b name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:15:54 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:15:54.436489818Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e9dd60b9-527b-49e5-9db6-8b12de76101b name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:15:54 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:15:54.436689726Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dfd335e5e905ceb4a84958b887f1f87c485fa58b5c2410528667b4584437377d,PodSandboxId:4ec5e51e33e3cedc6aefb9c3ee5d6391210baed29b05fc84acc385a62d4ad61f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1718625759784458039,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30d10d01-c1de-435f-902e-5e90c86ab3f2,},Annotations:map[string]string{io.kubernetes.container.hash: 6d8e5583,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323,PodSandboxId:8a1b06c7196ef98910e1fd1444bc7cfe4dc58d4a078332029874c3879df5045b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718625758616393839,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mnw24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e6c4ff3-f0dc-43da-abd8-baaed7dca40c,},Annotations:map[string]string{io.kubernetes.container.hash: a431f7a2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195,PodSandboxId:e961aee4065637077a8ce4e59e5627f0c51458c18464ffd5d60b15f46a7b95aa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718625743613384802,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 92b20aec-29c2-4256-86be-7f58f66585dd,},Annotations:map[string]string{io.kubernetes.container.hash: 5155bfb6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da,PodSandboxId:0a7b4f113755c29d14cf67df0a593ef5c83b50b92ed3fa26a93a3fe94024b925,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718625742911563496,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jn5kp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6935148-7
ee8-4655-8327-9f1ee4c933de,},Annotations:map[string]string{io.kubernetes.container.hash: ebf4cc3f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc,PodSandboxId:e961aee4065637077a8ce4e59e5627f0c51458c18464ffd5d60b15f46a7b95aa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718625742905247251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92b20aec-29c2-4256-86be
-7f58f66585dd,},Annotations:map[string]string{io.kubernetes.container.hash: 5155bfb6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b,PodSandboxId:a3cec7d877da2c73dcc9614f367bf8f5a3f7d0a1d73be53db582ceb404b2d8d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718625739247357549,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-991309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c21bea80d5b9dcade35da
7b7545e61c7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685,PodSandboxId:8753042e3940c09ad40880a7040acf9ff18b04ea81902bfc864efb03cc277e8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718625739221340680,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-991309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: aef2b9c920bd8998bd8f0b63747752dd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b,PodSandboxId:37d220d03ff98c32e8150017bc155aae33fc8cc0a551400e287958d263b84f70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718625739177321487,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-991309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 85585af84dc6cf60f33336c0a1c5a11f,},Annotations:map[string]string{io.kubernetes.container.hash: 90b31d22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862,PodSandboxId:1835d921c3e05def4cdc131d68f2cbdd34f27229844719a02a01ea4f9bd5cbee,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718625739152392152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-991309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e049b2796061913144bf89c1454f5
f9,},Annotations:map[string]string{io.kubernetes.container.hash: fafef5fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e9dd60b9-527b-49e5-9db6-8b12de76101b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dfd335e5e905c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   4ec5e51e33e3c       busybox
	26b8e036867db       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   8a1b06c7196ef       coredns-7db6d8ff4d-mnw24
	adb0f4294c844       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       3                   e961aee406563       storage-provisioner
	63dba5e023e5a       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      13 minutes ago      Running             kube-proxy                1                   0a7b4f113755c       kube-proxy-jn5kp
	e1a38df1bc100       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   e961aee406563       storage-provisioner
	2fc9bd2867376       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      13 minutes ago      Running             kube-scheduler            1                   a3cec7d877da2       kube-scheduler-default-k8s-diff-port-991309
	36ad2102b1a13       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      13 minutes ago      Running             kube-controller-manager   1                   8753042e3940c       kube-controller-manager-default-k8s-diff-port-991309
	5b11bf1d6c96b       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      13 minutes ago      Running             kube-apiserver            1                   37d220d03ff98       kube-apiserver-default-k8s-diff-port-991309
	8bfeb1ae74a6b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   1835d921c3e05       etcd-default-k8s-diff-port-991309
	
	
	==> coredns [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58862 - 59428 "HINFO IN 3347879279322849397.3803459997896774640. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01204344s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-991309
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-991309
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6
	                    minikube.k8s.io/name=default-k8s-diff-port-991309
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_17T11_56_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jun 2024 11:56:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-991309
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jun 2024 12:15:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jun 2024 12:13:05 +0000   Mon, 17 Jun 2024 11:56:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jun 2024 12:13:05 +0000   Mon, 17 Jun 2024 11:56:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jun 2024 12:13:05 +0000   Mon, 17 Jun 2024 11:56:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jun 2024 12:13:05 +0000   Mon, 17 Jun 2024 12:02:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.125
	  Hostname:    default-k8s-diff-port-991309
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d6f992fe6fb94accb2f426c01d5d0f61
	  System UUID:                d6f992fe-6fb9-4acc-b2f4-26c01d5d0f61
	  Boot ID:                    3ae063a7-6d55-4793-bbc5-8b4530650f29
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-7db6d8ff4d-mnw24                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     19m
	  kube-system                 etcd-default-k8s-diff-port-991309                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kube-apiserver-default-k8s-diff-port-991309             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-991309    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-jn5kp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-default-k8s-diff-port-991309             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-569cc877fc-n2svp                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         18m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     19m                kubelet          Node default-k8s-diff-port-991309 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node default-k8s-diff-port-991309 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node default-k8s-diff-port-991309 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeReady                19m                kubelet          Node default-k8s-diff-port-991309 status is now: NodeReady
	  Normal  RegisteredNode           19m                node-controller  Node default-k8s-diff-port-991309 event: Registered Node default-k8s-diff-port-991309 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-991309 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-991309 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-991309 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-991309 event: Registered Node default-k8s-diff-port-991309 in Controller
	
	
	==> dmesg <==
	[Jun17 12:01] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051927] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044623] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.893865] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Jun17 12:02] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.630374] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.239911] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.069500] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061337] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.218912] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +0.143389] systemd-fstab-generator[683]: Ignoring "noauto" option for root device
	[  +0.293601] systemd-fstab-generator[712]: Ignoring "noauto" option for root device
	[  +4.533291] systemd-fstab-generator[812]: Ignoring "noauto" option for root device
	[  +0.055286] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.205283] systemd-fstab-generator[935]: Ignoring "noauto" option for root device
	[  +4.635719] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.397905] systemd-fstab-generator[1611]: Ignoring "noauto" option for root device
	[  +5.303183] kauditd_printk_skb: 67 callbacks suppressed
	[  +5.361973] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862] <==
	{"level":"info","ts":"2024-06-17T12:02:20.673083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 received MsgVoteResp from 8e41abb37b207023 at term 3"}
	{"level":"info","ts":"2024-06-17T12:02:20.673164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 became leader at term 3"}
	{"level":"info","ts":"2024-06-17T12:02:20.673198Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8e41abb37b207023 elected leader 8e41abb37b207023 at term 3"}
	{"level":"info","ts":"2024-06-17T12:02:20.677508Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8e41abb37b207023","local-member-attributes":"{Name:default-k8s-diff-port-991309 ClientURLs:[https://192.168.50.125:2379]}","request-path":"/0/members/8e41abb37b207023/attributes","cluster-id":"40e9c4986db8cbc5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-17T12:02:20.677598Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-17T12:02:20.678091Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-17T12:02:20.679854Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.125:2379"}
	{"level":"info","ts":"2024-06-17T12:02:20.68147Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-17T12:02:20.681554Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-17T12:02:20.681796Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-17T12:02:35.729241Z","caller":"traceutil/trace.go:171","msg":"trace[975612873] linearizableReadLoop","detail":"{readStateIndex:586; appliedIndex:585; }","duration":"219.482128ms","start":"2024-06-17T12:02:35.509735Z","end":"2024-06-17T12:02:35.729217Z","steps":["trace[975612873] 'read index received'  (duration: 218.581085ms)","trace[975612873] 'applied index is now lower than readState.Index'  (duration: 900.133µs)"],"step_count":2}
	{"level":"info","ts":"2024-06-17T12:02:35.729394Z","caller":"traceutil/trace.go:171","msg":"trace[871675077] transaction","detail":"{read_only:false; response_revision:550; number_of_response:1; }","duration":"264.527479ms","start":"2024-06-17T12:02:35.464859Z","end":"2024-06-17T12:02:35.729387Z","steps":["trace[871675077] 'process raft request'  (duration: 263.356217ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-17T12:02:35.729567Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.714879ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2024-06-17T12:02:35.730332Z","caller":"traceutil/trace.go:171","msg":"trace[1309255005] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:550; }","duration":"170.547281ms","start":"2024-06-17T12:02:35.559772Z","end":"2024-06-17T12:02:35.730319Z","steps":["trace[1309255005] 'agreement among raft nodes before linearized reading'  (duration: 169.672296ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-17T12:02:35.729634Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"219.913289ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2024-06-17T12:02:35.730495Z","caller":"traceutil/trace.go:171","msg":"trace[949012262] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslice-controller; range_end:; response_count:1; response_revision:550; }","duration":"220.772121ms","start":"2024-06-17T12:02:35.509715Z","end":"2024-06-17T12:02:35.730487Z","steps":["trace[949012262] 'agreement among raft nodes before linearized reading'  (duration: 219.892026ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-17T12:02:35.72966Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.400115ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-991309\" ","response":"range_response_count:1 size:6556"}
	{"level":"info","ts":"2024-06-17T12:02:35.730561Z","caller":"traceutil/trace.go:171","msg":"trace[415054284] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-991309; range_end:; response_count:1; response_revision:550; }","duration":"103.322253ms","start":"2024-06-17T12:02:35.627231Z","end":"2024-06-17T12:02:35.730554Z","steps":["trace[415054284] 'agreement among raft nodes before linearized reading'  (duration: 102.4118ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T12:02:35.867448Z","caller":"traceutil/trace.go:171","msg":"trace[1823358678] transaction","detail":"{read_only:false; response_revision:551; number_of_response:1; }","duration":"126.738113ms","start":"2024-06-17T12:02:35.740693Z","end":"2024-06-17T12:02:35.867431Z","steps":["trace[1823358678] 'process raft request'  (duration: 118.603601ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T12:02:35.868316Z","caller":"traceutil/trace.go:171","msg":"trace[1600659615] transaction","detail":"{read_only:false; response_revision:552; number_of_response:1; }","duration":"127.281233ms","start":"2024-06-17T12:02:35.741025Z","end":"2024-06-17T12:02:35.868307Z","steps":["trace[1600659615] 'process raft request'  (duration: 127.00755ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T12:02:35.868887Z","caller":"traceutil/trace.go:171","msg":"trace[2015809271] transaction","detail":"{read_only:false; response_revision:553; number_of_response:1; }","duration":"127.176168ms","start":"2024-06-17T12:02:35.741692Z","end":"2024-06-17T12:02:35.868868Z","steps":["trace[2015809271] 'process raft request'  (duration: 126.392157ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T12:02:35.869056Z","caller":"traceutil/trace.go:171","msg":"trace[96738467] transaction","detail":"{read_only:false; response_revision:554; number_of_response:1; }","duration":"127.24506ms","start":"2024-06-17T12:02:35.741791Z","end":"2024-06-17T12:02:35.869036Z","steps":["trace[96738467] 'process raft request'  (duration: 126.367146ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T12:12:20.726824Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":824}
	{"level":"info","ts":"2024-06-17T12:12:20.736976Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":824,"took":"9.765675ms","hash":2281465834,"current-db-size-bytes":2592768,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2592768,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-06-17T12:12:20.737031Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2281465834,"revision":824,"compact-revision":-1}
	
	
	==> kernel <==
	 12:15:54 up 14 min,  0 users,  load average: 0.23, 0.19, 0.15
	Linux default-k8s-diff-port-991309 5.10.207 #1 SMP Tue Jun 11 00:16:05 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b] <==
	I0617 12:10:23.272170       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0617 12:12:22.270938       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:12:22.271066       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0617 12:12:23.271870       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:12:23.271921       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0617 12:12:23.271932       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0617 12:12:23.271971       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:12:23.272015       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0617 12:12:23.273200       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0617 12:13:23.272197       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:13:23.272336       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0617 12:13:23.272368       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0617 12:13:23.273400       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:13:23.273481       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0617 12:13:23.273489       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0617 12:15:23.273084       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:15:23.273398       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0617 12:15:23.273438       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0617 12:15:23.274453       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:15:23.274515       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0617 12:15:23.274541       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685] <==
	I0617 12:10:05.949240       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:10:35.456375       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:10:35.956740       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:11:05.461402       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:11:05.965194       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:11:35.466314       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:11:35.973433       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:12:05.472227       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:12:05.981867       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:12:35.477572       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:12:35.989441       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:13:05.482452       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:13:05.996871       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0617 12:13:34.537461       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="462.775µs"
	E0617 12:13:35.487895       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:13:36.005868       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0617 12:13:49.536014       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="140.596µs"
	E0617 12:14:05.493970       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:14:06.013889       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:14:35.500308       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:14:36.021467       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:15:05.504843       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:15:06.029320       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:15:35.510820       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:15:36.041881       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da] <==
	I0617 12:02:23.105039       1 server_linux.go:69] "Using iptables proxy"
	I0617 12:02:23.116277       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.125"]
	I0617 12:02:23.173374       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0617 12:02:23.173438       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0617 12:02:23.173463       1 server_linux.go:165] "Using iptables Proxier"
	I0617 12:02:23.182794       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0617 12:02:23.183020       1 server.go:872] "Version info" version="v1.30.1"
	I0617 12:02:23.183064       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0617 12:02:23.186035       1 config.go:192] "Starting service config controller"
	I0617 12:02:23.187924       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0617 12:02:23.187997       1 config.go:101] "Starting endpoint slice config controller"
	I0617 12:02:23.188027       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0617 12:02:23.189981       1 config.go:319] "Starting node config controller"
	I0617 12:02:23.190013       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0617 12:02:23.288176       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0617 12:02:23.290309       1 shared_informer.go:320] Caches are synced for node config
	I0617 12:02:23.291590       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b] <==
	W0617 12:02:22.248006       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0617 12:02:22.248017       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0617 12:02:22.248150       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0617 12:02:22.248182       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0617 12:02:22.248271       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0617 12:02:22.248299       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0617 12:02:22.248359       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0617 12:02:22.248368       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0617 12:02:22.248528       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0617 12:02:22.248557       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0617 12:02:22.248605       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0617 12:02:22.248631       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0617 12:02:22.248684       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0617 12:02:22.248729       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0617 12:02:22.248771       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0617 12:02:22.248780       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0617 12:02:22.248851       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0617 12:02:22.248879       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0617 12:02:22.248922       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0617 12:02:22.248931       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0617 12:02:22.249029       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0617 12:02:22.249056       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0617 12:02:22.249066       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0617 12:02:22.249073       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0617 12:02:23.640019       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 17 12:13:20 default-k8s-diff-port-991309 kubelet[942]: E0617 12:13:20.535608     942 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jun 17 12:13:20 default-k8s-diff-port-991309 kubelet[942]: E0617 12:13:20.536068     942 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jun 17 12:13:20 default-k8s-diff-port-991309 kubelet[942]: E0617 12:13:20.536714     942 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c6q7v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathEx
pr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,Stdin
Once:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-n2svp_kube-system(5b637d97-3183-4324-98cf-dd69a2968578): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jun 17 12:13:20 default-k8s-diff-port-991309 kubelet[942]: E0617 12:13:20.536824     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-n2svp" podUID="5b637d97-3183-4324-98cf-dd69a2968578"
	Jun 17 12:13:34 default-k8s-diff-port-991309 kubelet[942]: E0617 12:13:34.521206     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-n2svp" podUID="5b637d97-3183-4324-98cf-dd69a2968578"
	Jun 17 12:13:49 default-k8s-diff-port-991309 kubelet[942]: E0617 12:13:49.520009     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-n2svp" podUID="5b637d97-3183-4324-98cf-dd69a2968578"
	Jun 17 12:14:02 default-k8s-diff-port-991309 kubelet[942]: E0617 12:14:02.523376     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-n2svp" podUID="5b637d97-3183-4324-98cf-dd69a2968578"
	Jun 17 12:14:13 default-k8s-diff-port-991309 kubelet[942]: E0617 12:14:13.519792     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-n2svp" podUID="5b637d97-3183-4324-98cf-dd69a2968578"
	Jun 17 12:14:18 default-k8s-diff-port-991309 kubelet[942]: E0617 12:14:18.544269     942 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 17 12:14:18 default-k8s-diff-port-991309 kubelet[942]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 17 12:14:18 default-k8s-diff-port-991309 kubelet[942]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 17 12:14:18 default-k8s-diff-port-991309 kubelet[942]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 12:14:18 default-k8s-diff-port-991309 kubelet[942]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 17 12:14:24 default-k8s-diff-port-991309 kubelet[942]: E0617 12:14:24.521203     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-n2svp" podUID="5b637d97-3183-4324-98cf-dd69a2968578"
	Jun 17 12:14:39 default-k8s-diff-port-991309 kubelet[942]: E0617 12:14:39.523510     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-n2svp" podUID="5b637d97-3183-4324-98cf-dd69a2968578"
	Jun 17 12:14:51 default-k8s-diff-port-991309 kubelet[942]: E0617 12:14:51.521160     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-n2svp" podUID="5b637d97-3183-4324-98cf-dd69a2968578"
	Jun 17 12:15:02 default-k8s-diff-port-991309 kubelet[942]: E0617 12:15:02.520959     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-n2svp" podUID="5b637d97-3183-4324-98cf-dd69a2968578"
	Jun 17 12:15:16 default-k8s-diff-port-991309 kubelet[942]: E0617 12:15:16.520242     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-n2svp" podUID="5b637d97-3183-4324-98cf-dd69a2968578"
	Jun 17 12:15:18 default-k8s-diff-port-991309 kubelet[942]: E0617 12:15:18.543948     942 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 17 12:15:18 default-k8s-diff-port-991309 kubelet[942]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 17 12:15:18 default-k8s-diff-port-991309 kubelet[942]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 17 12:15:18 default-k8s-diff-port-991309 kubelet[942]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 12:15:18 default-k8s-diff-port-991309 kubelet[942]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 17 12:15:27 default-k8s-diff-port-991309 kubelet[942]: E0617 12:15:27.520001     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-n2svp" podUID="5b637d97-3183-4324-98cf-dd69a2968578"
	Jun 17 12:15:41 default-k8s-diff-port-991309 kubelet[942]: E0617 12:15:41.520432     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-n2svp" podUID="5b637d97-3183-4324-98cf-dd69a2968578"
	
	
	==> storage-provisioner [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195] <==
	I0617 12:02:23.766191       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0617 12:02:23.788529       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0617 12:02:23.788663       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0617 12:02:41.197444       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0617 12:02:41.197882       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-991309_b740b017-e355-4f30-9689-7fc73a80f89b!
	I0617 12:02:41.198276       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c19e179d-dfa7-4034-ad1a-2148d11b33bc", APIVersion:"v1", ResourceVersion:"573", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-991309_b740b017-e355-4f30-9689-7fc73a80f89b became leader
	I0617 12:02:41.301257       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-991309_b740b017-e355-4f30-9689-7fc73a80f89b!
	
	
	==> storage-provisioner [e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc] <==
	I0617 12:02:23.045302       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0617 12:02:23.047867       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-991309 -n default-k8s-diff-port-991309
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-991309 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-n2svp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-991309 describe pod metrics-server-569cc877fc-n2svp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-991309 describe pod metrics-server-569cc877fc-n2svp: exit status 1 (63.229986ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-n2svp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-991309 describe pod metrics-server-569cc877fc-n2svp: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.11s)
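Note: the repeated metrics-server ImagePullBackOff entries in the kubelet log above are expected for this suite, since the addon was enabled with --images=MetricsServer=registry.k8s.io/echoserver:1.4 and --registries=MetricsServer=fake.domain, so the pull can never succeed. The post-mortem describe most likely returns NotFound because it queries the default namespace, while the kubelet log reports the pod as kube-system/metrics-server-569cc877fc-n2svp. A minimal manual check (a sketch only, reusing the profile, namespace, and pod name from the logs above) would be:

	kubectl --context default-k8s-diff-port-991309 -n kube-system get pods
	kubectl --context default-k8s-diff-port-991309 -n kube-system describe pod metrics-server-569cc877fc-n2svp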

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0617 12:08:57.398033  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-152830 -n no-preload-152830
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-06-17 12:17:05.694690029 +0000 UTC m=+5571.538248166
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
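The wait that times out here is for dashboard pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace. Roughly the equivalent check from the command line (a sketch only; the harness appears to poll through client-go, as the rate limiter error above suggests, and the profile name is taken from the log) is:

	kubectl --context no-preload-152830 --namespace=kubernetes-dashboard wait --for=condition=ready pod --selector=k8s-app=kubernetes-dashboard --timeout=9m0s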
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-152830 -n no-preload-152830
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-152830 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-152830 logs -n 25: (2.098418497s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-514753                              | cert-expiration-514753       | jenkins | v1.33.1 | 17 Jun 24 11:52 UTC | 17 Jun 24 11:52 UTC |
	| start   | -p embed-certs-136195                                  | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:52 UTC | 17 Jun 24 11:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-152830             | no-preload-152830            | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC | 17 Jun 24 11:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-152830                                   | no-preload-152830            | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-136195            | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC | 17 Jun 24 11:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-136195                                  | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC | 17 Jun 24 11:55 UTC |
	| start   | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:55 UTC |
	| delete  | -p                                                     | disable-driver-mounts-960277 | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:55 UTC |
	|         | disable-driver-mounts-960277                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:56 UTC |
	|         | default-k8s-diff-port-991309                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-152830                  | no-preload-152830            | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-152830                                   | no-preload-152830            | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC | 17 Jun 24 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-136195                 | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-003661        | old-k8s-version-003661       | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-136195                                  | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC | 17 Jun 24 12:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-991309  | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:57 UTC | 17 Jun 24 11:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:57 UTC |                     |
	|         | default-k8s-diff-port-991309                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-003661                              | old-k8s-version-003661       | jenkins | v1.33.1 | 17 Jun 24 11:58 UTC | 17 Jun 24 11:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-003661             | old-k8s-version-003661       | jenkins | v1.33.1 | 17 Jun 24 11:58 UTC | 17 Jun 24 11:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-003661                              | old-k8s-version-003661       | jenkins | v1.33.1 | 17 Jun 24 11:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-991309       | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:59 UTC | 17 Jun 24 12:06 UTC |
	|         | default-k8s-diff-port-991309                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/17 11:59:37
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0617 11:59:37.428028  166103 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:59:37.428266  166103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:59:37.428274  166103 out.go:304] Setting ErrFile to fd 2...
	I0617 11:59:37.428279  166103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:59:37.428472  166103 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:59:37.429026  166103 out.go:298] Setting JSON to false
	I0617 11:59:37.429968  166103 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6124,"bootTime":1718619453,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0617 11:59:37.430026  166103 start.go:139] virtualization: kvm guest
	I0617 11:59:37.432171  166103 out.go:177] * [default-k8s-diff-port-991309] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0617 11:59:37.433521  166103 out.go:177]   - MINIKUBE_LOCATION=19084
	I0617 11:59:37.433548  166103 notify.go:220] Checking for updates...
	I0617 11:59:37.434850  166103 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 11:59:37.436099  166103 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 11:59:37.437362  166103 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 11:59:37.438535  166103 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0617 11:59:37.439644  166103 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 11:59:37.441113  166103 config.go:182] Loaded profile config "default-k8s-diff-port-991309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:59:37.441563  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:59:37.441645  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:59:37.456875  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45565
	I0617 11:59:37.457306  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:59:37.457839  166103 main.go:141] libmachine: Using API Version  1
	I0617 11:59:37.457861  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:59:37.458188  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:59:37.458381  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 11:59:37.458626  166103 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 11:59:37.458927  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:59:37.458971  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:59:37.474024  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45165
	I0617 11:59:37.474411  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:59:37.474873  166103 main.go:141] libmachine: Using API Version  1
	I0617 11:59:37.474899  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:59:37.475199  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:59:37.475383  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 11:59:37.507955  166103 out.go:177] * Using the kvm2 driver based on existing profile
	I0617 11:59:37.509134  166103 start.go:297] selected driver: kvm2
	I0617 11:59:37.509148  166103 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-991309 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-991309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:59:37.509249  166103 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 11:59:37.509927  166103 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:59:37.510004  166103 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19084-112967/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0617 11:59:37.525340  166103 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0617 11:59:37.525701  166103 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 11:59:37.525761  166103 cni.go:84] Creating CNI manager for ""
	I0617 11:59:37.525779  166103 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 11:59:37.525812  166103 start.go:340] cluster config:
	{Name:default-k8s-diff-port-991309 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-991309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:59:37.525910  166103 iso.go:125] acquiring lock: {Name:mk4a199ad46ed9ee04de7b54caf7cc64218fe80c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:59:37.527756  166103 out.go:177] * Starting "default-k8s-diff-port-991309" primary control-plane node in "default-k8s-diff-port-991309" cluster
	I0617 11:59:36.391800  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 11:59:37.529104  166103 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 11:59:37.529159  166103 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0617 11:59:37.529171  166103 cache.go:56] Caching tarball of preloaded images
	I0617 11:59:37.529246  166103 preload.go:173] Found /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0617 11:59:37.529256  166103 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0617 11:59:37.529368  166103 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/config.json ...
	I0617 11:59:37.529565  166103 start.go:360] acquireMachinesLock for default-k8s-diff-port-991309: {Name:mk519b8956d160a9d2b042f25b899a5ee0efa72e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 11:59:42.471684  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 11:59:45.543735  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 11:59:51.623725  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 11:59:54.695811  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:00.775775  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:03.847736  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:09.927768  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:12.999728  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:19.079809  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:22.151737  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:28.231763  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:31.303775  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:37.383783  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:40.455809  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:46.535757  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:49.607769  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:55.687772  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:58.759722  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:01:04.839736  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:01:07.911780  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:01:10.916735  165060 start.go:364] duration metric: took 4m27.471308215s to acquireMachinesLock for "embed-certs-136195"
	I0617 12:01:10.916814  165060 start.go:96] Skipping create...Using existing machine configuration
	I0617 12:01:10.916827  165060 fix.go:54] fixHost starting: 
	I0617 12:01:10.917166  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:10.917203  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:10.932217  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43235
	I0617 12:01:10.932742  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:10.933241  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:10.933261  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:10.933561  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:10.933766  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:10.933939  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetState
	I0617 12:01:10.935452  165060 fix.go:112] recreateIfNeeded on embed-certs-136195: state=Stopped err=<nil>
	I0617 12:01:10.935660  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	W0617 12:01:10.935831  165060 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 12:01:10.937510  165060 out.go:177] * Restarting existing kvm2 VM for "embed-certs-136195" ...
	I0617 12:01:10.938708  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Start
	I0617 12:01:10.938873  165060 main.go:141] libmachine: (embed-certs-136195) Ensuring networks are active...
	I0617 12:01:10.939602  165060 main.go:141] libmachine: (embed-certs-136195) Ensuring network default is active
	I0617 12:01:10.939896  165060 main.go:141] libmachine: (embed-certs-136195) Ensuring network mk-embed-certs-136195 is active
	I0617 12:01:10.940260  165060 main.go:141] libmachine: (embed-certs-136195) Getting domain xml...
	I0617 12:01:10.940881  165060 main.go:141] libmachine: (embed-certs-136195) Creating domain...
	I0617 12:01:12.136267  165060 main.go:141] libmachine: (embed-certs-136195) Waiting to get IP...
	I0617 12:01:12.137303  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:12.137692  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:12.137777  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:12.137684  166451 retry.go:31] will retry after 261.567272ms: waiting for machine to come up
	I0617 12:01:12.401390  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:12.401845  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:12.401873  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:12.401816  166451 retry.go:31] will retry after 332.256849ms: waiting for machine to come up
	I0617 12:01:12.735421  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:12.735842  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:12.735872  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:12.735783  166451 retry.go:31] will retry after 457.313241ms: waiting for machine to come up
	I0617 12:01:13.194621  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:13.195073  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:13.195091  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:13.195036  166451 retry.go:31] will retry after 539.191177ms: waiting for machine to come up
	I0617 12:01:10.914315  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 12:01:10.914353  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetMachineName
	I0617 12:01:10.914690  164809 buildroot.go:166] provisioning hostname "no-preload-152830"
	I0617 12:01:10.914716  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetMachineName
	I0617 12:01:10.914905  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:01:10.916557  164809 machine.go:97] duration metric: took 4m37.418351206s to provisionDockerMachine
	I0617 12:01:10.916625  164809 fix.go:56] duration metric: took 4m37.438694299s for fixHost
	I0617 12:01:10.916634  164809 start.go:83] releasing machines lock for "no-preload-152830", held for 4m37.438726092s
	W0617 12:01:10.916653  164809 start.go:713] error starting host: provision: host is not running
	W0617 12:01:10.916750  164809 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0617 12:01:10.916763  164809 start.go:728] Will try again in 5 seconds ...
	I0617 12:01:13.735708  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:13.736155  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:13.736184  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:13.736096  166451 retry.go:31] will retry after 754.965394ms: waiting for machine to come up
	I0617 12:01:14.493211  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:14.493598  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:14.493628  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:14.493544  166451 retry.go:31] will retry after 786.125188ms: waiting for machine to come up
	I0617 12:01:15.281505  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:15.281975  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:15.282008  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:15.281939  166451 retry.go:31] will retry after 1.091514617s: waiting for machine to come up
	I0617 12:01:16.375391  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:16.375904  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:16.375935  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:16.375820  166451 retry.go:31] will retry after 1.34601641s: waiting for machine to come up
	I0617 12:01:17.724108  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:17.724453  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:17.724477  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:17.724418  166451 retry.go:31] will retry after 1.337616605s: waiting for machine to come up
	I0617 12:01:15.918256  164809 start.go:360] acquireMachinesLock for no-preload-152830: {Name:mk519b8956d160a9d2b042f25b899a5ee0efa72e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 12:01:19.063677  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:19.064210  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:19.064243  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:19.064144  166451 retry.go:31] will retry after 1.914267639s: waiting for machine to come up
	I0617 12:01:20.979644  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:20.980124  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:20.980150  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:20.980072  166451 retry.go:31] will retry after 2.343856865s: waiting for machine to come up
	I0617 12:01:23.326506  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:23.326878  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:23.326922  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:23.326861  166451 retry.go:31] will retry after 2.450231017s: waiting for machine to come up
	I0617 12:01:25.780501  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:25.780886  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:25.780913  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:25.780825  166451 retry.go:31] will retry after 3.591107926s: waiting for machine to come up
	I0617 12:01:30.728529  165698 start.go:364] duration metric: took 3m12.647041864s to acquireMachinesLock for "old-k8s-version-003661"
	I0617 12:01:30.728602  165698 start.go:96] Skipping create...Using existing machine configuration
	I0617 12:01:30.728613  165698 fix.go:54] fixHost starting: 
	I0617 12:01:30.729036  165698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:30.729090  165698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:30.746528  165698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35355
	I0617 12:01:30.746982  165698 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:30.747493  165698 main.go:141] libmachine: Using API Version  1
	I0617 12:01:30.747516  165698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:30.747847  165698 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:30.748060  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:30.748186  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetState
	I0617 12:01:30.750035  165698 fix.go:112] recreateIfNeeded on old-k8s-version-003661: state=Stopped err=<nil>
	I0617 12:01:30.750072  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	W0617 12:01:30.750206  165698 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 12:01:30.752196  165698 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-003661" ...
	I0617 12:01:29.375875  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.376372  165060 main.go:141] libmachine: (embed-certs-136195) Found IP for machine: 192.168.72.199
	I0617 12:01:29.376407  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has current primary IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.376430  165060 main.go:141] libmachine: (embed-certs-136195) Reserving static IP address...
	I0617 12:01:29.376754  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "embed-certs-136195", mac: "52:54:00:f2:27:84", ip: "192.168.72.199"} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.376788  165060 main.go:141] libmachine: (embed-certs-136195) Reserved static IP address: 192.168.72.199
	I0617 12:01:29.376800  165060 main.go:141] libmachine: (embed-certs-136195) DBG | skip adding static IP to network mk-embed-certs-136195 - found existing host DHCP lease matching {name: "embed-certs-136195", mac: "52:54:00:f2:27:84", ip: "192.168.72.199"}
	I0617 12:01:29.376811  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Getting to WaitForSSH function...
	I0617 12:01:29.376820  165060 main.go:141] libmachine: (embed-certs-136195) Waiting for SSH to be available...
	I0617 12:01:29.378811  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.379121  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.379151  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.379289  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Using SSH client type: external
	I0617 12:01:29.379321  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa (-rw-------)
	I0617 12:01:29.379354  165060 main.go:141] libmachine: (embed-certs-136195) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.199 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 12:01:29.379368  165060 main.go:141] libmachine: (embed-certs-136195) DBG | About to run SSH command:
	I0617 12:01:29.379381  165060 main.go:141] libmachine: (embed-certs-136195) DBG | exit 0
	I0617 12:01:29.503819  165060 main.go:141] libmachine: (embed-certs-136195) DBG | SSH cmd err, output: <nil>: 
	I0617 12:01:29.504207  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetConfigRaw
	I0617 12:01:29.504827  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetIP
	I0617 12:01:29.507277  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.507601  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.507635  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.507878  165060 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/config.json ...
	I0617 12:01:29.508102  165060 machine.go:94] provisionDockerMachine start ...
	I0617 12:01:29.508125  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:29.508333  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:29.510390  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.510636  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.510656  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.510761  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:29.510924  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.511082  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.511242  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:29.511404  165060 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:29.511665  165060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0617 12:01:29.511680  165060 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 12:01:29.611728  165060 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0617 12:01:29.611759  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetMachineName
	I0617 12:01:29.611996  165060 buildroot.go:166] provisioning hostname "embed-certs-136195"
	I0617 12:01:29.612025  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetMachineName
	I0617 12:01:29.612194  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:29.614719  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.615085  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.615110  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.615251  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:29.615425  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.615565  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.615685  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:29.615881  165060 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:29.616066  165060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0617 12:01:29.616084  165060 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-136195 && echo "embed-certs-136195" | sudo tee /etc/hostname
	I0617 12:01:29.729321  165060 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-136195
	
	I0617 12:01:29.729347  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:29.731968  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.732314  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.732352  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.732582  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:29.732820  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.733001  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.733157  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:29.733312  165060 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:29.733471  165060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0617 12:01:29.733487  165060 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-136195' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-136195/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-136195' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 12:01:29.840083  165060 main.go:141] libmachine: SSH cmd err, output: <nil>: 
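	[editor's note] The hostname probe and the idempotent /etc/hosts rewrite above are each a single shell command executed over SSH against the VM. A minimal, self-contained sketch of that pattern using golang.org/x/crypto/ssh (this is illustrative, not minikube's libmachine client; the address, user, key path and command below are placeholders mirroring the log):

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runSSH opens one session and runs one shell command, returning combined output.
	func runSSH(addr, user, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		// Placeholder values; the log above uses 192.168.72.199:22 and the profile's id_rsa.
		out, err := runSSH("192.168.72.199:22", "docker",
			os.ExpandEnv("$HOME/.minikube/machines/embed-certs-136195/id_rsa"),
			`sudo hostname embed-certs-136195 && echo "embed-certs-136195" | sudo tee /etc/hostname`)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Print(out)
	}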
	I0617 12:01:29.840110  165060 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 12:01:29.840145  165060 buildroot.go:174] setting up certificates
	I0617 12:01:29.840180  165060 provision.go:84] configureAuth start
	I0617 12:01:29.840199  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetMachineName
	I0617 12:01:29.840488  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetIP
	I0617 12:01:29.843096  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.843446  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.843487  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.843687  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:29.845627  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.845914  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.845940  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.846021  165060 provision.go:143] copyHostCerts
	I0617 12:01:29.846096  165060 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 12:01:29.846106  165060 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 12:01:29.846171  165060 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 12:01:29.846267  165060 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 12:01:29.846275  165060 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 12:01:29.846298  165060 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 12:01:29.846359  165060 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 12:01:29.846366  165060 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 12:01:29.846387  165060 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 12:01:29.846456  165060 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.embed-certs-136195 san=[127.0.0.1 192.168.72.199 embed-certs-136195 localhost minikube]
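	[editor's note] provision.go:117 above generates a server certificate signed by the machine CA, with the SANs listed in the log line. A rough sketch of that step with the standard crypto/x509 packages (the file paths, organization, 3-year lifetime and the assumption of a PKCS#1 "RSA PRIVATE KEY" CA key are all illustrative, not minikube's exact implementation):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func must(err error) {
		if err != nil {
			panic(err)
		}
	}

	func main() {
		// Placeholder paths for the files under the .minikube certs directory in the log.
		caCertPEM, err := os.ReadFile("ca.pem")
		must(err)
		caKeyPEM, err := os.ReadFile("ca-key.pem")
		must(err)

		caBlock, _ := pem.Decode(caCertPEM)
		caCert, err := x509.ParseCertificate(caBlock.Bytes)
		must(err)
		keyBlock, _ := pem.Decode(caKeyPEM)
		caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA/PKCS#1 CA key
		must(err)

		// Fresh key pair for the server certificate.
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		must(err)

		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-136195"}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs taken from the log line above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.199")},
			DNSNames:    []string{"embed-certs-136195", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &srvKey.PublicKey, caKey)
		must(err)
		must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
	}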
	I0617 12:01:30.076596  165060 provision.go:177] copyRemoteCerts
	I0617 12:01:30.076657  165060 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 12:01:30.076686  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.079269  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.079565  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.079588  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.079785  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.080016  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.080189  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.080316  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:30.161615  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 12:01:30.188790  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0617 12:01:30.215171  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0617 12:01:30.241310  165060 provision.go:87] duration metric: took 401.115469ms to configureAuth
	I0617 12:01:30.241332  165060 buildroot.go:189] setting minikube options for container-runtime
	I0617 12:01:30.241529  165060 config.go:182] Loaded profile config "embed-certs-136195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:01:30.241602  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.244123  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.244427  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.244459  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.244584  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.244793  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.244999  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.245174  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.245340  165060 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:30.245497  165060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0617 12:01:30.245512  165060 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 12:01:30.498156  165060 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 12:01:30.498189  165060 machine.go:97] duration metric: took 990.071076ms to provisionDockerMachine
	I0617 12:01:30.498201  165060 start.go:293] postStartSetup for "embed-certs-136195" (driver="kvm2")
	I0617 12:01:30.498214  165060 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 12:01:30.498238  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:30.498580  165060 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 12:01:30.498605  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.501527  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.501912  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.501941  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.502054  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.502257  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.502423  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.502578  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:30.583151  165060 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 12:01:30.587698  165060 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 12:01:30.587722  165060 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 12:01:30.587819  165060 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 12:01:30.587940  165060 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 12:01:30.588078  165060 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 12:01:30.598234  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:01:30.622580  165060 start.go:296] duration metric: took 124.363651ms for postStartSetup
	I0617 12:01:30.622621  165060 fix.go:56] duration metric: took 19.705796191s for fixHost
	I0617 12:01:30.622645  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.625226  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.625637  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.625684  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.625821  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.626040  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.626229  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.626418  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.626613  165060 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:30.626839  165060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0617 12:01:30.626862  165060 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0617 12:01:30.728365  165060 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718625690.704643527
	
	I0617 12:01:30.728389  165060 fix.go:216] guest clock: 1718625690.704643527
	I0617 12:01:30.728396  165060 fix.go:229] Guest: 2024-06-17 12:01:30.704643527 +0000 UTC Remote: 2024-06-17 12:01:30.622625631 +0000 UTC m=+287.310804086 (delta=82.017896ms)
	I0617 12:01:30.728416  165060 fix.go:200] guest clock delta is within tolerance: 82.017896ms
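	[editor's note] fix.go:216/229 compare the guest clock (parsed from the `date +%s.%N` output above) with the host clock and only resync when the delta exceeds a tolerance. A small sketch of that comparison; the 2s tolerance below is an assumption for illustration, not minikube's exact constant:

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns "1718625690.704643527" (date +%s.%N) into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1718625690.704643527")
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		tolerance := 2 * time.Second // illustrative threshold
		if math.Abs(float64(delta)) <= float64(tolerance) {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance; would resync\n", delta)
		}
	}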
	I0617 12:01:30.728421  165060 start.go:83] releasing machines lock for "embed-certs-136195", held for 19.811634749s
	I0617 12:01:30.728445  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:30.728763  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetIP
	I0617 12:01:30.731414  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.731784  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.731816  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.731937  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:30.732504  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:30.732704  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:30.732761  165060 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 12:01:30.732826  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.732964  165060 ssh_runner.go:195] Run: cat /version.json
	I0617 12:01:30.732991  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.735854  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.736049  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.736278  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.736310  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.736334  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.736397  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.736579  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.736653  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.736777  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.736959  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.736972  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.737131  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:30.737188  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.737356  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:30.844295  165060 ssh_runner.go:195] Run: systemctl --version
	I0617 12:01:30.851958  165060 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 12:01:31.000226  165060 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 12:01:31.008322  165060 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 12:01:31.008397  165060 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 12:01:31.029520  165060 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 12:01:31.029547  165060 start.go:494] detecting cgroup driver to use...
	I0617 12:01:31.029617  165060 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 12:01:31.045505  165060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 12:01:31.059851  165060 docker.go:217] disabling cri-docker service (if available) ...
	I0617 12:01:31.059920  165060 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 12:01:31.075011  165060 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 12:01:31.089705  165060 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 12:01:31.204300  165060 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 12:01:31.342204  165060 docker.go:233] disabling docker service ...
	I0617 12:01:31.342290  165060 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 12:01:31.356945  165060 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 12:01:31.369786  165060 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 12:01:31.505817  165060 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 12:01:31.631347  165060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 12:01:31.646048  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 12:01:31.664854  165060 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0617 12:01:31.664923  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.677595  165060 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 12:01:31.677678  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.690164  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.701482  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.712488  165060 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 12:01:31.723994  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.736805  165060 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.755001  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
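	[editor's note] The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroupfs cgroup manager, conmon cgroup, and the unprivileged-port sysctl. Assuming those keys already existed in the drop-in and every sed applied cleanly, the relevant fragment would end up roughly like this (an illustrative reconstruction, not a capture from the VM; the standard CRI-O section headers are shown only for context):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]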
	I0617 12:01:31.767226  165060 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 12:01:31.777894  165060 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 12:01:31.777954  165060 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 12:01:31.792644  165060 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 12:01:31.803267  165060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:01:31.920107  165060 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0617 12:01:32.067833  165060 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 12:01:32.067904  165060 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 12:01:32.072818  165060 start.go:562] Will wait 60s for crictl version
	I0617 12:01:32.072881  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:01:32.076782  165060 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 12:01:32.116635  165060 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 12:01:32.116709  165060 ssh_runner.go:195] Run: crio --version
	I0617 12:01:32.148094  165060 ssh_runner.go:195] Run: crio --version
	I0617 12:01:32.176924  165060 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0617 12:01:30.753437  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .Start
	I0617 12:01:30.753608  165698 main.go:141] libmachine: (old-k8s-version-003661) Ensuring networks are active...
	I0617 12:01:30.754272  165698 main.go:141] libmachine: (old-k8s-version-003661) Ensuring network default is active
	I0617 12:01:30.754600  165698 main.go:141] libmachine: (old-k8s-version-003661) Ensuring network mk-old-k8s-version-003661 is active
	I0617 12:01:30.754967  165698 main.go:141] libmachine: (old-k8s-version-003661) Getting domain xml...
	I0617 12:01:30.755739  165698 main.go:141] libmachine: (old-k8s-version-003661) Creating domain...
	I0617 12:01:32.029080  165698 main.go:141] libmachine: (old-k8s-version-003661) Waiting to get IP...
	I0617 12:01:32.029902  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:32.030401  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:32.030477  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:32.030384  166594 retry.go:31] will retry after 191.846663ms: waiting for machine to come up
	I0617 12:01:32.223912  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:32.224300  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:32.224328  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:32.224276  166594 retry.go:31] will retry after 341.806498ms: waiting for machine to come up
	I0617 12:01:32.568066  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:32.568648  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:32.568682  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:32.568575  166594 retry.go:31] will retry after 359.779948ms: waiting for machine to come up
	I0617 12:01:32.930210  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:32.930652  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:32.930675  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:32.930604  166594 retry.go:31] will retry after 548.549499ms: waiting for machine to come up
	I0617 12:01:32.178076  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetIP
	I0617 12:01:32.181127  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:32.181524  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:32.181553  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:32.181778  165060 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0617 12:01:32.186998  165060 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:01:32.203033  165060 kubeadm.go:877] updating cluster {Name:embed-certs-136195 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:embed-certs-136195 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.199 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 12:01:32.203142  165060 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 12:01:32.203183  165060 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:01:32.245712  165060 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0617 12:01:32.245796  165060 ssh_runner.go:195] Run: which lz4
	I0617 12:01:32.250113  165060 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0617 12:01:32.254486  165060 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0617 12:01:32.254511  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0617 12:01:33.480493  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:33.480965  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:33.481004  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:33.480931  166594 retry.go:31] will retry after 636.044066ms: waiting for machine to come up
	I0617 12:01:34.118880  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:34.119361  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:34.119394  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:34.119299  166594 retry.go:31] will retry after 637.085777ms: waiting for machine to come up
	I0617 12:01:34.757614  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:34.758097  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:34.758126  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:34.758051  166594 retry.go:31] will retry after 921.652093ms: waiting for machine to come up
	I0617 12:01:35.681846  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:35.682324  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:35.682351  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:35.682269  166594 retry.go:31] will retry after 1.1106801s: waiting for machine to come up
	I0617 12:01:36.794411  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:36.794845  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:36.794869  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:36.794793  166594 retry.go:31] will retry after 1.323395845s: waiting for machine to come up
	I0617 12:01:33.776867  165060 crio.go:462] duration metric: took 1.526763522s to copy over tarball
	I0617 12:01:33.776955  165060 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0617 12:01:35.994216  165060 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.217222149s)
	I0617 12:01:35.994246  165060 crio.go:469] duration metric: took 2.217348025s to extract the tarball
	I0617 12:01:35.994255  165060 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0617 12:01:36.034978  165060 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:01:36.087255  165060 crio.go:514] all images are preloaded for cri-o runtime.
	I0617 12:01:36.087281  165060 cache_images.go:84] Images are preloaded, skipping loading
	I0617 12:01:36.087291  165060 kubeadm.go:928] updating node { 192.168.72.199 8443 v1.30.1 crio true true} ...
	I0617 12:01:36.087447  165060 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-136195 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.199
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:embed-certs-136195 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 12:01:36.087551  165060 ssh_runner.go:195] Run: crio config
	I0617 12:01:36.130409  165060 cni.go:84] Creating CNI manager for ""
	I0617 12:01:36.130433  165060 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:01:36.130449  165060 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 12:01:36.130479  165060 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.199 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-136195 NodeName:embed-certs-136195 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.199"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.199 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0617 12:01:36.130633  165060 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.199
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-136195"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.199
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.199"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0617 12:01:36.130724  165060 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0617 12:01:36.141027  165060 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 12:01:36.141110  165060 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 12:01:36.150748  165060 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0617 12:01:36.167282  165060 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 12:01:36.183594  165060 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0617 12:01:36.202494  165060 ssh_runner.go:195] Run: grep 192.168.72.199	control-plane.minikube.internal$ /etc/hosts
	I0617 12:01:36.206515  165060 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.199	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:01:36.218598  165060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:01:36.344280  165060 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:01:36.361127  165060 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195 for IP: 192.168.72.199
	I0617 12:01:36.361152  165060 certs.go:194] generating shared ca certs ...
	I0617 12:01:36.361172  165060 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:01:36.361370  165060 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 12:01:36.361425  165060 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 12:01:36.361438  165060 certs.go:256] generating profile certs ...
	I0617 12:01:36.361557  165060 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/client.key
	I0617 12:01:36.361648  165060 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/apiserver.key.f7068429
	I0617 12:01:36.361696  165060 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/proxy-client.key
	I0617 12:01:36.361863  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 12:01:36.361913  165060 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 12:01:36.361925  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 12:01:36.361951  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 12:01:36.361984  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 12:01:36.362005  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 12:01:36.362041  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:01:36.362770  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 12:01:36.397257  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 12:01:36.422523  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 12:01:36.451342  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 12:01:36.485234  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0617 12:01:36.514351  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 12:01:36.544125  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 12:01:36.567574  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0617 12:01:36.590417  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 12:01:36.613174  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 12:01:36.636187  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 12:01:36.659365  165060 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 12:01:36.675981  165060 ssh_runner.go:195] Run: openssl version
	I0617 12:01:36.681694  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 12:01:36.692324  165060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 12:01:36.696871  165060 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 12:01:36.696938  165060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 12:01:36.702794  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
	I0617 12:01:36.713372  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 12:01:36.724054  165060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 12:01:36.728505  165060 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 12:01:36.728566  165060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 12:01:36.734082  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 12:01:36.744542  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 12:01:36.755445  165060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:36.759880  165060 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:36.759922  165060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:36.765367  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 12:01:36.776234  165060 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 12:01:36.780822  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 12:01:36.786895  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 12:01:36.793358  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 12:01:36.800187  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 12:01:36.806591  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 12:01:36.812681  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
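	[editor's note] The `openssl x509 ... -checkend 86400` runs above verify that each control-plane certificate remains valid for at least another 24 hours. An equivalent check in Go with crypto/x509 (the path below is a placeholder; this sketch only mirrors the idea of the openssl invocation):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		// Same 24h window as -checkend 86400; the path is illustrative.
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println("check failed:", err)
			return
		}
		if soon {
			fmt.Println("certificate expires within 24h; would regenerate")
		} else {
			fmt.Println("certificate is valid for at least another 24h")
		}
	}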
	I0617 12:01:36.818814  165060 kubeadm.go:391] StartCluster: {Name:embed-certs-136195 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.1 ClusterName:embed-certs-136195 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.199 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:01:36.818903  165060 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 12:01:36.818945  165060 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:01:36.861839  165060 cri.go:89] found id: ""
	I0617 12:01:36.861920  165060 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0617 12:01:36.873500  165060 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0617 12:01:36.873529  165060 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0617 12:01:36.873551  165060 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0617 12:01:36.873602  165060 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0617 12:01:36.884767  165060 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0617 12:01:36.886013  165060 kubeconfig.go:125] found "embed-certs-136195" server: "https://192.168.72.199:8443"
	I0617 12:01:36.888144  165060 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0617 12:01:36.899204  165060 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.199
	I0617 12:01:36.899248  165060 kubeadm.go:1154] stopping kube-system containers ...
	I0617 12:01:36.899263  165060 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0617 12:01:36.899325  165060 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:01:36.941699  165060 cri.go:89] found id: ""
	I0617 12:01:36.941782  165060 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0617 12:01:36.960397  165060 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:01:36.971254  165060 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:01:36.971276  165060 kubeadm.go:156] found existing configuration files:
	
	I0617 12:01:36.971333  165060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:01:36.981367  165060 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:01:36.981448  165060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:01:36.991878  165060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:01:37.001741  165060 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:01:37.001816  165060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:01:37.012170  165060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:01:37.021914  165060 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:01:37.021979  165060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:01:37.031866  165060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:01:37.041657  165060 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:01:37.041706  165060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 12:01:37.051440  165060 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:01:37.062543  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:37.175190  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:37.872053  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:38.085732  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:38.146895  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
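The five ssh_runner invocations above replay only the kubeadm init phases needed to bring an existing control plane back up, in a fixed order: certs, kubeconfig, kubelet-start, control-plane, etcd. A condensed stand-alone sketch of that sequence follows (illustrative Go, run locally; minikube itself executes these over SSH through ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

// replayInitPhases mirrors the kubeadm invocations in the log above: only the
// phases needed to restart an existing control plane are run, in order,
// against the generated kubeadm.yaml.
func replayInitPhases(kubeadmBin, kubeadmCfg string) error {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, args := range phases {
		cmd := exec.Command(kubeadmBin, append(args, "--config", kubeadmCfg)...)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("kubeadm %v failed: %v\n%s", args, err, out)
		}
	}
	return nil
}

func main() {
	// Paths taken from the log lines above.
	if err := replayInitPhases("/var/lib/minikube/binaries/v1.30.1/kubeadm", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Println(err)
	}
}

The ordering matters: the kubeconfig phase embeds the freshly generated certificates, and the kubelet must be running before the control-plane and etcd static-pod manifests it watches can take effect.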
	I0617 12:01:38.208633  165060 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:01:38.208898  165060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:01:38.119805  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:38.297858  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:38.297905  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:38.120293  166594 retry.go:31] will retry after 1.769592858s: waiting for machine to come up
	I0617 12:01:39.892495  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:39.893035  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:39.893065  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:39.892948  166594 retry.go:31] will retry after 1.954570801s: waiting for machine to come up
	I0617 12:01:41.849587  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:41.850111  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:41.850140  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:41.850067  166594 retry.go:31] will retry after 3.44879626s: waiting for machine to come up
	I0617 12:01:38.708936  165060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:01:39.209014  165060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:01:39.709765  165060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:01:39.728309  165060 api_server.go:72] duration metric: took 1.519672652s to wait for apiserver process to appear ...
	I0617 12:01:39.728342  165060 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:01:39.728369  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:01:42.756054  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:01:42.756089  165060 api_server.go:103] status: https://192.168.72.199:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:01:42.756105  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:01:42.797646  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:01:42.797689  165060 api_server.go:103] status: https://192.168.72.199:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:01:43.229201  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:01:43.233440  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:01:43.233467  165060 api_server.go:103] status: https://192.168.72.199:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:01:43.728490  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:01:43.741000  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:01:43.741037  165060 api_server.go:103] status: https://192.168.72.199:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:01:44.228634  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:01:44.232839  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 200:
	ok
	I0617 12:01:44.238582  165060 api_server.go:141] control plane version: v1.30.1
	I0617 12:01:44.238606  165060 api_server.go:131] duration metric: took 4.510256755s to wait for apiserver health ...
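The wait above tolerates 403 (the probe reaches /healthz as system:anonymous before the rbac/bootstrap-roles post-start hook finishes) and 500 (hooks still pending, as the [-] lines show) until a plain 200 "ok" comes back. A minimal Go sketch of that polling pattern, with timings and TLS handling chosen for illustration rather than taken from minikube's api_server.go:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz polls an apiserver /healthz endpoint until it returns 200 OK or
// the deadline expires. TLS verification is skipped because the probe runs
// before client certificates are wired up, which is also why the apiserver
// may answer 403 for the anonymous user at first.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "ok"
			}
			// 403 (anonymous user) and 500 (post-start hooks pending) are
			// expected while the control plane is still coming up.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
}

func main() {
	// Endpoint taken from the log above.
	if err := pollHealthz("https://192.168.72.199:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}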
	I0617 12:01:44.238615  165060 cni.go:84] Creating CNI manager for ""
	I0617 12:01:44.238622  165060 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:01:44.240569  165060 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0617 12:01:44.241963  165060 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0617 12:01:44.253143  165060 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
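The bridge CNI step just drops a conflist into /etc/cni/net.d for CRI-O to pick up. The following is a representative bridge configuration of that kind; the subnet and plugin options are assumptions for illustration, and the exact 496-byte file minikube generates may differ:

package main

import "os"

// A typical bridge CNI conflist of the kind referenced above. The values
// (subnet, plugin options) are assumptions for illustration; minikube
// generates its own 1-k8s.conflist.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	// Equivalent of the scp in the log: place the conflist where the kubelet
	// and CRI-O expect CNI configuration.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}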
	I0617 12:01:44.286772  165060 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:01:44.295697  165060 system_pods.go:59] 8 kube-system pods found
	I0617 12:01:44.295736  165060 system_pods.go:61] "coredns-7db6d8ff4d-9bbjg" [1ba0eee5-436e-4c83-b5ce-3c907d66b641] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0617 12:01:44.295744  165060 system_pods.go:61] "etcd-embed-certs-136195" [6dc81a80-c56b-4517-af82-c450cf9578f5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0617 12:01:44.295757  165060 system_pods.go:61] "kube-apiserver-embed-certs-136195" [bd61a715-2471-4dca-aa48-a157531ebd6b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0617 12:01:44.295763  165060 system_pods.go:61] "kube-controller-manager-embed-certs-136195" [194db4b0-75c2-4905-8e4d-813185497b51] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0617 12:01:44.295768  165060 system_pods.go:61] "kube-proxy-25d5n" [52b6d09a-899f-40c4-b1f3-7842ae755165] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0617 12:01:44.295774  165060 system_pods.go:61] "kube-scheduler-embed-certs-136195" [b04d3798-f465-4f82-9ec7-777ea62d5b94] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0617 12:01:44.295782  165060 system_pods.go:61] "metrics-server-569cc877fc-dmhfs" [31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:01:44.295788  165060 system_pods.go:61] "storage-provisioner" [4b04a38a-5006-4496-a24d-0940029193de] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0617 12:01:44.295797  165060 system_pods.go:74] duration metric: took 9.004741ms to wait for pod list to return data ...
	I0617 12:01:44.295811  165060 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:01:44.298934  165060 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:01:44.298968  165060 node_conditions.go:123] node cpu capacity is 2
	I0617 12:01:44.298989  165060 node_conditions.go:105] duration metric: took 3.172465ms to run NodePressure ...
	I0617 12:01:44.299027  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:44.565943  165060 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0617 12:01:44.570796  165060 kubeadm.go:733] kubelet initialised
	I0617 12:01:44.570825  165060 kubeadm.go:734] duration metric: took 4.851024ms waiting for restarted kubelet to initialise ...
	I0617 12:01:44.570836  165060 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:01:44.575565  165060 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:44.582180  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.582209  165060 pod_ready.go:81] duration metric: took 6.620747ms for pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:44.582221  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.582231  165060 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:44.586828  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "etcd-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.586850  165060 pod_ready.go:81] duration metric: took 4.61059ms for pod "etcd-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:44.586859  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "etcd-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.586866  165060 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:44.591162  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.591189  165060 pod_ready.go:81] duration metric: took 4.316651ms for pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:44.591197  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.591204  165060 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:44.690269  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.690301  165060 pod_ready.go:81] duration metric: took 99.088803ms for pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:44.690310  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.690317  165060 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-25d5n" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:45.089616  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "kube-proxy-25d5n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.089640  165060 pod_ready.go:81] duration metric: took 399.31511ms for pod "kube-proxy-25d5n" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:45.089649  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "kube-proxy-25d5n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.089656  165060 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:45.491031  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.491058  165060 pod_ready.go:81] duration metric: took 401.395966ms for pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:45.491068  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.491074  165060 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:45.890606  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.890633  165060 pod_ready.go:81] duration metric: took 399.550946ms for pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:45.890644  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.890650  165060 pod_ready.go:38] duration metric: took 1.319802914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
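Each pod_ready.go wait reduces to polling the pod's Ready condition, and every one of them is cut short here because the node itself still reports Ready=False. An equivalent check written against client-go, assuming nothing more than a kubeconfig on disk (a sketch, not minikube's pod_ready helpers):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the named pod reports the Ready condition, the
// same condition the pod_ready.go waits above are checking for.
func waitPodReady(ctx context.Context, kubeconfig, namespace, name string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pod %s/%s not Ready: %w", namespace, name, ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	// Kubeconfig path and pod name taken from the log above.
	err := waitPodReady(ctx, "/home/jenkins/minikube-integration/19084-112967/kubeconfig",
		"kube-system", "coredns-7db6d8ff4d-9bbjg")
	fmt.Println(err)
}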
	I0617 12:01:45.890669  165060 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0617 12:01:45.903900  165060 ops.go:34] apiserver oom_adj: -16
	I0617 12:01:45.903936  165060 kubeadm.go:591] duration metric: took 9.03037731s to restartPrimaryControlPlane
	I0617 12:01:45.903950  165060 kubeadm.go:393] duration metric: took 9.085142288s to StartCluster
	I0617 12:01:45.903974  165060 settings.go:142] acquiring lock: {Name:mkf6da6d5dcdf32cef469c2b75da17d11fa1e39e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:01:45.904063  165060 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 12:01:45.905636  165060 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/kubeconfig: {Name:mkf81bd1831c0194f784e5c176b265c5061bea5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:01:45.905908  165060 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.199 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 12:01:45.907817  165060 out.go:177] * Verifying Kubernetes components...
	I0617 12:01:45.905981  165060 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0617 12:01:45.907852  165060 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-136195"
	I0617 12:01:45.907880  165060 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-136195"
	W0617 12:01:45.907890  165060 addons.go:243] addon storage-provisioner should already be in state true
	I0617 12:01:45.907903  165060 addons.go:69] Setting default-storageclass=true in profile "embed-certs-136195"
	I0617 12:01:45.906085  165060 config.go:182] Loaded profile config "embed-certs-136195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:01:45.909296  165060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:01:45.907923  165060 host.go:66] Checking if "embed-certs-136195" exists ...
	I0617 12:01:45.907924  165060 addons.go:69] Setting metrics-server=true in profile "embed-certs-136195"
	I0617 12:01:45.909472  165060 addons.go:234] Setting addon metrics-server=true in "embed-certs-136195"
	W0617 12:01:45.909481  165060 addons.go:243] addon metrics-server should already be in state true
	I0617 12:01:45.909506  165060 host.go:66] Checking if "embed-certs-136195" exists ...
	I0617 12:01:45.907954  165060 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-136195"
	I0617 12:01:45.909776  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.909822  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.909836  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.909861  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.909841  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.909928  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.925250  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36545
	I0617 12:01:45.925500  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38767
	I0617 12:01:45.925708  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.925929  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.926262  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.926282  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.926420  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.926445  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.926637  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.926728  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.927142  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.927171  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.927206  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.927236  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.929198  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33863
	I0617 12:01:45.929658  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.930137  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.930159  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.930465  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.930661  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetState
	I0617 12:01:45.934085  165060 addons.go:234] Setting addon default-storageclass=true in "embed-certs-136195"
	W0617 12:01:45.934107  165060 addons.go:243] addon default-storageclass should already be in state true
	I0617 12:01:45.934139  165060 host.go:66] Checking if "embed-certs-136195" exists ...
	I0617 12:01:45.934534  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.934579  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.944472  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44051
	I0617 12:01:45.945034  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.945712  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.945741  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.946105  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.946343  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetState
	I0617 12:01:45.946673  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43225
	I0617 12:01:45.947007  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.947706  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.947725  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.948027  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.948228  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetState
	I0617 12:01:45.948359  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:45.950451  165060 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0617 12:01:45.951705  165060 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0617 12:01:45.951719  165060 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0617 12:01:45.951735  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:45.949626  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:45.951588  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43695
	I0617 12:01:45.953222  165060 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:01:45.954471  165060 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:01:45.952290  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.954494  165060 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0617 12:01:45.954514  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:45.955079  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.955098  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.955123  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.955478  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.955718  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:45.955757  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.955924  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:45.956099  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.956106  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:45.956147  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.956374  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:45.956507  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:45.957756  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.958184  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:45.958206  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.958335  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:45.958505  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:45.958680  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:45.958825  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:45.977247  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39751
	I0617 12:01:45.977663  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.978179  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.978203  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.978524  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.978711  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetState
	I0617 12:01:45.980425  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:45.980601  165060 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0617 12:01:45.980616  165060 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0617 12:01:45.980630  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:45.983633  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.984088  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:45.984105  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.984258  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:45.984377  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:45.984505  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:45.984661  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
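The sshutil.go lines record the SSH sessions used to run all of the commands above on the VM: key-based auth as the docker user with the machine's id_rsa. A stand-alone equivalent using golang.org/x/crypto/ssh is sketched below; it is not minikube's sshutil package, and the command run in main is only an example:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH connects to a minikube VM the same way the "new ssh client"
// lines above do (key-based auth) and runs a single command.
func runOverSSH(addr, user, keyPath, command string) (string, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VMs use throwaway host keys
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(command)
	return string(out), err
}

func main() {
	// Address and key path taken from the log above; the command is illustrative.
	out, err := runOverSSH("192.168.72.199:22", "docker",
		"/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa",
		"sudo systemctl is-active kubelet")
	fmt.Println(out, err)
}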
	I0617 12:01:46.093292  165060 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:01:46.112779  165060 node_ready.go:35] waiting up to 6m0s for node "embed-certs-136195" to be "Ready" ...
	I0617 12:01:46.182239  165060 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0617 12:01:46.248534  165060 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:01:46.286637  165060 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0617 12:01:46.286662  165060 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0617 12:01:46.313951  165060 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0617 12:01:46.313981  165060 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0617 12:01:46.337155  165060 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:01:46.337186  165060 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0617 12:01:46.389025  165060 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:01:46.548086  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:46.548106  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:46.548442  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:46.548461  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:46.548471  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:46.548481  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:46.548485  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:46.548727  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:46.548744  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:46.548764  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:46.554199  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:46.554218  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:46.554454  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:46.554469  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:46.554480  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:47.142290  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:47.142321  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:47.142629  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:47.142658  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:47.142671  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:47.142676  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:47.142692  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:47.142943  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:47.142971  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:47.142985  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:47.216339  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:47.216366  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:47.216658  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:47.216679  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:47.216690  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:47.216700  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:47.216709  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:47.216931  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:47.216967  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:47.216982  165060 addons.go:475] Verifying addon metrics-server=true in "embed-certs-136195"
	I0617 12:01:47.219627  165060 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0617 12:01:45.300413  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:45.300848  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:45.300878  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:45.300794  166594 retry.go:31] will retry after 3.892148485s: waiting for machine to come up
	I0617 12:01:47.220905  165060 addons.go:510] duration metric: took 1.314925386s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0617 12:01:48.116197  165060 node_ready.go:53] node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:50.500448  166103 start.go:364] duration metric: took 2m12.970832528s to acquireMachinesLock for "default-k8s-diff-port-991309"
	I0617 12:01:50.500511  166103 start.go:96] Skipping create...Using existing machine configuration
	I0617 12:01:50.500534  166103 fix.go:54] fixHost starting: 
	I0617 12:01:50.500980  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:50.501018  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:50.517593  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43641
	I0617 12:01:50.518035  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:50.518600  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:01:50.518635  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:50.519051  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:50.519296  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:01:50.519502  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetState
	I0617 12:01:50.521095  166103 fix.go:112] recreateIfNeeded on default-k8s-diff-port-991309: state=Stopped err=<nil>
	I0617 12:01:50.521123  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	W0617 12:01:50.521307  166103 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 12:01:50.522795  166103 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-991309" ...
	I0617 12:01:49.197189  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.197671  165698 main.go:141] libmachine: (old-k8s-version-003661) Found IP for machine: 192.168.61.164
	I0617 12:01:49.197697  165698 main.go:141] libmachine: (old-k8s-version-003661) Reserving static IP address...
	I0617 12:01:49.197714  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has current primary IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.198147  165698 main.go:141] libmachine: (old-k8s-version-003661) Reserved static IP address: 192.168.61.164
	I0617 12:01:49.198175  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "old-k8s-version-003661", mac: "52:54:00:76:66:a0", ip: "192.168.61.164"} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.198185  165698 main.go:141] libmachine: (old-k8s-version-003661) Waiting for SSH to be available...
	I0617 12:01:49.198217  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | skip adding static IP to network mk-old-k8s-version-003661 - found existing host DHCP lease matching {name: "old-k8s-version-003661", mac: "52:54:00:76:66:a0", ip: "192.168.61.164"}
	I0617 12:01:49.198227  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | Getting to WaitForSSH function...
	I0617 12:01:49.200478  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.200907  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.200935  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.201088  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | Using SSH client type: external
	I0617 12:01:49.201116  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa (-rw-------)
	I0617 12:01:49.201154  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 12:01:49.201169  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | About to run SSH command:
	I0617 12:01:49.201183  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | exit 0
	I0617 12:01:49.323763  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | SSH cmd err, output: <nil>: 
	I0617 12:01:49.324127  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetConfigRaw
	I0617 12:01:49.324835  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetIP
	I0617 12:01:49.327217  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.327628  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.327660  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.327891  165698 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/config.json ...
	I0617 12:01:49.328097  165698 machine.go:94] provisionDockerMachine start ...
	I0617 12:01:49.328120  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:49.328365  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:49.330587  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.330992  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.331033  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.331160  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:49.331324  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.331490  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.331637  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:49.331824  165698 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:49.332037  165698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 12:01:49.332049  165698 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 12:01:49.432170  165698 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0617 12:01:49.432201  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetMachineName
	I0617 12:01:49.432498  165698 buildroot.go:166] provisioning hostname "old-k8s-version-003661"
	I0617 12:01:49.432524  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetMachineName
	I0617 12:01:49.432730  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:49.435845  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.436276  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.436317  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.436507  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:49.436708  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.436909  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.437074  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:49.437289  165698 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:49.437496  165698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 12:01:49.437510  165698 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-003661 && echo "old-k8s-version-003661" | sudo tee /etc/hostname
	I0617 12:01:49.550158  165698 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-003661
	
	I0617 12:01:49.550187  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:49.553141  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.553509  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.553539  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.553737  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:49.553943  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.554141  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.554298  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:49.554520  165698 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:49.554759  165698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 12:01:49.554787  165698 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-003661' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-003661/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-003661' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 12:01:49.661049  165698 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 12:01:49.661079  165698 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 12:01:49.661106  165698 buildroot.go:174] setting up certificates
	I0617 12:01:49.661115  165698 provision.go:84] configureAuth start
	I0617 12:01:49.661124  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetMachineName
	I0617 12:01:49.661452  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetIP
	I0617 12:01:49.664166  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.664561  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.664591  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.664723  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:49.666845  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.667114  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.667158  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.667287  165698 provision.go:143] copyHostCerts
	I0617 12:01:49.667377  165698 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 12:01:49.667387  165698 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 12:01:49.667440  165698 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 12:01:49.667561  165698 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 12:01:49.667571  165698 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 12:01:49.667594  165698 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 12:01:49.667649  165698 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 12:01:49.667656  165698 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 12:01:49.667674  165698 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 12:01:49.667722  165698 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-003661 san=[127.0.0.1 192.168.61.164 localhost minikube old-k8s-version-003661]
	I0617 12:01:49.853671  165698 provision.go:177] copyRemoteCerts
	I0617 12:01:49.853736  165698 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 12:01:49.853767  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:49.856171  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.856540  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.856577  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.856737  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:49.857071  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.857220  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:49.857360  165698 sshutil.go:53] new ssh client: &{IP:192.168.61.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa Username:docker}
	I0617 12:01:49.938626  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 12:01:49.964401  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0617 12:01:49.988397  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0617 12:01:50.013356  165698 provision.go:87] duration metric: took 352.227211ms to configureAuth
	I0617 12:01:50.013382  165698 buildroot.go:189] setting minikube options for container-runtime
	I0617 12:01:50.013581  165698 config.go:182] Loaded profile config "old-k8s-version-003661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0617 12:01:50.013689  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:50.016168  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.016514  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.016548  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.016657  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:50.016847  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.017025  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.017152  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:50.017300  165698 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:50.017483  165698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 12:01:50.017505  165698 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 12:01:50.280037  165698 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 12:01:50.280065  165698 machine.go:97] duration metric: took 951.954687ms to provisionDockerMachine
	I0617 12:01:50.280076  165698 start.go:293] postStartSetup for "old-k8s-version-003661" (driver="kvm2")
	I0617 12:01:50.280086  165698 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 12:01:50.280102  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:50.280467  165698 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 12:01:50.280506  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:50.283318  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.283657  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.283684  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.283874  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:50.284106  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.284279  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:50.284402  165698 sshutil.go:53] new ssh client: &{IP:192.168.61.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa Username:docker}
	I0617 12:01:50.362452  165698 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 12:01:50.366699  165698 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 12:01:50.366726  165698 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 12:01:50.366788  165698 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 12:01:50.366878  165698 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 12:01:50.367004  165698 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 12:01:50.376706  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:01:50.399521  165698 start.go:296] duration metric: took 119.43167ms for postStartSetup
	I0617 12:01:50.399558  165698 fix.go:56] duration metric: took 19.670946478s for fixHost
	I0617 12:01:50.399578  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:50.402079  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.402465  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.402500  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.402649  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:50.402835  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.402994  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.403138  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:50.403321  165698 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:50.403529  165698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 12:01:50.403541  165698 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0617 12:01:50.500267  165698 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718625710.471154465
	
	I0617 12:01:50.500294  165698 fix.go:216] guest clock: 1718625710.471154465
	I0617 12:01:50.500304  165698 fix.go:229] Guest: 2024-06-17 12:01:50.471154465 +0000 UTC Remote: 2024-06-17 12:01:50.399561534 +0000 UTC m=+212.458541959 (delta=71.592931ms)
	I0617 12:01:50.500350  165698 fix.go:200] guest clock delta is within tolerance: 71.592931ms
	I0617 12:01:50.500355  165698 start.go:83] releasing machines lock for "old-k8s-version-003661", held for 19.771784344s
	I0617 12:01:50.500380  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:50.500648  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetIP
	I0617 12:01:50.503346  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.503749  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.503776  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.503974  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:50.504536  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:50.504676  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:50.504750  165698 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 12:01:50.504801  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:50.504861  165698 ssh_runner.go:195] Run: cat /version.json
	I0617 12:01:50.504890  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:50.507577  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.507736  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.508013  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.508041  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.508176  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.508200  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.508205  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:50.508335  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:50.508419  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.508499  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.508580  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:50.508691  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:50.508717  165698 sshutil.go:53] new ssh client: &{IP:192.168.61.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa Username:docker}
	I0617 12:01:50.508830  165698 sshutil.go:53] new ssh client: &{IP:192.168.61.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa Username:docker}
	I0617 12:01:50.585030  165698 ssh_runner.go:195] Run: systemctl --version
	I0617 12:01:50.612492  165698 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 12:01:50.765842  165698 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 12:01:50.773214  165698 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 12:01:50.773288  165698 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 12:01:50.793397  165698 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 12:01:50.793424  165698 start.go:494] detecting cgroup driver to use...
	I0617 12:01:50.793499  165698 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 12:01:50.811531  165698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 12:01:50.826223  165698 docker.go:217] disabling cri-docker service (if available) ...
	I0617 12:01:50.826289  165698 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 12:01:50.840517  165698 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 12:01:50.854788  165698 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 12:01:50.970328  165698 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 12:01:51.125815  165698 docker.go:233] disabling docker service ...
	I0617 12:01:51.125893  165698 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 12:01:51.146368  165698 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 12:01:51.161459  165698 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 12:01:51.346032  165698 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 12:01:51.503395  165698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 12:01:51.521021  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 12:01:51.543851  165698 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0617 12:01:51.543905  165698 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:51.556230  165698 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 12:01:51.556309  165698 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:51.573061  165698 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:51.588663  165698 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:51.601086  165698 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 12:01:51.617347  165698 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 12:01:51.634502  165698 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 12:01:51.634635  165698 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 12:01:51.652813  165698 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 12:01:51.665145  165698 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:01:51.826713  165698 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0617 12:01:51.981094  165698 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 12:01:51.981186  165698 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 12:01:51.986026  165698 start.go:562] Will wait 60s for crictl version
	I0617 12:01:51.986091  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:51.990253  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 12:01:52.032543  165698 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 12:01:52.032631  165698 ssh_runner.go:195] Run: crio --version
	I0617 12:01:52.063904  165698 ssh_runner.go:195] Run: crio --version
	I0617 12:01:52.097158  165698 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0617 12:01:50.524130  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Start
	I0617 12:01:50.524321  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Ensuring networks are active...
	I0617 12:01:50.524939  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Ensuring network default is active
	I0617 12:01:50.525300  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Ensuring network mk-default-k8s-diff-port-991309 is active
	I0617 12:01:50.527342  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Getting domain xml...
	I0617 12:01:50.528126  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Creating domain...
	I0617 12:01:51.864887  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting to get IP...
	I0617 12:01:51.865835  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:51.866246  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:51.866328  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:51.866228  166802 retry.go:31] will retry after 200.163407ms: waiting for machine to come up
	I0617 12:01:52.067708  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.068164  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.068193  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:52.068119  166802 retry.go:31] will retry after 364.503903ms: waiting for machine to come up
	I0617 12:01:52.098675  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetIP
	I0617 12:01:52.102187  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:52.102572  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:52.102603  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:52.102823  165698 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0617 12:01:52.107573  165698 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:01:52.121312  165698 kubeadm.go:877] updating cluster {Name:old-k8s-version-003661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-003661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.164 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 12:01:52.121448  165698 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0617 12:01:52.121515  165698 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:01:52.181796  165698 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0617 12:01:52.181891  165698 ssh_runner.go:195] Run: which lz4
	I0617 12:01:52.186827  165698 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0617 12:01:52.191806  165698 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0617 12:01:52.191875  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0617 12:01:50.116573  165060 node_ready.go:53] node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:52.122162  165060 node_ready.go:53] node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:53.117556  165060 node_ready.go:49] node "embed-certs-136195" has status "Ready":"True"
	I0617 12:01:53.117589  165060 node_ready.go:38] duration metric: took 7.004769746s for node "embed-certs-136195" to be "Ready" ...
	I0617 12:01:53.117598  165060 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:01:53.125606  165060 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:53.131618  165060 pod_ready.go:92] pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:53.131643  165060 pod_ready.go:81] duration metric: took 6.000929ms for pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:53.131654  165060 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:52.434791  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.435584  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.435740  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:52.435665  166802 retry.go:31] will retry after 486.514518ms: waiting for machine to come up
	I0617 12:01:52.924190  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.924819  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.924845  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:52.924681  166802 retry.go:31] will retry after 520.971301ms: waiting for machine to come up
	I0617 12:01:53.447437  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:53.447965  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:53.447995  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:53.447919  166802 retry.go:31] will retry after 622.761044ms: waiting for machine to come up
	I0617 12:01:54.072700  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:54.073170  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:54.073202  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:54.073112  166802 retry.go:31] will retry after 671.940079ms: waiting for machine to come up
	I0617 12:01:54.746830  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:54.747342  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:54.747372  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:54.747310  166802 retry.go:31] will retry after 734.856022ms: waiting for machine to come up
	I0617 12:01:55.484571  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:55.485127  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:55.485157  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:55.485066  166802 retry.go:31] will retry after 1.198669701s: waiting for machine to come up
	I0617 12:01:56.685201  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:56.685468  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:56.685493  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:56.685440  166802 retry.go:31] will retry after 1.562509853s: waiting for machine to come up
	I0617 12:01:54.026903  165698 crio.go:462] duration metric: took 1.840117639s to copy over tarball
	I0617 12:01:54.027003  165698 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0617 12:01:57.049870  165698 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.022814584s)
	I0617 12:01:57.049904  165698 crio.go:469] duration metric: took 3.022967677s to extract the tarball
	I0617 12:01:57.049914  165698 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0617 12:01:57.094589  165698 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:01:57.133299  165698 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0617 12:01:57.133331  165698 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0617 12:01:57.133431  165698 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:01:57.133451  165698 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0617 12:01:57.133456  165698 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0617 12:01:57.133477  165698 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0617 12:01:57.133431  165698 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0617 12:01:57.133530  165698 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0617 12:01:57.133431  165698 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 12:01:57.133626  165698 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0617 12:01:57.135979  165698 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 12:01:57.135990  165698 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0617 12:01:57.135994  165698 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0617 12:01:57.135979  165698 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0617 12:01:57.135985  165698 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:01:57.135979  165698 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0617 12:01:57.136041  165698 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0617 12:01:57.136041  165698 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0617 12:01:57.289271  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0617 12:01:57.299061  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 12:01:57.322581  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0617 12:01:57.336462  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0617 12:01:57.337619  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0617 12:01:57.350335  165698 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0617 12:01:57.350395  165698 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0617 12:01:57.350448  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.357972  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0617 12:01:57.391517  165698 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0617 12:01:57.391563  165698 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 12:01:57.391640  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.419438  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0617 12:01:57.442111  165698 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0617 12:01:57.442154  165698 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0617 12:01:57.442200  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.450145  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:01:57.485873  165698 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0617 12:01:57.485922  165698 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0617 12:01:57.485942  165698 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0617 12:01:57.485957  165698 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0617 12:01:57.485996  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.486003  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.486053  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0617 12:01:57.490584  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 12:01:57.490669  165698 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0617 12:01:57.490714  165698 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0617 12:01:57.490755  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.551564  165698 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0617 12:01:57.551597  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0617 12:01:57.551619  165698 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0617 12:01:57.551662  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.660683  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0617 12:01:57.660732  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0617 12:01:57.660799  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0617 12:01:57.660856  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0617 12:01:57.660734  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0617 12:01:57.660903  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0617 12:01:57.660930  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0617 12:01:57.753965  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0617 12:01:57.753981  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0617 12:01:57.754069  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0617 12:01:57.754069  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0617 12:01:57.754146  165698 cache_images.go:92] duration metric: took 620.797178ms to LoadCachedImages
	W0617 12:01:57.754271  165698 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0617 12:01:57.754292  165698 kubeadm.go:928] updating node { 192.168.61.164 8443 v1.20.0 crio true true} ...
	I0617 12:01:57.754415  165698 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-003661 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-003661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 12:01:57.754489  165698 ssh_runner.go:195] Run: crio config
	I0617 12:01:57.807120  165698 cni.go:84] Creating CNI manager for ""
	I0617 12:01:57.807144  165698 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:01:57.807158  165698 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 12:01:57.807182  165698 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.164 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-003661 NodeName:old-k8s-version-003661 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0617 12:01:57.807370  165698 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.164
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-003661"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.164
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.164"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0617 12:01:57.807437  165698 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0617 12:01:57.817865  165698 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 12:01:57.817940  165698 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 12:01:57.829796  165698 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0617 12:01:57.847758  165698 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 12:01:57.866182  165698 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0617 12:01:57.884500  165698 ssh_runner.go:195] Run: grep 192.168.61.164	control-plane.minikube.internal$ /etc/hosts
	I0617 12:01:57.888852  165698 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.164	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:01:57.902176  165698 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:01:55.138418  165060 pod_ready.go:102] pod "etcd-embed-certs-136195" in "kube-system" namespace has status "Ready":"False"
	I0617 12:01:55.641014  165060 pod_ready.go:92] pod "etcd-embed-certs-136195" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:55.641047  165060 pod_ready.go:81] duration metric: took 2.509383461s for pod "etcd-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:55.641061  165060 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.151759  165060 pod_ready.go:92] pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:56.151788  165060 pod_ready.go:81] duration metric: took 510.718192ms for pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.152027  165060 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.157234  165060 pod_ready.go:92] pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:56.157260  165060 pod_ready.go:81] duration metric: took 5.220069ms for pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.157273  165060 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-25d5n" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.161767  165060 pod_ready.go:92] pod "kube-proxy-25d5n" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:56.161787  165060 pod_ready.go:81] duration metric: took 4.50732ms for pod "kube-proxy-25d5n" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.161796  165060 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.717763  165060 pod_ready.go:92] pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:56.717865  165060 pod_ready.go:81] duration metric: took 556.058292ms for pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.717892  165060 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:58.249594  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:58.250033  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:58.250069  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:58.250019  166802 retry.go:31] will retry after 2.154567648s: waiting for machine to come up
	I0617 12:02:00.406269  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:00.406668  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:02:00.406702  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:02:00.406615  166802 retry.go:31] will retry after 2.065044206s: waiting for machine to come up
	I0617 12:01:58.049361  165698 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:01:58.067893  165698 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661 for IP: 192.168.61.164
	I0617 12:01:58.067924  165698 certs.go:194] generating shared ca certs ...
	I0617 12:01:58.067945  165698 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:01:58.068162  165698 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 12:01:58.068221  165698 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 12:01:58.068236  165698 certs.go:256] generating profile certs ...
	I0617 12:01:58.068352  165698 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/client.key
	I0617 12:01:58.068438  165698 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/apiserver.key.6c1f259c
	I0617 12:01:58.068493  165698 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/proxy-client.key
	I0617 12:01:58.068647  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 12:01:58.068690  165698 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 12:01:58.068704  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 12:01:58.068743  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 12:01:58.068790  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 12:01:58.068824  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 12:01:58.068877  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:01:58.069548  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 12:01:58.109048  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 12:01:58.134825  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 12:01:58.159910  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 12:01:58.191108  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0617 12:01:58.217407  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 12:01:58.242626  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 12:01:58.267261  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0617 12:01:58.291562  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 12:01:58.321848  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 12:01:58.352361  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 12:01:58.379343  165698 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 12:01:58.399146  165698 ssh_runner.go:195] Run: openssl version
	I0617 12:01:58.405081  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 12:01:58.415471  165698 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:58.420046  165698 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:58.420099  165698 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:58.425886  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 12:01:58.436575  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 12:01:58.447166  165698 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 12:01:58.451523  165698 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 12:01:58.451582  165698 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 12:01:58.457670  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
	I0617 12:01:58.468667  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 12:01:58.479095  165698 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 12:01:58.483744  165698 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 12:01:58.483796  165698 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 12:01:58.489520  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
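The three blocks above install each CA bundle under /usr/share/ca-certificates and then link it into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0). A minimal sketch of the same convention, assuming a certificate file named ca.pem (a placeholder, not one of minikube's real paths):

    # Compute the subject hash OpenSSL uses when scanning a CA directory
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/ca.pem)
    # Link the cert under <hash>.0 so TLS libraries that read /etc/ssl/certs can find it
    sudo ln -fs /usr/share/ca-certificates/ca.pem "/etc/ssl/certs/${HASH}.0"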
	I0617 12:01:58.500298  165698 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 12:01:58.504859  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 12:01:58.510619  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 12:01:58.516819  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 12:01:58.522837  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 12:01:58.528736  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 12:01:58.534585  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
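The `-checkend 86400` runs above verify that none of the control-plane certificates expires within the next 24 hours (86,400 seconds); a non-zero exit would trigger regeneration. A standalone sketch of the same check, with the certificate path as an example:

    # openssl exits 0 if the cert is still valid 24h from now, non-zero otherwise
    if ! sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
      echo "apiserver.crt expires within 24h; it should be regenerated"
    fi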
	I0617 12:01:58.540464  165698 kubeadm.go:391] StartCluster: {Name:old-k8s-version-003661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-003661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.164 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:01:58.540549  165698 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 12:01:58.540624  165698 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:01:58.583638  165698 cri.go:89] found id: ""
	I0617 12:01:58.583724  165698 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0617 12:01:58.594266  165698 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0617 12:01:58.594290  165698 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0617 12:01:58.594295  165698 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0617 12:01:58.594354  165698 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0617 12:01:58.604415  165698 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0617 12:01:58.605367  165698 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-003661" does not appear in /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 12:01:58.605949  165698 kubeconfig.go:62] /home/jenkins/minikube-integration/19084-112967/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-003661" cluster setting kubeconfig missing "old-k8s-version-003661" context setting]
	I0617 12:01:58.606833  165698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/kubeconfig: {Name:mkf81bd1831c0194f784e5c176b265c5061bea5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:01:58.662621  165698 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0617 12:01:58.673813  165698 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.164
	I0617 12:01:58.673848  165698 kubeadm.go:1154] stopping kube-system containers ...
	I0617 12:01:58.673863  165698 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0617 12:01:58.673907  165698 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:01:58.712607  165698 cri.go:89] found id: ""
	I0617 12:01:58.712703  165698 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0617 12:01:58.731676  165698 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:01:58.741645  165698 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:01:58.741666  165698 kubeadm.go:156] found existing configuration files:
	
	I0617 12:01:58.741709  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:01:58.750871  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:01:58.750931  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:01:58.760545  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:01:58.769701  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:01:58.769776  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:01:58.779348  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:01:58.788507  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:01:58.788566  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:01:58.799220  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:01:58.808403  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:01:58.808468  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 12:01:58.818169  165698 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:01:58.828079  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:58.962164  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:59.679319  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:59.903216  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:00.026243  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
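Because an existing control plane is being restarted, minikube re-runs individual `kubeadm init` phases instead of a full `kubeadm init`. The sequence from the five log lines above, collected in one place for readability (the binaries directory matches the cluster's Kubernetes version):

    KUBEADM_PATH="/var/lib/minikube/binaries/v1.20.0:$PATH"
    CFG=/var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$KUBEADM_PATH" kubeadm init phase certs all         --config "$CFG"
    sudo env PATH="$KUBEADM_PATH" kubeadm init phase kubeconfig all    --config "$CFG"
    sudo env PATH="$KUBEADM_PATH" kubeadm init phase kubelet-start     --config "$CFG"
    sudo env PATH="$KUBEADM_PATH" kubeadm init phase control-plane all --config "$CFG"
    sudo env PATH="$KUBEADM_PATH" kubeadm init phase etcd local        --config "$CFG"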
	I0617 12:02:00.126201  165698 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:02:00.126314  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:00.627227  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:01.126836  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:01.626524  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:02.126619  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:02.626434  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:01:58.727229  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:01.226021  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:02.473035  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:02.473477  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:02:02.473505  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:02:02.473458  166802 retry.go:31] will retry after 3.132988331s: waiting for machine to come up
	I0617 12:02:05.607981  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:05.608354  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:02:05.608391  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:02:05.608310  166802 retry.go:31] will retry after 3.312972752s: waiting for machine to come up
	I0617 12:02:03.126687  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:03.626469  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:04.126347  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:04.626548  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:05.127142  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:05.626937  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:06.126479  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:06.626466  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:07.126806  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:07.626814  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
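The repeated pgrep runs above are api_server.go polling for the kube-apiserver process roughly every 500ms after the kubeadm phases complete. A minimal equivalent loop, with the ~60s bound chosen here only for illustration:

    # Poll until a kube-apiserver started for this minikube profile appears, or give up after ~60s
    for _ in $(seq 1 120); do
      if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
        echo "apiserver process is up"
        break
      fi
      sleep 0.5
    done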
	I0617 12:02:03.724216  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:06.224335  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:08.224842  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:10.217135  164809 start.go:364] duration metric: took 54.298812889s to acquireMachinesLock for "no-preload-152830"
	I0617 12:02:10.217192  164809 start.go:96] Skipping create...Using existing machine configuration
	I0617 12:02:10.217204  164809 fix.go:54] fixHost starting: 
	I0617 12:02:10.217633  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:10.217673  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:10.238636  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44149
	I0617 12:02:10.239091  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:10.239596  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:02:10.239622  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:10.239997  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:10.240214  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:10.240397  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetState
	I0617 12:02:10.242141  164809 fix.go:112] recreateIfNeeded on no-preload-152830: state=Stopped err=<nil>
	I0617 12:02:10.242162  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	W0617 12:02:10.242324  164809 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 12:02:10.244888  164809 out.go:177] * Restarting existing kvm2 VM for "no-preload-152830" ...
	I0617 12:02:08.922547  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:08.922966  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Found IP for machine: 192.168.50.125
	I0617 12:02:08.922996  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Reserving static IP address...
	I0617 12:02:08.923013  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has current primary IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:08.923437  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-991309", mac: "52:54:00:4e:6e:f5", ip: "192.168.50.125"} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:08.923484  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Reserved static IP address: 192.168.50.125
	I0617 12:02:08.923514  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | skip adding static IP to network mk-default-k8s-diff-port-991309 - found existing host DHCP lease matching {name: "default-k8s-diff-port-991309", mac: "52:54:00:4e:6e:f5", ip: "192.168.50.125"}
	I0617 12:02:08.923533  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Getting to WaitForSSH function...
	I0617 12:02:08.923550  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for SSH to be available...
	I0617 12:02:08.925667  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:08.926017  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:08.926050  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:08.926203  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Using SSH client type: external
	I0617 12:02:08.926228  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa (-rw-------)
	I0617 12:02:08.926269  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 12:02:08.926290  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | About to run SSH command:
	I0617 12:02:08.926316  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | exit 0
	I0617 12:02:09.051973  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | SSH cmd err, output: <nil>: 
	I0617 12:02:09.052329  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetConfigRaw
	I0617 12:02:09.052946  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetIP
	I0617 12:02:09.055156  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.055509  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.055541  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.055748  166103 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/config.json ...
	I0617 12:02:09.055940  166103 machine.go:94] provisionDockerMachine start ...
	I0617 12:02:09.055960  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:09.056162  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.058451  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.058826  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.058860  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.058961  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.059155  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.059289  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.059440  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.059583  166103 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:09.059796  166103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0617 12:02:09.059813  166103 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 12:02:09.163974  166103 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0617 12:02:09.164020  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetMachineName
	I0617 12:02:09.164281  166103 buildroot.go:166] provisioning hostname "default-k8s-diff-port-991309"
	I0617 12:02:09.164312  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetMachineName
	I0617 12:02:09.164499  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.167194  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.167606  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.167632  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.167856  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.168097  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.168285  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.168414  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.168571  166103 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:09.168795  166103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0617 12:02:09.168811  166103 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-991309 && echo "default-k8s-diff-port-991309" | sudo tee /etc/hostname
	I0617 12:02:09.290435  166103 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-991309
	
	I0617 12:02:09.290470  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.293538  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.293879  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.293902  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.294132  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.294361  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.294574  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.294753  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.294943  166103 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:09.295188  166103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0617 12:02:09.295209  166103 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-991309' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-991309/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-991309' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 12:02:09.408702  166103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 12:02:09.408736  166103 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 12:02:09.408777  166103 buildroot.go:174] setting up certificates
	I0617 12:02:09.408789  166103 provision.go:84] configureAuth start
	I0617 12:02:09.408798  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetMachineName
	I0617 12:02:09.409122  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetIP
	I0617 12:02:09.411936  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.412304  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.412335  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.412522  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.414598  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.414914  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.414942  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.415054  166103 provision.go:143] copyHostCerts
	I0617 12:02:09.415121  166103 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 12:02:09.415132  166103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 12:02:09.415182  166103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 12:02:09.415264  166103 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 12:02:09.415271  166103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 12:02:09.415290  166103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 12:02:09.415344  166103 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 12:02:09.415353  166103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 12:02:09.415378  166103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 12:02:09.415439  166103 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-991309 san=[127.0.0.1 192.168.50.125 default-k8s-diff-port-991309 localhost minikube]
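provision.go generates the machine's server certificate in Go, signing it with the local minikube CA and embedding the SANs listed in the line above. Purely as an illustration of an equivalent OpenSSL flow (file names below are placeholders, not the paths minikube actually writes):

    # Create a key and CSR for the machine
    openssl req -new -nodes -newkey rsa:2048 -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.default-k8s-diff-port-991309/CN=default-k8s-diff-port-991309"
    # Sign it with the existing CA, attaching the same SANs seen in the log
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.50.125,DNS:default-k8s-diff-port-991309,DNS:localhost,DNS:minikube') \
      -out server.pem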
	I0617 12:02:09.534010  166103 provision.go:177] copyRemoteCerts
	I0617 12:02:09.534082  166103 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 12:02:09.534121  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.536707  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.537143  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.537176  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.537352  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.537516  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.537687  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.537840  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:09.622292  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0617 12:02:09.652653  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0617 12:02:09.676801  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 12:02:09.700701  166103 provision.go:87] duration metric: took 291.898478ms to configureAuth
	I0617 12:02:09.700734  166103 buildroot.go:189] setting minikube options for container-runtime
	I0617 12:02:09.700931  166103 config.go:182] Loaded profile config "default-k8s-diff-port-991309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:02:09.701023  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.703710  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.704138  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.704171  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.704330  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.704537  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.704730  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.704895  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.705058  166103 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:09.705243  166103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0617 12:02:09.705262  166103 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 12:02:09.974077  166103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 12:02:09.974109  166103 machine.go:97] duration metric: took 918.156221ms to provisionDockerMachine
	I0617 12:02:09.974120  166103 start.go:293] postStartSetup for "default-k8s-diff-port-991309" (driver="kvm2")
	I0617 12:02:09.974131  166103 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 12:02:09.974155  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:09.974502  166103 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 12:02:09.974544  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.977677  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.978073  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.978097  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.978225  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.978407  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.978583  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.978734  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:10.067068  166103 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 12:02:10.071843  166103 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 12:02:10.071870  166103 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 12:02:10.071934  166103 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 12:02:10.072024  166103 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 12:02:10.072128  166103 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 12:02:10.082041  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:02:10.107855  166103 start.go:296] duration metric: took 133.717924ms for postStartSetup
	I0617 12:02:10.107903  166103 fix.go:56] duration metric: took 19.607369349s for fixHost
	I0617 12:02:10.107932  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:10.110742  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.111135  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:10.111169  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.111294  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:10.111527  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:10.111674  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:10.111861  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:10.111980  166103 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:10.112205  166103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0617 12:02:10.112220  166103 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0617 12:02:10.216945  166103 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718625730.186446687
	
	I0617 12:02:10.216973  166103 fix.go:216] guest clock: 1718625730.186446687
	I0617 12:02:10.216983  166103 fix.go:229] Guest: 2024-06-17 12:02:10.186446687 +0000 UTC Remote: 2024-06-17 12:02:10.107909348 +0000 UTC m=+152.716337101 (delta=78.537339ms)
	I0617 12:02:10.217033  166103 fix.go:200] guest clock delta is within tolerance: 78.537339ms
	I0617 12:02:10.217039  166103 start.go:83] releasing machines lock for "default-k8s-diff-port-991309", held for 19.716554323s
	I0617 12:02:10.217073  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:10.217363  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetIP
	I0617 12:02:10.220429  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.220897  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:10.220927  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.221083  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:10.221655  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:10.221870  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:10.221965  166103 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 12:02:10.222026  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:10.222094  166103 ssh_runner.go:195] Run: cat /version.json
	I0617 12:02:10.222122  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:10.225337  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.225673  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.225710  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:10.225730  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.226015  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:10.226172  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:10.226202  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:10.226242  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.226363  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:10.226447  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:10.226508  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:10.226591  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:10.226687  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:10.226840  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:10.334316  166103 ssh_runner.go:195] Run: systemctl --version
	I0617 12:02:10.340584  166103 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 12:02:10.489359  166103 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 12:02:10.497198  166103 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 12:02:10.497267  166103 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 12:02:10.517001  166103 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 12:02:10.517032  166103 start.go:494] detecting cgroup driver to use...
	I0617 12:02:10.517110  166103 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 12:02:10.536520  166103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 12:02:10.550478  166103 docker.go:217] disabling cri-docker service (if available) ...
	I0617 12:02:10.550542  166103 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 12:02:10.564437  166103 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 12:02:10.578554  166103 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 12:02:10.710346  166103 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 12:02:10.891637  166103 docker.go:233] disabling docker service ...
	I0617 12:02:10.891694  166103 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 12:02:10.908300  166103 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 12:02:10.921663  166103 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 12:02:11.062715  166103 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 12:02:11.201061  166103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 12:02:11.216120  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 12:02:11.237213  166103 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0617 12:02:11.237286  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.248171  166103 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 12:02:11.248238  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.259159  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.270217  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.280841  166103 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 12:02:11.291717  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.302084  166103 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.319559  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
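Taken together, the sed edits above point CRI-O at the registry.k8s.io/pause:3.9 pause image, switch it to the cgroupfs cgroup manager with conmon in the "pod" cgroup, and allow unprivileged binds to low ports. A quick way to confirm the result on the node (expected lines shown as comments; the exact formatting of the drop-in may differ):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.9"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",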
	I0617 12:02:11.331992  166103 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 12:02:11.342435  166103 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 12:02:11.342494  166103 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 12:02:11.357436  166103 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
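Since the sysctl probe failed (br_netfilter was not yet loaded), minikube loads the module and enables IPv4 forwarding directly, as the last two commands show. The same node preparation written out as a sketch (a persistent setup would also drop files under /etc/modules-load.d and /etc/sysctl.d):

    sudo modprobe br_netfilter
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    # equivalently: sudo sysctl -w net.ipv4.ip_forward=1 net.bridge.bridge-nf-call-iptables=1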
	I0617 12:02:11.367406  166103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:02:11.493416  166103 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0617 12:02:11.629980  166103 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 12:02:11.630055  166103 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 12:02:11.636456  166103 start.go:562] Will wait 60s for crictl version
	I0617 12:02:11.636540  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:02:11.642817  166103 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 12:02:11.681563  166103 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 12:02:11.681655  166103 ssh_runner.go:195] Run: crio --version
	I0617 12:02:11.712576  166103 ssh_runner.go:195] Run: crio --version
	I0617 12:02:11.753826  166103 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0617 12:02:11.755256  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetIP
	I0617 12:02:11.758628  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:11.759006  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:11.759041  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:11.759252  166103 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0617 12:02:11.763743  166103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:02:11.780286  166103 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-991309 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.1 ClusterName:default-k8s-diff-port-991309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 12:02:11.780455  166103 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 12:02:11.780528  166103 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:02:11.819396  166103 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0617 12:02:11.819481  166103 ssh_runner.go:195] Run: which lz4
	I0617 12:02:11.824047  166103 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0617 12:02:11.828770  166103 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0617 12:02:11.828807  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
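Because the stat check fails, the 394 MB preload tarball is copied to the VM before extraction. A toy version of that decision, assuming a local path instead of minikube's SSH-based check, might be:

    package main

    import (
    	"fmt"
    	"os"
    )

    // needsPreloadCopy mirrors the stat-based existence check above: the
    // preload tarball is copied only when it is missing or empty on the
    // target. (Sketch only; minikube also compares size and mtime over SSH.)
    func needsPreloadCopy(path string) bool {
    	info, err := os.Stat(path)
    	if err != nil {
    		return true
    	}
    	return info.Size() == 0
    }

    func main() {
    	fmt.Println(needsPreloadCopy("/preloaded.tar.lz4"))
    }
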
	I0617 12:02:08.127233  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:08.626498  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:09.126712  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:09.627284  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:10.126446  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:10.627249  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:11.126428  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:11.626638  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:12.127091  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:12.627361  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:10.226209  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:12.227824  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:10.246388  164809 main.go:141] libmachine: (no-preload-152830) Calling .Start
	I0617 12:02:10.246608  164809 main.go:141] libmachine: (no-preload-152830) Ensuring networks are active...
	I0617 12:02:10.247397  164809 main.go:141] libmachine: (no-preload-152830) Ensuring network default is active
	I0617 12:02:10.247789  164809 main.go:141] libmachine: (no-preload-152830) Ensuring network mk-no-preload-152830 is active
	I0617 12:02:10.248192  164809 main.go:141] libmachine: (no-preload-152830) Getting domain xml...
	I0617 12:02:10.248869  164809 main.go:141] libmachine: (no-preload-152830) Creating domain...
	I0617 12:02:11.500721  164809 main.go:141] libmachine: (no-preload-152830) Waiting to get IP...
	I0617 12:02:11.501614  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:11.502169  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:11.502254  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:11.502131  166976 retry.go:31] will retry after 281.343691ms: waiting for machine to come up
	I0617 12:02:11.785597  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:11.786047  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:11.786082  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:11.785983  166976 retry.go:31] will retry after 303.221815ms: waiting for machine to come up
	I0617 12:02:12.090367  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:12.090919  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:12.090945  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:12.090826  166976 retry.go:31] will retry after 422.250116ms: waiting for machine to come up
	I0617 12:02:12.514456  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:12.515026  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:12.515055  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:12.515001  166976 retry.go:31] will retry after 513.394077ms: waiting for machine to come up
	I0617 12:02:13.029811  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:13.030495  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:13.030522  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:13.030449  166976 retry.go:31] will retry after 596.775921ms: waiting for machine to come up
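The retry.go lines above poll for the machine's DHCP lease with a delay that grows (plus jitter) between attempts. A small Go sketch of that pattern, with made-up backoff constants rather than minikube's, could be:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitFor polls check until it succeeds, sleeping a little longer (plus
    // jitter) between attempts, similar in spirit to the retry lines above.
    func waitFor(check func() (string, bool), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 250 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, ok := check(); ok {
    			return ip, nil
    		}
    		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
    		delay = delay * 3 / 2 // grow the wait between attempts
    	}
    	return "", fmt.Errorf("timed out after %s waiting for machine to come up", timeout)
    }

    func main() {
    	ip, err := waitFor(func() (string, bool) { return "", false }, 2*time.Second)
    	fmt.Println(ip, err)
    }
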
	I0617 12:02:13.387031  166103 crio.go:462] duration metric: took 1.563017054s to copy over tarball
	I0617 12:02:13.387108  166103 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0617 12:02:15.664139  166103 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.276994761s)
	I0617 12:02:15.664177  166103 crio.go:469] duration metric: took 2.277117031s to extract the tarball
	I0617 12:02:15.664188  166103 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0617 12:02:15.703690  166103 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:02:15.757605  166103 crio.go:514] all images are preloaded for cri-o runtime.
	I0617 12:02:15.757634  166103 cache_images.go:84] Images are preloaded, skipping loading
	I0617 12:02:15.757644  166103 kubeadm.go:928] updating node { 192.168.50.125 8444 v1.30.1 crio true true} ...
	I0617 12:02:15.757784  166103 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-991309 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-991309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 12:02:15.757874  166103 ssh_runner.go:195] Run: crio config
	I0617 12:02:15.808350  166103 cni.go:84] Creating CNI manager for ""
	I0617 12:02:15.808380  166103 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:02:15.808397  166103 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 12:02:15.808434  166103 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.125 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-991309 NodeName:default-k8s-diff-port-991309 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0617 12:02:15.808633  166103 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.125
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-991309"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0617 12:02:15.808709  166103 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0617 12:02:15.818891  166103 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 12:02:15.818964  166103 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 12:02:15.828584  166103 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0617 12:02:15.846044  166103 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 12:02:15.862572  166103 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
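The kubeadm.yaml.new copied here is rendered from the kubeadm options dumped earlier (advertise address, bind port, node name, pod and service CIDRs). A rough sketch of how such a config could be templated in Go, using hypothetical field names and only a trimmed subset of the real file, might be:

    package main

    import (
    	"os"
    	"text/template"
    )

    // kubeadmParams is a hypothetical, trimmed-down stand-in for the data
    // that feeds the kubeadm config above; it is not minikube's struct.
    type kubeadmParams struct {
    	AdvertiseAddress  string
    	BindPort          int
    	NodeName          string
    	PodSubnet         string
    	ServiceSubnet     string
    	KubernetesVersion string
    }

    const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.AdvertiseAddress}}
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
    	p := kubeadmParams{
    		AdvertiseAddress:  "192.168.50.125",
    		BindPort:          8444,
    		NodeName:          "default-k8s-diff-port-991309",
    		PodSubnet:         "10.244.0.0/16",
    		ServiceSubnet:     "10.96.0.0/12",
    		KubernetesVersion: "v1.30.1",
    	}
    	tmpl := template.Must(template.New("kubeadm").Parse(initConfigTmpl))
    	if err := tmpl.Execute(os.Stdout, p); err != nil {
    		panic(err)
    	}
    }
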
	I0617 12:02:15.880042  166103 ssh_runner.go:195] Run: grep 192.168.50.125	control-plane.minikube.internal$ /etc/hosts
	I0617 12:02:15.884470  166103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:02:15.897031  166103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:02:16.013826  166103 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:02:16.030366  166103 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309 for IP: 192.168.50.125
	I0617 12:02:16.030391  166103 certs.go:194] generating shared ca certs ...
	I0617 12:02:16.030408  166103 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:02:16.030590  166103 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 12:02:16.030650  166103 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 12:02:16.030668  166103 certs.go:256] generating profile certs ...
	I0617 12:02:16.030793  166103 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/client.key
	I0617 12:02:16.030876  166103 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/apiserver.key.02769a34
	I0617 12:02:16.030919  166103 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/proxy-client.key
	I0617 12:02:16.031024  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 12:02:16.031051  166103 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 12:02:16.031060  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 12:02:16.031080  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 12:02:16.031103  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 12:02:16.031122  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 12:02:16.031179  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:02:16.031991  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 12:02:16.066789  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 12:02:16.094522  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 12:02:16.119693  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 12:02:16.155810  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0617 12:02:16.186788  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 12:02:16.221221  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 12:02:16.248948  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0617 12:02:16.273404  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 12:02:16.296958  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 12:02:16.320047  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 12:02:16.349598  166103 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 12:02:16.367499  166103 ssh_runner.go:195] Run: openssl version
	I0617 12:02:16.373596  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 12:02:16.384778  166103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 12:02:16.389521  166103 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 12:02:16.389574  166103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 12:02:16.395523  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
	I0617 12:02:16.406357  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 12:02:16.417139  166103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 12:02:16.421629  166103 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 12:02:16.421679  166103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 12:02:16.427323  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 12:02:16.438649  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 12:02:16.450042  166103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:16.454587  166103 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:16.454636  166103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:16.460677  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 12:02:16.472886  166103 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 12:02:16.477630  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 12:02:16.483844  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 12:02:16.490123  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 12:02:16.497606  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 12:02:16.504066  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 12:02:16.510597  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
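Each of these openssl invocations is equivalent to asking whether the certificate is still valid for at least another 86400 seconds. A Go sketch of the same check with crypto/x509 (the path in main is just one of the certs listed above) could be:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // notExpiringWithin mirrors `openssl x509 -checkend`: it reports whether
    // the certificate at path is still valid for at least duration d.
    func notExpiringWithin(path string, d time.Duration) (bool, error) {
    	pemBytes, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := notExpiringWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(ok, err)
    }
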
	I0617 12:02:16.518270  166103 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-991309 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.1 ClusterName:default-k8s-diff-port-991309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:02:16.518371  166103 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 12:02:16.518439  166103 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:02:16.569103  166103 cri.go:89] found id: ""
	I0617 12:02:16.569179  166103 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0617 12:02:16.580328  166103 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0617 12:02:16.580353  166103 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0617 12:02:16.580360  166103 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0617 12:02:16.580409  166103 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0617 12:02:16.591277  166103 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0617 12:02:16.592450  166103 kubeconfig.go:125] found "default-k8s-diff-port-991309" server: "https://192.168.50.125:8444"
	I0617 12:02:16.594770  166103 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0617 12:02:16.605669  166103 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.125
	I0617 12:02:16.605728  166103 kubeadm.go:1154] stopping kube-system containers ...
	I0617 12:02:16.605745  166103 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0617 12:02:16.605810  166103 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:02:16.654529  166103 cri.go:89] found id: ""
	I0617 12:02:16.654620  166103 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0617 12:02:16.672923  166103 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:02:16.683485  166103 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:02:16.683514  166103 kubeadm.go:156] found existing configuration files:
	
	I0617 12:02:16.683576  166103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0617 12:02:16.693533  166103 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:02:16.693614  166103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:02:16.703670  166103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0617 12:02:16.716352  166103 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:02:16.716413  166103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:02:16.729336  166103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0617 12:02:16.739183  166103 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:02:16.739249  166103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:02:16.748978  166103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0617 12:02:16.758195  166103 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:02:16.758262  166103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
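The grep/rm pairs above drop any kubeconfig that no longer points at https://control-plane.minikube.internal:8444, so kubeadm can regenerate it. A compact Go sketch of that cleanup, assuming local file access rather than minikube's SSH runner, might be:

    package main

    import (
    	"os"
    	"strings"
    )

    // cleanStaleKubeconfigs removes any listed config file that does not
    // mention the expected control-plane endpoint, mirroring the grep/rm
    // pairs above. (Sketch only; minikube runs these steps over SSH with sudo.)
    func cleanStaleKubeconfigs(endpoint string, paths []string) {
    	for _, p := range paths {
    		data, err := os.ReadFile(p)
    		if err != nil {
    			continue // missing file, nothing to remove
    		}
    		if !strings.Contains(string(data), endpoint) {
    			os.Remove(p)
    		}
    	}
    }

    func main() {
    	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8444", []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }
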
	I0617 12:02:16.767945  166103 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:02:16.777773  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:16.919605  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:13.126836  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:13.626460  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:14.127261  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:14.627161  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:15.126580  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:15.627082  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:16.127163  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:16.626524  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:17.126469  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:17.626488  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:14.728717  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:17.225452  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:13.629097  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:13.629723  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:13.629826  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:13.629705  166976 retry.go:31] will retry after 588.18471ms: waiting for machine to come up
	I0617 12:02:14.219111  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:14.219672  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:14.219704  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:14.219611  166976 retry.go:31] will retry after 889.359727ms: waiting for machine to come up
	I0617 12:02:15.110916  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:15.111528  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:15.111559  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:15.111473  166976 retry.go:31] will retry after 1.139454059s: waiting for machine to come up
	I0617 12:02:16.252051  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:16.252601  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:16.252636  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:16.252534  166976 retry.go:31] will retry after 1.189357648s: waiting for machine to come up
	I0617 12:02:17.443845  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:17.444370  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:17.444403  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:17.444310  166976 retry.go:31] will retry after 1.614769478s: waiting for machine to come up
	I0617 12:02:18.068811  166103 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.149162388s)
	I0617 12:02:18.068870  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:18.301209  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:18.362153  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:18.454577  166103 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:02:18.454674  166103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:18.954929  166103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:19.454795  166103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:19.505453  166103 api_server.go:72] duration metric: took 1.050874914s to wait for apiserver process to appear ...
	I0617 12:02:19.505490  166103 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:02:19.505518  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:19.506056  166103 api_server.go:269] stopped: https://192.168.50.125:8444/healthz: Get "https://192.168.50.125:8444/healthz": dial tcp 192.168.50.125:8444: connect: connection refused
	I0617 12:02:20.005681  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:22.216162  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:02:22.216214  166103 api_server.go:103] status: https://192.168.50.125:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:02:22.216234  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:22.239561  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:02:22.239635  166103 api_server.go:103] status: https://192.168.50.125:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:02:18.126897  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:18.627145  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:19.126724  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:19.626498  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:20.126389  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:20.627190  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:21.126480  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:21.627210  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:22.127273  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:22.626691  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:19.227344  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:21.725689  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:19.061035  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:19.061555  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:19.061588  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:19.061520  166976 retry.go:31] will retry after 2.385838312s: waiting for machine to come up
	I0617 12:02:21.448745  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:21.449239  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:21.449266  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:21.449208  166976 retry.go:31] will retry after 3.308788046s: waiting for machine to come up
	I0617 12:02:22.505636  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:22.509888  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:02:22.509916  166103 api_server.go:103] status: https://192.168.50.125:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:02:23.006285  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:23.011948  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:02:23.011983  166103 api_server.go:103] status: https://192.168.50.125:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:02:23.505640  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:23.510358  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 200:
	ok
	I0617 12:02:23.516663  166103 api_server.go:141] control plane version: v1.30.1
	I0617 12:02:23.516686  166103 api_server.go:131] duration metric: took 4.011188976s to wait for apiserver health ...
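The healthz wait above tolerates the transient 403 (anonymous user before RBAC bootstrap) and 500 (post-start hooks still failing) responses and only stops on a 200. A hedged Go sketch of such a polling loop, using InsecureSkipVerify in place of loading the cluster CA, could be:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns
    // 200 or the timeout expires; any other status or error is retried.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
    }

    func main() {
    	fmt.Println(waitForHealthz("https://192.168.50.125:8444/healthz", 60*time.Second))
    }
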
	I0617 12:02:23.516694  166103 cni.go:84] Creating CNI manager for ""
	I0617 12:02:23.516700  166103 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:02:23.518498  166103 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0617 12:02:23.519722  166103 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0617 12:02:23.530145  166103 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0617 12:02:23.552805  166103 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:02:23.564825  166103 system_pods.go:59] 8 kube-system pods found
	I0617 12:02:23.564853  166103 system_pods.go:61] "coredns-7db6d8ff4d-mnw24" [1e6c4ff3-f0dc-43da-abd8-baaed7dca40c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0617 12:02:23.564863  166103 system_pods.go:61] "etcd-default-k8s-diff-port-991309" [820a4f27-cf83-4edb-a2ea-edba6673d851] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0617 12:02:23.564871  166103 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-991309" [26e6c19d-6f70-4924-83f5-563c8508c9e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0617 12:02:23.564877  166103 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-991309" [01e7c468-98a6-48f3-a158-59e97fa8279c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0617 12:02:23.564885  166103 system_pods.go:61] "kube-proxy-jn5kp" [d6935148-7ee8-4655-8327-9f1ee4c933de] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0617 12:02:23.564894  166103 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-991309" [53ecd22c-05cf-48a5-b7e5-925392085f7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0617 12:02:23.564899  166103 system_pods.go:61] "metrics-server-569cc877fc-n2svp" [5b637d97-3183-4324-98cf-dd69a2968578] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:02:23.564908  166103 system_pods.go:61] "storage-provisioner" [92b20aec-29c2-4256-86be-7f58f66585dd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0617 12:02:23.564913  166103 system_pods.go:74] duration metric: took 12.089276ms to wait for pod list to return data ...
	I0617 12:02:23.564919  166103 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:02:23.573455  166103 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:02:23.573480  166103 node_conditions.go:123] node cpu capacity is 2
	I0617 12:02:23.573492  166103 node_conditions.go:105] duration metric: took 8.568721ms to run NodePressure ...
	I0617 12:02:23.573509  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:23.918292  166103 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0617 12:02:23.922992  166103 kubeadm.go:733] kubelet initialised
	I0617 12:02:23.923019  166103 kubeadm.go:734] duration metric: took 4.69627ms waiting for restarted kubelet to initialise ...
	I0617 12:02:23.923027  166103 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:02:23.927615  166103 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:23.932203  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.932225  166103 pod_ready.go:81] duration metric: took 4.590359ms for pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:23.932233  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.932239  166103 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:23.936802  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.936825  166103 pod_ready.go:81] duration metric: took 4.579036ms for pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:23.936835  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.936840  166103 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:23.942877  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.942903  166103 pod_ready.go:81] duration metric: took 6.055748ms for pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:23.942927  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.942935  166103 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:23.955830  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.955851  166103 pod_ready.go:81] duration metric: took 12.903911ms for pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:23.955861  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.955869  166103 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jn5kp" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:24.356654  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "kube-proxy-jn5kp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:24.356682  166103 pod_ready.go:81] duration metric: took 400.805294ms for pod "kube-proxy-jn5kp" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:24.356692  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "kube-proxy-jn5kp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:24.356699  166103 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:24.765108  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:24.765133  166103 pod_ready.go:81] duration metric: took 408.42568ms for pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:24.765145  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:24.765152  166103 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:25.156898  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:25.156927  166103 pod_ready.go:81] duration metric: took 391.769275ms for pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:25.156939  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:25.156946  166103 pod_ready.go:38] duration metric: took 1.233911476s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
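The wait above is minikube's internal readiness gate after restarting the control plane: it polls each system-critical pod but skips the condition while the node itself is NotReady. The same state can be inspected from outside the test with plain kubectl against the profile's kubeconfig context (minikube names the context after the profile, so the context name below is inferred from the log); a sketch:

    kubectl --context default-k8s-diff-port-991309 -n kube-system get pods \
      -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)'
    kubectl --context default-k8s-diff-port-991309 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m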
	I0617 12:02:25.156968  166103 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0617 12:02:25.170925  166103 ops.go:34] apiserver oom_adj: -16
	I0617 12:02:25.170963  166103 kubeadm.go:591] duration metric: took 8.590593327s to restartPrimaryControlPlane
	I0617 12:02:25.170976  166103 kubeadm.go:393] duration metric: took 8.652716269s to StartCluster
	I0617 12:02:25.170998  166103 settings.go:142] acquiring lock: {Name:mkf6da6d5dcdf32cef469c2b75da17d11fa1e39e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:02:25.171111  166103 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 12:02:25.173919  166103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/kubeconfig: {Name:mkf81bd1831c0194f784e5c176b265c5061bea5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:02:25.174286  166103 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.125 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 12:02:25.176186  166103 out.go:177] * Verifying Kubernetes components...
	I0617 12:02:25.174347  166103 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0617 12:02:25.174528  166103 config.go:182] Loaded profile config "default-k8s-diff-port-991309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:02:25.177622  166103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:02:25.177632  166103 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-991309"
	I0617 12:02:25.177670  166103 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-991309"
	W0617 12:02:25.177684  166103 addons.go:243] addon metrics-server should already be in state true
	I0617 12:02:25.177721  166103 host.go:66] Checking if "default-k8s-diff-port-991309" exists ...
	I0617 12:02:25.177622  166103 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-991309"
	I0617 12:02:25.177789  166103 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-991309"
	W0617 12:02:25.177806  166103 addons.go:243] addon storage-provisioner should already be in state true
	I0617 12:02:25.177837  166103 host.go:66] Checking if "default-k8s-diff-port-991309" exists ...
	I0617 12:02:25.177628  166103 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-991309"
	I0617 12:02:25.177875  166103 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-991309"
	I0617 12:02:25.178173  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.178202  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.178251  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.178282  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.178299  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.178318  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.198817  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32781
	I0617 12:02:25.199064  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36763
	I0617 12:02:25.199513  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39825
	I0617 12:02:25.199902  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.199919  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.200633  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.201080  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.201110  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.201270  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.201286  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.201415  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.201427  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.201482  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.201786  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.201845  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.202268  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.202637  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.202663  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetState
	I0617 12:02:25.202989  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.203038  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.206439  166103 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-991309"
	W0617 12:02:25.206462  166103 addons.go:243] addon default-storageclass should already be in state true
	I0617 12:02:25.206492  166103 host.go:66] Checking if "default-k8s-diff-port-991309" exists ...
	I0617 12:02:25.206875  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.206921  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.218501  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37189
	I0617 12:02:25.218532  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34089
	I0617 12:02:25.218912  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.218986  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.219410  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.219429  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.219545  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.219561  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.219917  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.219920  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.220110  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetState
	I0617 12:02:25.220111  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetState
	I0617 12:02:25.221839  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:25.223920  166103 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0617 12:02:25.225213  166103 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0617 12:02:25.225232  166103 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0617 12:02:25.225260  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:25.224029  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:25.228780  166103 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:25.227545  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46073
	I0617 12:02:25.230084  166103 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:02:25.230100  166103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0617 12:02:25.230113  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:25.228465  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.229054  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:25.230179  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:25.229303  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.230215  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.230371  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:25.230542  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:25.230674  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:25.230723  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.230737  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.231150  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.231772  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.231802  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.234036  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.234476  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:25.234494  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.234755  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:25.234919  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:25.235079  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:25.235235  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:25.248352  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46349
	I0617 12:02:25.248851  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.249306  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.249330  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.249681  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.249873  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetState
	I0617 12:02:25.251282  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:25.251512  166103 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0617 12:02:25.251529  166103 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0617 12:02:25.251551  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:25.253963  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.254458  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:25.254484  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.254628  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:25.254941  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:25.255229  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:25.255385  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:25.391207  166103 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:02:25.411906  166103 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-991309" to be "Ready" ...
	I0617 12:02:25.476025  166103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0617 12:02:25.566470  166103 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0617 12:02:25.566500  166103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0617 12:02:25.593744  166103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:02:25.620336  166103 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0617 12:02:25.620371  166103 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0617 12:02:25.700009  166103 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:02:25.700048  166103 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0617 12:02:25.769841  166103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:02:25.782207  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:25.782240  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:25.782576  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Closing plugin on server side
	I0617 12:02:25.782597  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:25.782610  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:25.782623  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:25.782632  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:25.782888  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:25.782916  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:25.789639  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:25.789662  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:25.789921  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:25.789941  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:26.600819  166103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.007014283s)
	I0617 12:02:26.600883  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:26.600898  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:26.600902  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:26.600917  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:26.601253  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:26.601295  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:26.601305  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:26.601325  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:26.601342  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:26.601353  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:26.601366  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:26.601370  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:26.601571  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Closing plugin on server side
	I0617 12:02:26.601590  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:26.601600  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:26.601615  166103 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-991309"
	I0617 12:02:26.601626  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:26.601635  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Closing plugin on server side
	I0617 12:02:26.601638  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:26.604200  166103 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0617 12:02:26.605477  166103 addons.go:510] duration metric: took 1.431148263s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0617 12:02:27.415122  166103 node_ready.go:53] node "default-k8s-diff-port-991309" has status "Ready":"False"
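The addon phase above re-applies the storageclass, storage-provisioner and metrics-server manifests over SSH and only verifies that the apply succeeded. Once the node reports Ready, the result can be double-checked with ordinary minikube/kubectl commands; a sketch (the deployment name is inferred from the metrics-server pod name in the log):

    minikube -p default-k8s-diff-port-991309 addons list | grep -E 'metrics-server|storage-provisioner'
    kubectl --context default-k8s-diff-port-991309 -n kube-system \
      rollout status deployment/metrics-server --timeout=2m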
	I0617 12:02:23.126888  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:23.627274  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:24.127019  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:24.627337  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:25.126642  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:25.627064  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:26.126606  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:26.626803  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:27.126825  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:27.626799  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:24.223344  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:26.225129  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:24.760577  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:24.761063  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:24.761095  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:24.760999  166976 retry.go:31] will retry after 3.793168135s: waiting for machine to come up
	I0617 12:02:28.558153  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.558708  164809 main.go:141] libmachine: (no-preload-152830) Found IP for machine: 192.168.39.173
	I0617 12:02:28.558735  164809 main.go:141] libmachine: (no-preload-152830) Reserving static IP address...
	I0617 12:02:28.558751  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has current primary IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.559214  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "no-preload-152830", mac: "52:54:00:c0:1a:fb", ip: "192.168.39.173"} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.559248  164809 main.go:141] libmachine: (no-preload-152830) DBG | skip adding static IP to network mk-no-preload-152830 - found existing host DHCP lease matching {name: "no-preload-152830", mac: "52:54:00:c0:1a:fb", ip: "192.168.39.173"}
	I0617 12:02:28.559263  164809 main.go:141] libmachine: (no-preload-152830) Reserved static IP address: 192.168.39.173
	I0617 12:02:28.559278  164809 main.go:141] libmachine: (no-preload-152830) Waiting for SSH to be available...
	I0617 12:02:28.559295  164809 main.go:141] libmachine: (no-preload-152830) DBG | Getting to WaitForSSH function...
	I0617 12:02:28.562122  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.562453  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.562482  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.562678  164809 main.go:141] libmachine: (no-preload-152830) DBG | Using SSH client type: external
	I0617 12:02:28.562706  164809 main.go:141] libmachine: (no-preload-152830) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa (-rw-------)
	I0617 12:02:28.562739  164809 main.go:141] libmachine: (no-preload-152830) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.173 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 12:02:28.562753  164809 main.go:141] libmachine: (no-preload-152830) DBG | About to run SSH command:
	I0617 12:02:28.562770  164809 main.go:141] libmachine: (no-preload-152830) DBG | exit 0
	I0617 12:02:28.687683  164809 main.go:141] libmachine: (no-preload-152830) DBG | SSH cmd err, output: <nil>: 
	I0617 12:02:28.688021  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetConfigRaw
	I0617 12:02:28.688649  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetIP
	I0617 12:02:28.691248  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.691585  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.691609  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.691895  164809 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/config.json ...
	I0617 12:02:28.692109  164809 machine.go:94] provisionDockerMachine start ...
	I0617 12:02:28.692132  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:28.692371  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:28.694371  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.694738  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.694766  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.694942  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:28.695130  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.695309  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.695490  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:28.695695  164809 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:28.695858  164809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0617 12:02:28.695869  164809 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 12:02:28.803687  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0617 12:02:28.803726  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetMachineName
	I0617 12:02:28.803996  164809 buildroot.go:166] provisioning hostname "no-preload-152830"
	I0617 12:02:28.804031  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetMachineName
	I0617 12:02:28.804333  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:28.806959  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.807395  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.807424  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.807547  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:28.807725  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.807895  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.808057  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:28.808216  164809 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:28.808420  164809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0617 12:02:28.808436  164809 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-152830 && echo "no-preload-152830" | sudo tee /etc/hostname
	I0617 12:02:28.931222  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-152830
	
	I0617 12:02:28.931259  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:28.934188  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.934536  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.934564  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.934822  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:28.935048  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.935218  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.935353  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:28.935593  164809 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:28.935814  164809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0617 12:02:28.935837  164809 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-152830' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-152830/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-152830' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 12:02:29.054126  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
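The shell block above is the idempotent /etc/hosts rewrite minikube performs so the freshly set hostname resolves locally (replace an existing 127.0.1.1 entry, otherwise append one). Its effect can be spot-checked over SSH; a sketch assuming the standard minikube ssh pass-through form:

    minikube -p no-preload-152830 ssh -- "hostname && grep 127.0.1.1 /etc/hosts"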
	I0617 12:02:29.054156  164809 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 12:02:29.054173  164809 buildroot.go:174] setting up certificates
	I0617 12:02:29.054184  164809 provision.go:84] configureAuth start
	I0617 12:02:29.054195  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetMachineName
	I0617 12:02:29.054490  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetIP
	I0617 12:02:29.057394  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.057797  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.057830  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.057963  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:29.060191  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.060485  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.060514  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.060633  164809 provision.go:143] copyHostCerts
	I0617 12:02:29.060708  164809 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 12:02:29.060722  164809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 12:02:29.060796  164809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 12:02:29.060963  164809 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 12:02:29.060978  164809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 12:02:29.061003  164809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 12:02:29.061065  164809 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 12:02:29.061072  164809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 12:02:29.061090  164809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 12:02:29.061139  164809 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.no-preload-152830 san=[127.0.0.1 192.168.39.173 localhost minikube no-preload-152830]
	I0617 12:02:29.321179  164809 provision.go:177] copyRemoteCerts
	I0617 12:02:29.321232  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 12:02:29.321256  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:29.324217  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.324612  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.324642  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.324836  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:29.325043  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.325227  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:29.325386  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:02:29.410247  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 12:02:29.435763  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0617 12:02:29.462900  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0617 12:02:29.491078  164809 provision.go:87] duration metric: took 436.876068ms to configureAuth
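configureAuth above regenerates the machine server certificate with the SANs listed in the log (127.0.0.1, 192.168.39.173, localhost, minikube, no-preload-152830) and copies it to /etc/docker on the guest. The generated certificate can be inspected locally with openssl; a sketch using the path from the log:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'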
	I0617 12:02:29.491120  164809 buildroot.go:189] setting minikube options for container-runtime
	I0617 12:02:29.491377  164809 config.go:182] Loaded profile config "no-preload-152830": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:02:29.491522  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:29.494581  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.495019  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.495052  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.495245  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:29.495555  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.495766  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.495897  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:29.496068  164809 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:29.496275  164809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0617 12:02:29.496296  164809 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 12:02:29.774692  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 12:02:29.774730  164809 machine.go:97] duration metric: took 1.082604724s to provisionDockerMachine
	I0617 12:02:29.774748  164809 start.go:293] postStartSetup for "no-preload-152830" (driver="kvm2")
	I0617 12:02:29.774765  164809 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 12:02:29.774785  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:29.775181  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 12:02:29.775220  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:29.778574  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.778959  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.778988  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.779154  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:29.779351  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.779575  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:29.779750  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:02:29.866959  164809 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 12:02:29.871319  164809 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 12:02:29.871348  164809 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 12:02:29.871425  164809 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 12:02:29.871535  164809 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 12:02:29.871648  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 12:02:29.881995  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:02:29.907614  164809 start.go:296] duration metric: took 132.84708ms for postStartSetup
	I0617 12:02:29.907669  164809 fix.go:56] duration metric: took 19.690465972s for fixHost
	I0617 12:02:29.907695  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:29.910226  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.910617  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.910644  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.910811  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:29.911162  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.911377  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.911571  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:29.911772  164809 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:29.911961  164809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0617 12:02:29.911972  164809 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0617 12:02:30.021051  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718625749.993041026
	
	I0617 12:02:30.021079  164809 fix.go:216] guest clock: 1718625749.993041026
	I0617 12:02:30.021088  164809 fix.go:229] Guest: 2024-06-17 12:02:29.993041026 +0000 UTC Remote: 2024-06-17 12:02:29.907674102 +0000 UTC m=+356.579226401 (delta=85.366924ms)
	I0617 12:02:30.021113  164809 fix.go:200] guest clock delta is within tolerance: 85.366924ms
	I0617 12:02:30.021120  164809 start.go:83] releasing machines lock for "no-preload-152830", held for 19.803953246s
	I0617 12:02:30.021148  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:30.021403  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetIP
	I0617 12:02:30.024093  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.024600  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:30.024633  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.024830  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:30.025380  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:30.025552  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:30.025623  164809 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 12:02:30.025668  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:30.025767  164809 ssh_runner.go:195] Run: cat /version.json
	I0617 12:02:30.025798  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:30.028656  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.028826  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.029037  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:30.029068  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.029294  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:30.029336  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:30.029366  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.029528  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:30.029536  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:30.029764  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:30.029776  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:30.029957  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:02:30.029984  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:30.030161  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:02:30.135901  164809 ssh_runner.go:195] Run: systemctl --version
	I0617 12:02:30.142668  164809 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 12:02:30.296485  164809 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 12:02:30.302789  164809 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 12:02:30.302856  164809 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 12:02:30.319775  164809 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 12:02:30.319793  164809 start.go:494] detecting cgroup driver to use...
	I0617 12:02:30.319894  164809 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 12:02:30.335498  164809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 12:02:30.349389  164809 docker.go:217] disabling cri-docker service (if available) ...
	I0617 12:02:30.349427  164809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 12:02:30.363086  164809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 12:02:30.377383  164809 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 12:02:30.499956  164809 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 12:02:30.644098  164809 docker.go:233] disabling docker service ...
	I0617 12:02:30.644178  164809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 12:02:30.661490  164809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 12:02:30.675856  164809 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 12:02:30.819937  164809 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 12:02:30.932926  164809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 12:02:30.947638  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 12:02:30.966574  164809 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0617 12:02:30.966648  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:30.978339  164809 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 12:02:30.978416  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:30.989950  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:31.000644  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:31.011280  164809 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 12:02:31.022197  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:31.032780  164809 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:31.050053  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:31.062065  164809 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 12:02:31.073296  164809 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 12:02:31.073368  164809 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 12:02:31.087733  164809 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 12:02:31.098019  164809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:02:31.232495  164809 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0617 12:02:31.371236  164809 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 12:02:31.371312  164809 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 12:02:31.376442  164809 start.go:562] Will wait 60s for crictl version
	I0617 12:02:31.376522  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.380416  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 12:02:31.426664  164809 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 12:02:31.426763  164809 ssh_runner.go:195] Run: crio --version
	I0617 12:02:31.456696  164809 ssh_runner.go:195] Run: crio --version
	I0617 12:02:31.487696  164809 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0617 12:02:29.416369  166103 node_ready.go:53] node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:31.417357  166103 node_ready.go:53] node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:28.126854  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:28.627278  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:29.126577  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:29.626475  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:30.127193  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:30.627229  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:31.126478  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:31.626336  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:32.126398  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:32.627005  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:28.724801  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:30.726589  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:33.225707  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:31.488972  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetIP
	I0617 12:02:31.491812  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:31.492191  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:31.492220  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:31.492411  164809 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0617 12:02:31.497100  164809 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:02:31.510949  164809 kubeadm.go:877] updating cluster {Name:no-preload-152830 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:no-preload-152830 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 12:02:31.511079  164809 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 12:02:31.511114  164809 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:02:31.546350  164809 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0617 12:02:31.546377  164809 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.1 registry.k8s.io/kube-controller-manager:v1.30.1 registry.k8s.io/kube-scheduler:v1.30.1 registry.k8s.io/kube-proxy:v1.30.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0617 12:02:31.546440  164809 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:31.546452  164809 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0617 12:02:31.546478  164809 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0617 12:02:31.546485  164809 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0617 12:02:31.546513  164809 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0617 12:02:31.546513  164809 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0617 12:02:31.546458  164809 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0617 12:02:31.546569  164809 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0617 12:02:31.548101  164809 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0617 12:02:31.548123  164809 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0617 12:02:31.548123  164809 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0617 12:02:31.548137  164809 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:31.548101  164809 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0617 12:02:31.548104  164809 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0617 12:02:31.548103  164809 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0617 12:02:31.548427  164809 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0617 12:02:31.714107  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0617 12:02:31.714819  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0617 12:02:31.715764  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0617 12:02:31.721844  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.1
	I0617 12:02:31.722172  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1
	I0617 12:02:31.739873  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1
	I0617 12:02:31.746705  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1
	I0617 12:02:31.814194  164809 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0617 12:02:31.814235  164809 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0617 12:02:31.814273  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.849549  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:31.950803  164809 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0617 12:02:31.950858  164809 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0617 12:02:31.950907  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.950934  164809 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.1" does not exist at hash "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c" in container runtime
	I0617 12:02:31.950959  164809 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0617 12:02:31.950992  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.951005  164809 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.1" does not exist at hash "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035" in container runtime
	I0617 12:02:31.951030  164809 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.1" does not exist at hash "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a" in container runtime
	I0617 12:02:31.951053  164809 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.1
	I0617 12:02:31.951090  164809 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.1" does not exist at hash "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd" in container runtime
	I0617 12:02:31.951103  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.951113  164809 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.1
	I0617 12:02:31.951146  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.951053  164809 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.1
	I0617 12:02:31.951179  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.951217  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0617 12:02:31.951266  164809 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0617 12:02:31.951289  164809 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:31.951319  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.967596  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.1
	I0617 12:02:31.967802  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0617 12:02:32.018505  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:32.018542  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1
	I0617 12:02:32.018623  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.1
	I0617 12:02:32.018664  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0617 12:02:32.018738  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.1
	I0617 12:02:32.018755  164809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0617 12:02:32.026154  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1
	I0617 12:02:32.026270  164809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.1
	I0617 12:02:32.046161  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0617 12:02:32.046288  164809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0617 12:02:32.126665  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0617 12:02:32.126755  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0617 12:02:32.126765  164809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0617 12:02:32.126814  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1
	I0617 12:02:32.126829  164809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0617 12:02:32.126867  164809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0617 12:02:32.126898  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0617 12:02:32.126911  164809 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0617 12:02:32.126935  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0617 12:02:32.126965  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1
	I0617 12:02:32.127008  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.1 (exists)
	I0617 12:02:32.127058  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0617 12:02:32.127060  164809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0617 12:02:32.142790  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.1 (exists)
	I0617 12:02:32.142816  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.1 (exists)
	I0617 12:02:32.143132  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0617 12:02:32.915885  166103 node_ready.go:49] node "default-k8s-diff-port-991309" has status "Ready":"True"
	I0617 12:02:32.915912  166103 node_ready.go:38] duration metric: took 7.503979113s for node "default-k8s-diff-port-991309" to be "Ready" ...
	I0617 12:02:32.915924  166103 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:02:32.921198  166103 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:34.927290  166103 pod_ready.go:102] pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:33.126753  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:33.627017  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:34.126558  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:34.626976  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:35.126410  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:35.627309  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:36.126958  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:36.626349  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:37.126815  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:37.627332  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:35.724326  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:37.725145  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:36.125679  164809 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.1: (3.998551072s)
	I0617 12:02:36.125727  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.1 (exists)
	I0617 12:02:36.125773  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.998809852s)
	I0617 12:02:36.125804  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0617 12:02:36.125838  164809 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.1
	I0617 12:02:36.125894  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1
	I0617 12:02:37.885028  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1: (1.759100554s)
	I0617 12:02:37.885054  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 from cache
	I0617 12:02:37.885073  164809 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0617 12:02:37.885122  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0617 12:02:37.429419  166103 pod_ready.go:102] pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:39.933476  166103 pod_ready.go:92] pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:39.933508  166103 pod_ready.go:81] duration metric: took 7.012285571s for pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.933521  166103 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.940139  166103 pod_ready.go:92] pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:39.940162  166103 pod_ready.go:81] duration metric: took 6.633405ms for pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.940175  166103 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.945285  166103 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:39.945305  166103 pod_ready.go:81] duration metric: took 5.12303ms for pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.945317  166103 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.950992  166103 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:39.951021  166103 pod_ready.go:81] duration metric: took 5.6962ms for pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.951034  166103 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jn5kp" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.955874  166103 pod_ready.go:92] pod "kube-proxy-jn5kp" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:39.955894  166103 pod_ready.go:81] duration metric: took 4.852842ms for pod "kube-proxy-jn5kp" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.955905  166103 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:40.327000  166103 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:40.327035  166103 pod_ready.go:81] duration metric: took 371.121545ms for pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:40.327049  166103 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:42.334620  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:38.126868  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:38.627367  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:39.127148  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:39.626571  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:40.126379  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:40.626747  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:41.126485  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:41.626372  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:42.126904  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:42.627293  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:39.727666  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:42.223700  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:39.992863  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.10770953s)
	I0617 12:02:39.992903  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0617 12:02:39.992934  164809 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0617 12:02:39.992989  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0617 12:02:41.851420  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1: (1.858400961s)
	I0617 12:02:41.851452  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 from cache
	I0617 12:02:41.851508  164809 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0617 12:02:41.851578  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0617 12:02:44.833842  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:46.834443  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:43.127137  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:43.626521  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:44.127017  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:44.626824  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:45.126475  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:45.626535  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:46.127423  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:46.626605  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:47.127029  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:47.627431  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:44.224685  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:46.225071  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:44.211669  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1: (2.360046418s)
	I0617 12:02:44.211702  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 from cache
	I0617 12:02:44.211726  164809 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0617 12:02:44.211795  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0617 12:02:45.162389  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0617 12:02:45.162456  164809 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0617 12:02:45.162542  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0617 12:02:47.414088  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1: (2.251500525s)
	I0617 12:02:47.414130  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 from cache
	I0617 12:02:47.414164  164809 cache_images.go:123] Successfully loaded all cached images
	I0617 12:02:47.414172  164809 cache_images.go:92] duration metric: took 15.867782566s to LoadCachedImages
	I0617 12:02:47.414195  164809 kubeadm.go:928] updating node { 192.168.39.173 8443 v1.30.1 crio true true} ...
	I0617 12:02:47.414359  164809 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-152830 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.173
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:no-preload-152830 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 12:02:47.414451  164809 ssh_runner.go:195] Run: crio config
	I0617 12:02:47.466472  164809 cni.go:84] Creating CNI manager for ""
	I0617 12:02:47.466493  164809 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:02:47.466503  164809 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 12:02:47.466531  164809 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.173 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-152830 NodeName:no-preload-152830 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.173"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.173 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0617 12:02:47.466716  164809 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.173
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-152830"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.173
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.173"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0617 12:02:47.466793  164809 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0617 12:02:47.478163  164809 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 12:02:47.478255  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 12:02:47.488014  164809 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0617 12:02:47.505143  164809 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 12:02:47.522481  164809 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0617 12:02:47.545714  164809 ssh_runner.go:195] Run: grep 192.168.39.173	control-plane.minikube.internal$ /etc/hosts
	I0617 12:02:47.551976  164809 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.173	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:02:47.565374  164809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:02:47.694699  164809 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:02:47.714017  164809 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830 for IP: 192.168.39.173
	I0617 12:02:47.714044  164809 certs.go:194] generating shared ca certs ...
	I0617 12:02:47.714064  164809 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:02:47.714260  164809 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 12:02:47.714321  164809 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 12:02:47.714335  164809 certs.go:256] generating profile certs ...
	I0617 12:02:47.714419  164809 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/client.key
	I0617 12:02:47.714504  164809 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/apiserver.key.d2d5b47b
	I0617 12:02:47.714547  164809 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/proxy-client.key
	I0617 12:02:47.714655  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 12:02:47.714684  164809 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 12:02:47.714693  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 12:02:47.714719  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 12:02:47.714745  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 12:02:47.714780  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 12:02:47.714815  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:02:47.715578  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 12:02:47.767301  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 12:02:47.804542  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 12:02:47.842670  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 12:02:47.874533  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0617 12:02:47.909752  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 12:02:47.940097  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 12:02:47.965441  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0617 12:02:47.990862  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 12:02:48.015935  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 12:02:48.041408  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 12:02:48.066557  164809 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 12:02:48.084630  164809 ssh_runner.go:195] Run: openssl version
	I0617 12:02:48.091098  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 12:02:48.102447  164809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 12:02:48.107238  164809 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 12:02:48.107299  164809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 12:02:48.113682  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
	I0617 12:02:48.124472  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 12:02:48.135897  164809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 12:02:48.140859  164809 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 12:02:48.140915  164809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 12:02:48.147113  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 12:02:48.158192  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 12:02:48.169483  164809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:48.174241  164809 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:48.174294  164809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:48.180093  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 12:02:48.191082  164809 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 12:02:48.195770  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 12:02:48.201743  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 12:02:48.207452  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 12:02:48.213492  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 12:02:48.219435  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 12:02:48.226202  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0617 12:02:48.232291  164809 kubeadm.go:391] StartCluster: {Name:no-preload-152830 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.1 ClusterName:no-preload-152830 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:02:48.232409  164809 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 12:02:48.232448  164809 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:02:48.272909  164809 cri.go:89] found id: ""
	I0617 12:02:48.272972  164809 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0617 12:02:48.284185  164809 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0617 12:02:48.284212  164809 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0617 12:02:48.284221  164809 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0617 12:02:48.284266  164809 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0617 12:02:48.294653  164809 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0617 12:02:48.296091  164809 kubeconfig.go:125] found "no-preload-152830" server: "https://192.168.39.173:8443"
	I0617 12:02:48.298438  164809 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0617 12:02:48.307905  164809 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.173
	I0617 12:02:48.307932  164809 kubeadm.go:1154] stopping kube-system containers ...
	I0617 12:02:48.307945  164809 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0617 12:02:48.307990  164809 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:02:48.356179  164809 cri.go:89] found id: ""
	I0617 12:02:48.356247  164809 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0617 12:02:49.333637  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:51.333927  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:48.127215  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:48.627013  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:49.126439  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:49.626831  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:50.126521  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:50.627178  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:51.126830  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:51.627091  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:52.127343  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:52.626635  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:48.724828  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:51.225321  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:48.377824  164809 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:02:48.389213  164809 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:02:48.389236  164809 kubeadm.go:156] found existing configuration files:
	
	I0617 12:02:48.389287  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:02:48.398559  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:02:48.398605  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:02:48.408243  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:02:48.417407  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:02:48.417451  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:02:48.427333  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:02:48.436224  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:02:48.436278  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:02:48.445378  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:02:48.454119  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:02:48.454170  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 12:02:48.463097  164809 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:02:48.472479  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:48.584018  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:49.392310  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:49.599840  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:49.662845  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:49.794357  164809 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:02:49.794459  164809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:50.295507  164809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:50.794968  164809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:50.832967  164809 api_server.go:72] duration metric: took 1.038610813s to wait for apiserver process to appear ...
	I0617 12:02:50.832993  164809 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:02:50.833017  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:50.833494  164809 api_server.go:269] stopped: https://192.168.39.173:8443/healthz: Get "https://192.168.39.173:8443/healthz": dial tcp 192.168.39.173:8443: connect: connection refused
	I0617 12:02:51.333910  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:53.534213  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:02:53.534246  164809 api_server.go:103] status: https://192.168.39.173:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:02:53.534265  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:53.579857  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:02:53.579887  164809 api_server.go:103] status: https://192.168.39.173:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:02:53.833207  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:53.863430  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:02:53.863485  164809 api_server.go:103] status: https://192.168.39.173:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:02:54.333557  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:54.342474  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:02:54.342507  164809 api_server.go:103] status: https://192.168.39.173:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:02:54.834092  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:54.839578  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 200:
	ok
	I0617 12:02:54.854075  164809 api_server.go:141] control plane version: v1.30.1
	I0617 12:02:54.854113  164809 api_server.go:131] duration metric: took 4.021112065s to wait for apiserver health ...
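The healthz polling above progresses from connection refused, to 403 (the anonymous user is rejected until the RBAC bootstrap roles exist), to 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still failing, and finally to 200. The same check can be reproduced by hand; a sketch, using the apiserver address from this log and the standard ?verbose query parameter (which this run did not use):

	# Poll the apiserver health endpoint directly (insecure TLS, anonymous user)
	curl -k 'https://192.168.39.173:8443/healthz?verbose'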
	I0617 12:02:54.854124  164809 cni.go:84] Creating CNI manager for ""
	I0617 12:02:54.854133  164809 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:02:54.856029  164809 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0617 12:02:53.334898  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:55.834490  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:53.126693  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:53.627110  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:54.126653  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:54.626424  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:55.127113  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:55.627373  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:56.126415  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:56.627329  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:57.126797  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:57.627313  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:53.723948  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:56.225000  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:54.857252  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0617 12:02:54.914636  164809 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
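Here the bridge CNI configuration (496 bytes) is copied to /etc/cni/net.d/1-k8s.conflist on the node. A quick, assumed way to confirm it is the only CNI config present, again sketched for the no-preload-152830 profile:

	# List and print the CNI config minikube just wrote
	minikube -p no-preload-152830 ssh -- sudo ls /etc/cni/net.d/
	minikube -p no-preload-152830 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist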
	I0617 12:02:54.961745  164809 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:02:54.975140  164809 system_pods.go:59] 8 kube-system pods found
	I0617 12:02:54.975183  164809 system_pods.go:61] "coredns-7db6d8ff4d-7lfns" [83cf7962-1aa7-4de6-9e77-a03dee972ead] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0617 12:02:54.975192  164809 system_pods.go:61] "etcd-no-preload-152830" [27dace2b-9d7d-44e8-8f86-b20ce49c8afa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0617 12:02:54.975202  164809 system_pods.go:61] "kube-apiserver-no-preload-152830" [c102caaf-2289-4171-8b1f-89df4f6edf39] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0617 12:02:54.975213  164809 system_pods.go:61] "kube-controller-manager-no-preload-152830" [534a8f45-7886-4e12-b728-df686c2f8668] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0617 12:02:54.975220  164809 system_pods.go:61] "kube-proxy-bblgc" [70fa474e-cb6a-4e31-b978-78b47e9952a8] Running
	I0617 12:02:54.975228  164809 system_pods.go:61] "kube-scheduler-no-preload-152830" [17d696bd-55b3-4080-a63d-944216adf1d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0617 12:02:54.975240  164809 system_pods.go:61] "metrics-server-569cc877fc-97tqn" [0ce37c88-fd22-4001-96c4-d0f5239c0fd4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:02:54.975253  164809 system_pods.go:61] "storage-provisioner" [61dafb85-965b-4961-b9e1-e3202795caef] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0617 12:02:54.975268  164809 system_pods.go:74] duration metric: took 13.492652ms to wait for pod list to return data ...
	I0617 12:02:54.975279  164809 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:02:54.980820  164809 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:02:54.980842  164809 node_conditions.go:123] node cpu capacity is 2
	I0617 12:02:54.980854  164809 node_conditions.go:105] duration metric: took 5.568037ms to run NodePressure ...
	I0617 12:02:54.980873  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:55.284669  164809 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0617 12:02:55.289433  164809 kubeadm.go:733] kubelet initialised
	I0617 12:02:55.289453  164809 kubeadm.go:734] duration metric: took 4.759785ms waiting for restarted kubelet to initialise ...
	I0617 12:02:55.289461  164809 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:02:55.294149  164809 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7lfns" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:55.298081  164809 pod_ready.go:97] node "no-preload-152830" hosting pod "coredns-7db6d8ff4d-7lfns" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.298100  164809 pod_ready.go:81] duration metric: took 3.929974ms for pod "coredns-7db6d8ff4d-7lfns" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:55.298109  164809 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-152830" hosting pod "coredns-7db6d8ff4d-7lfns" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.298116  164809 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:55.302552  164809 pod_ready.go:97] node "no-preload-152830" hosting pod "etcd-no-preload-152830" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.302572  164809 pod_ready.go:81] duration metric: took 4.444579ms for pod "etcd-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:55.302580  164809 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-152830" hosting pod "etcd-no-preload-152830" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.302585  164809 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:55.306375  164809 pod_ready.go:97] node "no-preload-152830" hosting pod "kube-apiserver-no-preload-152830" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.306394  164809 pod_ready.go:81] duration metric: took 3.804134ms for pod "kube-apiserver-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:55.306402  164809 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-152830" hosting pod "kube-apiserver-no-preload-152830" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.306407  164809 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:57.313002  164809 pod_ready.go:102] pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:57.834719  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:00.334129  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:58.126744  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:58.627050  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:59.127300  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:59.626694  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:00.127092  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:00.127182  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:00.166116  165698 cri.go:89] found id: ""
	I0617 12:03:00.166145  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.166153  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:00.166159  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:00.166208  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:00.200990  165698 cri.go:89] found id: ""
	I0617 12:03:00.201020  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.201029  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:00.201034  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:00.201086  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:00.236394  165698 cri.go:89] found id: ""
	I0617 12:03:00.236422  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.236430  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:00.236438  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:00.236496  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:00.274257  165698 cri.go:89] found id: ""
	I0617 12:03:00.274285  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.274293  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:00.274299  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:00.274350  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:00.307425  165698 cri.go:89] found id: ""
	I0617 12:03:00.307452  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.307481  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:00.307490  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:00.307557  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:00.343420  165698 cri.go:89] found id: ""
	I0617 12:03:00.343446  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.343472  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:00.343480  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:00.343541  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:00.378301  165698 cri.go:89] found id: ""
	I0617 12:03:00.378325  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.378333  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:00.378338  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:00.378383  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:00.414985  165698 cri.go:89] found id: ""
	I0617 12:03:00.415011  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.415018  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:00.415033  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:00.415090  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:00.468230  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:00.468262  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:00.481970  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:00.481998  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:00.612881  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:00.612911  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:00.612929  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:00.676110  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:00.676145  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
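Process 165698 (an old-k8s-version profile, judging by the v1.20.0 kubectl binary path) repeatedly finds zero CRI containers for every control-plane component, so each cycle falls back to gathering kubelet, dmesg, CRI-O and container-status logs. The same diagnostics can be run by hand on that node; this sketch reuses the commands from the log, with --no-pager added for interactive use:

	# Check why no kube-apiserver container exists: query CRI-O, then the kubelet and CRI-O journals
	sudo crictl ps -a --name=kube-apiserver
	sudo journalctl -u crio -n 400 --no-pager
	sudo journalctl -u kubelet -n 400 --no-pager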
	I0617 12:02:58.725617  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:01.225227  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:59.818063  164809 pod_ready.go:102] pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:02.312898  164809 pod_ready.go:102] pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:03.313300  164809 pod_ready.go:92] pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:03:03.313332  164809 pod_ready.go:81] duration metric: took 8.006915719s for pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:03:03.313347  164809 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bblgc" in "kube-system" namespace to be "Ready" ...
	I0617 12:03:03.319094  164809 pod_ready.go:92] pod "kube-proxy-bblgc" in "kube-system" namespace has status "Ready":"True"
	I0617 12:03:03.319116  164809 pod_ready.go:81] duration metric: took 5.762584ms for pod "kube-proxy-bblgc" in "kube-system" namespace to be "Ready" ...
	I0617 12:03:03.319137  164809 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:03:02.833031  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:04.834158  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:07.334894  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:03.216960  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:03.231208  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:03.231277  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:03.267056  165698 cri.go:89] found id: ""
	I0617 12:03:03.267088  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.267096  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:03.267103  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:03.267152  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:03.302797  165698 cri.go:89] found id: ""
	I0617 12:03:03.302832  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.302844  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:03.302852  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:03.302905  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:03.343401  165698 cri.go:89] found id: ""
	I0617 12:03:03.343435  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.343445  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:03.343465  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:03.343530  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:03.380841  165698 cri.go:89] found id: ""
	I0617 12:03:03.380871  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.380883  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:03.380890  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:03.380951  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:03.420098  165698 cri.go:89] found id: ""
	I0617 12:03:03.420130  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.420142  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:03.420150  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:03.420213  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:03.458476  165698 cri.go:89] found id: ""
	I0617 12:03:03.458506  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.458515  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:03.458521  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:03.458586  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:03.497127  165698 cri.go:89] found id: ""
	I0617 12:03:03.497156  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.497164  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:03.497170  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:03.497217  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:03.538759  165698 cri.go:89] found id: ""
	I0617 12:03:03.538794  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.538806  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:03.538825  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:03.538841  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:03.584701  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:03.584743  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:03.636981  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:03.637030  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:03.670032  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:03.670077  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:03.757012  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:03.757038  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:03.757056  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:06.327680  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:06.341998  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:06.342068  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:06.383353  165698 cri.go:89] found id: ""
	I0617 12:03:06.383385  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.383394  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:06.383400  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:06.383448  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:06.418806  165698 cri.go:89] found id: ""
	I0617 12:03:06.418850  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.418862  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:06.418870  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:06.418945  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:06.458151  165698 cri.go:89] found id: ""
	I0617 12:03:06.458192  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.458204  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:06.458219  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:06.458289  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:06.496607  165698 cri.go:89] found id: ""
	I0617 12:03:06.496637  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.496645  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:06.496651  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:06.496703  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:06.534900  165698 cri.go:89] found id: ""
	I0617 12:03:06.534938  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.534951  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:06.534959  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:06.535017  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:06.572388  165698 cri.go:89] found id: ""
	I0617 12:03:06.572413  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.572422  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:06.572428  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:06.572496  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:06.608072  165698 cri.go:89] found id: ""
	I0617 12:03:06.608104  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.608115  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:06.608121  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:06.608175  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:06.647727  165698 cri.go:89] found id: ""
	I0617 12:03:06.647760  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.647772  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:06.647784  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:06.647800  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:06.720887  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:06.720919  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:06.761128  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:06.761153  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:06.815524  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:06.815557  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:06.830275  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:06.830304  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:06.907861  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:03.725650  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:06.225601  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:05.327062  164809 pod_ready.go:102] pod "kube-scheduler-no-preload-152830" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:07.325033  164809 pod_ready.go:92] pod "kube-scheduler-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:03:07.325061  164809 pod_ready.go:81] duration metric: took 4.005914462s for pod "kube-scheduler-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:03:07.325072  164809 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace to be "Ready" ...
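From this point the remaining waits in these logs are all on metrics-server pods that keep reporting Ready: "False". A sketch for inspecting one of them by hand, assuming the upstream k8s-app=metrics-server label (the pod name below is taken from the surrounding lines; the label is not from this log):

	# Inspect the stuck metrics-server pod and its events
	kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide
	kubectl -n kube-system describe pod metrics-server-569cc877fc-97tqn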
	I0617 12:03:09.835374  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:12.334481  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:09.408117  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:09.420916  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:09.420978  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:09.453830  165698 cri.go:89] found id: ""
	I0617 12:03:09.453860  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.453870  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:09.453878  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:09.453937  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:09.492721  165698 cri.go:89] found id: ""
	I0617 12:03:09.492756  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.492766  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:09.492775  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:09.492849  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:09.530956  165698 cri.go:89] found id: ""
	I0617 12:03:09.530984  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.530995  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:09.531001  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:09.531067  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:09.571534  165698 cri.go:89] found id: ""
	I0617 12:03:09.571564  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.571576  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:09.571584  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:09.571646  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:09.609740  165698 cri.go:89] found id: ""
	I0617 12:03:09.609776  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.609788  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:09.609797  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:09.609864  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:09.649958  165698 cri.go:89] found id: ""
	I0617 12:03:09.649998  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.650010  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:09.650020  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:09.650087  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:09.706495  165698 cri.go:89] found id: ""
	I0617 12:03:09.706532  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.706544  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:09.706553  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:09.706638  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:09.742513  165698 cri.go:89] found id: ""
	I0617 12:03:09.742541  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.742549  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:09.742559  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:09.742571  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:09.756470  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:09.756502  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:09.840878  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:09.840897  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:09.840913  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:09.922329  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:09.922370  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:09.967536  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:09.967573  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:12.521031  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:12.534507  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:12.534595  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:12.569895  165698 cri.go:89] found id: ""
	I0617 12:03:12.569930  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.569942  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:12.569950  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:12.570005  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:12.606857  165698 cri.go:89] found id: ""
	I0617 12:03:12.606888  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.606900  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:12.606922  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:12.606998  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:12.640781  165698 cri.go:89] found id: ""
	I0617 12:03:12.640807  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.640818  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:12.640826  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:12.640910  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:12.674097  165698 cri.go:89] found id: ""
	I0617 12:03:12.674124  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.674134  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:12.674142  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:12.674201  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:12.708662  165698 cri.go:89] found id: ""
	I0617 12:03:12.708689  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.708699  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:12.708707  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:12.708791  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:12.744891  165698 cri.go:89] found id: ""
	I0617 12:03:12.744927  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.744938  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:12.744947  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:12.745010  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:12.778440  165698 cri.go:89] found id: ""
	I0617 12:03:12.778466  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.778474  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:12.778480  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:12.778528  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:12.814733  165698 cri.go:89] found id: ""
	I0617 12:03:12.814762  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.814770  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:12.814780  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:12.814820  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:12.887741  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:12.887762  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:12.887775  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:12.968439  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:12.968476  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:08.725485  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:11.224357  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:09.331004  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:11.331666  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:13.332269  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:14.335086  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:16.836397  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:13.008926  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:13.008955  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:13.060432  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:13.060468  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:15.575450  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:15.589178  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:15.589244  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:15.625554  165698 cri.go:89] found id: ""
	I0617 12:03:15.625589  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.625601  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:15.625608  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:15.625668  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:15.659023  165698 cri.go:89] found id: ""
	I0617 12:03:15.659054  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.659066  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:15.659074  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:15.659138  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:15.693777  165698 cri.go:89] found id: ""
	I0617 12:03:15.693803  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.693811  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:15.693817  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:15.693875  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:15.729098  165698 cri.go:89] found id: ""
	I0617 12:03:15.729133  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.729141  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:15.729147  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:15.729194  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:15.762639  165698 cri.go:89] found id: ""
	I0617 12:03:15.762668  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.762679  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:15.762687  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:15.762744  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:15.797446  165698 cri.go:89] found id: ""
	I0617 12:03:15.797475  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.797484  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:15.797489  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:15.797537  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:15.832464  165698 cri.go:89] found id: ""
	I0617 12:03:15.832503  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.832513  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:15.832521  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:15.832579  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:15.867868  165698 cri.go:89] found id: ""
	I0617 12:03:15.867898  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.867906  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:15.867916  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:15.867928  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:15.882151  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:15.882181  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:15.946642  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:15.946666  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:15.946682  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:16.027062  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:16.027098  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:16.082704  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:16.082735  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:13.725854  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:16.225670  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:15.333470  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:17.832368  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:19.334102  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:21.334529  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:18.651554  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:18.665096  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:18.665166  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:18.703099  165698 cri.go:89] found id: ""
	I0617 12:03:18.703127  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.703138  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:18.703147  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:18.703210  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:18.737945  165698 cri.go:89] found id: ""
	I0617 12:03:18.737985  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.737997  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:18.738005  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:18.738079  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:18.777145  165698 cri.go:89] found id: ""
	I0617 12:03:18.777172  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.777181  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:18.777187  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:18.777255  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:18.813171  165698 cri.go:89] found id: ""
	I0617 12:03:18.813198  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.813207  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:18.813213  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:18.813270  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:18.854459  165698 cri.go:89] found id: ""
	I0617 12:03:18.854490  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.854501  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:18.854510  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:18.854607  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:18.893668  165698 cri.go:89] found id: ""
	I0617 12:03:18.893703  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.893712  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:18.893718  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:18.893796  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:18.928919  165698 cri.go:89] found id: ""
	I0617 12:03:18.928971  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.928983  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:18.928993  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:18.929068  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:18.965770  165698 cri.go:89] found id: ""
	I0617 12:03:18.965800  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.965808  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:18.965817  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:18.965829  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:19.020348  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:19.020392  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:19.034815  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:19.034845  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:19.109617  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:19.109643  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:19.109660  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:19.186843  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:19.186890  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:21.732720  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:21.747032  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:21.747113  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:21.789962  165698 cri.go:89] found id: ""
	I0617 12:03:21.789991  165698 logs.go:276] 0 containers: []
	W0617 12:03:21.789999  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:21.790011  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:21.790066  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:21.833865  165698 cri.go:89] found id: ""
	I0617 12:03:21.833903  165698 logs.go:276] 0 containers: []
	W0617 12:03:21.833913  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:21.833921  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:21.833985  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:21.903891  165698 cri.go:89] found id: ""
	I0617 12:03:21.903929  165698 logs.go:276] 0 containers: []
	W0617 12:03:21.903941  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:21.903950  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:21.904020  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:21.941369  165698 cri.go:89] found id: ""
	I0617 12:03:21.941396  165698 logs.go:276] 0 containers: []
	W0617 12:03:21.941407  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:21.941415  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:21.941473  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:21.977767  165698 cri.go:89] found id: ""
	I0617 12:03:21.977797  165698 logs.go:276] 0 containers: []
	W0617 12:03:21.977808  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:21.977817  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:21.977880  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:22.016422  165698 cri.go:89] found id: ""
	I0617 12:03:22.016450  165698 logs.go:276] 0 containers: []
	W0617 12:03:22.016463  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:22.016471  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:22.016536  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:22.056871  165698 cri.go:89] found id: ""
	I0617 12:03:22.056904  165698 logs.go:276] 0 containers: []
	W0617 12:03:22.056914  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:22.056922  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:22.056982  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:22.093244  165698 cri.go:89] found id: ""
	I0617 12:03:22.093288  165698 logs.go:276] 0 containers: []
	W0617 12:03:22.093300  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:22.093313  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:22.093331  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:22.144722  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:22.144756  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:22.159047  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:22.159084  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:22.232077  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:22.232100  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:22.232112  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:22.308241  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:22.308276  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:18.724648  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:21.224616  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:19.832543  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:21.838952  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:23.834640  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:26.336770  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:24.851740  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:24.866597  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:24.866659  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:24.902847  165698 cri.go:89] found id: ""
	I0617 12:03:24.902879  165698 logs.go:276] 0 containers: []
	W0617 12:03:24.902892  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:24.902900  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:24.902973  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:24.940042  165698 cri.go:89] found id: ""
	I0617 12:03:24.940079  165698 logs.go:276] 0 containers: []
	W0617 12:03:24.940088  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:24.940094  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:24.940150  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:24.975160  165698 cri.go:89] found id: ""
	I0617 12:03:24.975190  165698 logs.go:276] 0 containers: []
	W0617 12:03:24.975202  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:24.975211  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:24.975280  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:25.012618  165698 cri.go:89] found id: ""
	I0617 12:03:25.012649  165698 logs.go:276] 0 containers: []
	W0617 12:03:25.012657  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:25.012663  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:25.012712  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:25.051166  165698 cri.go:89] found id: ""
	I0617 12:03:25.051210  165698 logs.go:276] 0 containers: []
	W0617 12:03:25.051223  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:25.051230  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:25.051309  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:25.090112  165698 cri.go:89] found id: ""
	I0617 12:03:25.090144  165698 logs.go:276] 0 containers: []
	W0617 12:03:25.090156  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:25.090164  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:25.090230  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:25.133258  165698 cri.go:89] found id: ""
	I0617 12:03:25.133285  165698 logs.go:276] 0 containers: []
	W0617 12:03:25.133294  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:25.133301  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:25.133366  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:25.177445  165698 cri.go:89] found id: ""
	I0617 12:03:25.177473  165698 logs.go:276] 0 containers: []
	W0617 12:03:25.177481  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:25.177490  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:25.177505  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:25.250685  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:25.250710  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:25.250727  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:25.335554  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:25.335586  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:25.377058  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:25.377093  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:25.431425  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:25.431471  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:27.945063  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:27.959396  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:27.959469  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:23.725126  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:26.224114  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:28.224895  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:23.840550  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:26.333142  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:28.334577  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:28.337133  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:30.834142  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:27.994554  165698 cri.go:89] found id: ""
	I0617 12:03:27.994582  165698 logs.go:276] 0 containers: []
	W0617 12:03:27.994591  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:27.994598  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:27.994660  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:28.030168  165698 cri.go:89] found id: ""
	I0617 12:03:28.030200  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.030208  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:28.030215  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:28.030263  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:28.066213  165698 cri.go:89] found id: ""
	I0617 12:03:28.066244  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.066255  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:28.066261  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:28.066322  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:28.102855  165698 cri.go:89] found id: ""
	I0617 12:03:28.102880  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.102888  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:28.102894  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:28.102942  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:28.138698  165698 cri.go:89] found id: ""
	I0617 12:03:28.138734  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.138748  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:28.138755  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:28.138815  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:28.173114  165698 cri.go:89] found id: ""
	I0617 12:03:28.173140  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.173148  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:28.173154  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:28.173213  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:28.208901  165698 cri.go:89] found id: ""
	I0617 12:03:28.208936  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.208947  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:28.208955  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:28.209016  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:28.244634  165698 cri.go:89] found id: ""
	I0617 12:03:28.244667  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.244678  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:28.244687  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:28.244699  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:28.300303  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:28.300336  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:28.314227  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:28.314272  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:28.394322  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:28.394350  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:28.394367  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:28.483381  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:28.483413  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:31.026433  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:31.040820  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:31.040888  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:31.086409  165698 cri.go:89] found id: ""
	I0617 12:03:31.086440  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.086453  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:31.086461  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:31.086548  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:31.122810  165698 cri.go:89] found id: ""
	I0617 12:03:31.122836  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.122843  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:31.122849  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:31.122910  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:31.157634  165698 cri.go:89] found id: ""
	I0617 12:03:31.157669  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.157680  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:31.157687  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:31.157750  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:31.191498  165698 cri.go:89] found id: ""
	I0617 12:03:31.191529  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.191541  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:31.191549  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:31.191619  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:31.225575  165698 cri.go:89] found id: ""
	I0617 12:03:31.225599  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.225609  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:31.225616  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:31.225670  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:31.269780  165698 cri.go:89] found id: ""
	I0617 12:03:31.269810  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.269819  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:31.269825  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:31.269874  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:31.307689  165698 cri.go:89] found id: ""
	I0617 12:03:31.307717  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.307726  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:31.307733  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:31.307789  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:31.344160  165698 cri.go:89] found id: ""
	I0617 12:03:31.344190  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.344200  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:31.344210  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:31.344223  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:31.397627  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:31.397667  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:31.411316  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:31.411347  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:31.486258  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:31.486280  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:31.486297  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:31.568067  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:31.568106  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:30.725183  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:33.224294  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:30.834377  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:33.333070  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:33.335067  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:35.335626  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:37.336117  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:34.111424  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:34.127178  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:34.127255  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:34.165900  165698 cri.go:89] found id: ""
	I0617 12:03:34.165936  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.165947  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:34.165955  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:34.166042  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:34.203556  165698 cri.go:89] found id: ""
	I0617 12:03:34.203588  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.203597  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:34.203606  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:34.203659  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:34.243418  165698 cri.go:89] found id: ""
	I0617 12:03:34.243478  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.243490  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:34.243499  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:34.243661  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:34.281542  165698 cri.go:89] found id: ""
	I0617 12:03:34.281569  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.281577  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:34.281582  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:34.281635  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:34.316304  165698 cri.go:89] found id: ""
	I0617 12:03:34.316333  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.316341  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:34.316347  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:34.316403  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:34.357416  165698 cri.go:89] found id: ""
	I0617 12:03:34.357455  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.357467  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:34.357476  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:34.357547  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:34.392069  165698 cri.go:89] found id: ""
	I0617 12:03:34.392101  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.392112  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:34.392120  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:34.392185  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:34.427203  165698 cri.go:89] found id: ""
	I0617 12:03:34.427235  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.427247  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:34.427258  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:34.427317  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:34.441346  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:34.441375  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:34.519306  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:34.519331  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:34.519349  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:34.598802  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:34.598843  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:34.637521  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:34.637554  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:37.191259  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:37.205882  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:37.205947  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:37.242175  165698 cri.go:89] found id: ""
	I0617 12:03:37.242202  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.242209  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:37.242215  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:37.242278  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:37.278004  165698 cri.go:89] found id: ""
	I0617 12:03:37.278029  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.278037  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:37.278043  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:37.278091  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:37.322148  165698 cri.go:89] found id: ""
	I0617 12:03:37.322179  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.322190  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:37.322198  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:37.322259  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:37.358612  165698 cri.go:89] found id: ""
	I0617 12:03:37.358638  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.358649  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:37.358657  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:37.358718  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:37.393070  165698 cri.go:89] found id: ""
	I0617 12:03:37.393104  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.393115  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:37.393123  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:37.393187  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:37.429420  165698 cri.go:89] found id: ""
	I0617 12:03:37.429452  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.429465  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:37.429475  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:37.429541  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:37.464485  165698 cri.go:89] found id: ""
	I0617 12:03:37.464509  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.464518  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:37.464523  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:37.464584  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:37.501283  165698 cri.go:89] found id: ""
	I0617 12:03:37.501308  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.501316  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:37.501326  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:37.501338  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:37.552848  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:37.552889  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:37.566715  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:37.566746  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:37.643560  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:37.643584  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:37.643601  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:37.722895  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:37.722935  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:35.225442  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:37.225962  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:35.836693  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:38.332297  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:39.834655  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:42.333686  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:40.268199  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:40.281832  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:40.281905  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:40.317094  165698 cri.go:89] found id: ""
	I0617 12:03:40.317137  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.317150  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:40.317159  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:40.317229  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:40.355786  165698 cri.go:89] found id: ""
	I0617 12:03:40.355819  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.355829  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:40.355836  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:40.355903  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:40.394282  165698 cri.go:89] found id: ""
	I0617 12:03:40.394312  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.394323  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:40.394332  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:40.394388  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:40.433773  165698 cri.go:89] found id: ""
	I0617 12:03:40.433806  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.433817  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:40.433825  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:40.433875  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:40.469937  165698 cri.go:89] found id: ""
	I0617 12:03:40.469973  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.469985  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:40.469998  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:40.470067  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:40.503565  165698 cri.go:89] found id: ""
	I0617 12:03:40.503590  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.503599  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:40.503605  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:40.503654  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:40.538349  165698 cri.go:89] found id: ""
	I0617 12:03:40.538383  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.538394  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:40.538402  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:40.538461  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:40.576036  165698 cri.go:89] found id: ""
	I0617 12:03:40.576066  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.576075  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:40.576085  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:40.576100  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:40.617804  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:40.617833  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:40.668126  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:40.668162  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:40.682618  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:40.682655  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:40.759597  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:40.759619  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:40.759638  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:39.725534  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:42.223320  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:40.336855  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:42.832597  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:44.334430  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:46.835809  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:43.343404  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:43.357886  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:43.357953  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:43.398262  165698 cri.go:89] found id: ""
	I0617 12:03:43.398290  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.398301  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:43.398310  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:43.398370  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:43.432241  165698 cri.go:89] found id: ""
	I0617 12:03:43.432272  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.432280  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:43.432289  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:43.432348  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:43.466210  165698 cri.go:89] found id: ""
	I0617 12:03:43.466234  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.466241  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:43.466247  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:43.466294  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:43.501677  165698 cri.go:89] found id: ""
	I0617 12:03:43.501711  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.501723  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:43.501731  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:43.501793  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:43.541826  165698 cri.go:89] found id: ""
	I0617 12:03:43.541860  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.541870  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:43.541876  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:43.541941  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:43.576940  165698 cri.go:89] found id: ""
	I0617 12:03:43.576962  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.576970  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:43.576975  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:43.577022  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:43.612592  165698 cri.go:89] found id: ""
	I0617 12:03:43.612627  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.612635  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:43.612643  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:43.612694  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:43.647141  165698 cri.go:89] found id: ""
	I0617 12:03:43.647176  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.647188  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:43.647202  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:43.647220  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:43.698248  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:43.698283  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:43.711686  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:43.711714  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:43.787077  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:43.787101  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:43.787115  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:43.861417  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:43.861455  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:46.402594  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:46.417108  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:46.417185  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:46.453910  165698 cri.go:89] found id: ""
	I0617 12:03:46.453941  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.453952  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:46.453960  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:46.454020  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:46.487239  165698 cri.go:89] found id: ""
	I0617 12:03:46.487268  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.487280  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:46.487288  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:46.487353  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:46.521824  165698 cri.go:89] found id: ""
	I0617 12:03:46.521850  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.521859  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:46.521866  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:46.521929  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:46.557247  165698 cri.go:89] found id: ""
	I0617 12:03:46.557274  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.557282  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:46.557289  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:46.557350  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:46.600354  165698 cri.go:89] found id: ""
	I0617 12:03:46.600383  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.600393  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:46.600402  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:46.600477  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:46.638153  165698 cri.go:89] found id: ""
	I0617 12:03:46.638180  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.638189  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:46.638197  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:46.638255  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:46.672636  165698 cri.go:89] found id: ""
	I0617 12:03:46.672661  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.672669  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:46.672675  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:46.672721  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:46.706431  165698 cri.go:89] found id: ""
	I0617 12:03:46.706468  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.706481  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:46.706493  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:46.706509  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:46.720796  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:46.720842  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:46.801343  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:46.801365  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:46.801379  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:46.883651  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:46.883696  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:46.928594  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:46.928630  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:44.224037  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:46.224076  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:48.224472  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:45.332811  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:47.832461  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:49.334678  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:51.833994  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:49.480413  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:49.495558  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:49.495656  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:49.533281  165698 cri.go:89] found id: ""
	I0617 12:03:49.533313  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.533323  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:49.533330  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:49.533396  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:49.573430  165698 cri.go:89] found id: ""
	I0617 12:03:49.573457  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.573465  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:49.573472  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:49.573532  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:49.608669  165698 cri.go:89] found id: ""
	I0617 12:03:49.608697  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.608705  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:49.608711  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:49.608767  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:49.643411  165698 cri.go:89] found id: ""
	I0617 12:03:49.643449  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.643481  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:49.643490  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:49.643557  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:49.680039  165698 cri.go:89] found id: ""
	I0617 12:03:49.680071  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.680082  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:49.680090  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:49.680148  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:49.717169  165698 cri.go:89] found id: ""
	I0617 12:03:49.717195  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.717203  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:49.717209  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:49.717262  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:49.754585  165698 cri.go:89] found id: ""
	I0617 12:03:49.754615  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.754625  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:49.754633  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:49.754697  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:49.796040  165698 cri.go:89] found id: ""
	I0617 12:03:49.796074  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.796085  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:49.796097  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:49.796112  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:49.873496  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:49.873530  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:49.873547  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:49.961883  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:49.961925  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:50.002975  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:50.003004  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:50.054185  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:50.054224  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:52.568557  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:52.584264  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:52.584337  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:52.622474  165698 cri.go:89] found id: ""
	I0617 12:03:52.622501  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.622509  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:52.622516  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:52.622566  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:52.661012  165698 cri.go:89] found id: ""
	I0617 12:03:52.661045  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.661057  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:52.661066  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:52.661133  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:52.700950  165698 cri.go:89] found id: ""
	I0617 12:03:52.700986  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.700998  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:52.701006  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:52.701075  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:52.735663  165698 cri.go:89] found id: ""
	I0617 12:03:52.735689  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.735696  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:52.735702  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:52.735768  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:52.776540  165698 cri.go:89] found id: ""
	I0617 12:03:52.776568  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.776580  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:52.776589  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:52.776642  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:52.812439  165698 cri.go:89] found id: ""
	I0617 12:03:52.812474  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.812493  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:52.812503  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:52.812567  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:52.849233  165698 cri.go:89] found id: ""
	I0617 12:03:52.849263  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.849273  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:52.849281  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:52.849343  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:52.885365  165698 cri.go:89] found id: ""
	I0617 12:03:52.885395  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.885406  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:52.885419  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:52.885434  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:52.941521  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:52.941553  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:52.955958  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:52.955997  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:03:50.224702  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:52.724247  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:50.332871  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:52.832386  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:53.834382  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:55.834813  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	W0617 12:03:53.029254  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:53.029278  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:53.029291  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:53.104391  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:53.104425  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:55.648578  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:55.662143  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:55.662205  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:55.697623  165698 cri.go:89] found id: ""
	I0617 12:03:55.697662  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.697674  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:55.697682  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:55.697751  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:55.734132  165698 cri.go:89] found id: ""
	I0617 12:03:55.734171  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.734184  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:55.734192  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:55.734265  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:55.774178  165698 cri.go:89] found id: ""
	I0617 12:03:55.774212  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.774222  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:55.774231  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:55.774296  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:55.816427  165698 cri.go:89] found id: ""
	I0617 12:03:55.816460  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.816471  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:55.816480  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:55.816546  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:55.860413  165698 cri.go:89] found id: ""
	I0617 12:03:55.860446  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.860457  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:55.860465  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:55.860532  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:55.897577  165698 cri.go:89] found id: ""
	I0617 12:03:55.897612  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.897622  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:55.897629  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:55.897682  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:55.934163  165698 cri.go:89] found id: ""
	I0617 12:03:55.934200  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.934212  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:55.934220  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:55.934291  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:55.972781  165698 cri.go:89] found id: ""
	I0617 12:03:55.972827  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.972840  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:55.972852  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:55.972867  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:56.027292  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:56.027332  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:56.042304  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:56.042336  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:56.115129  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:56.115159  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:56.115176  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:56.194161  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:56.194200  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:54.728169  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:57.225361  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:54.837170  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:57.333566  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:58.335846  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:00.833987  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:58.734681  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:58.748467  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:58.748534  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:58.786191  165698 cri.go:89] found id: ""
	I0617 12:03:58.786221  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.786232  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:58.786239  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:58.786302  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:58.822076  165698 cri.go:89] found id: ""
	I0617 12:03:58.822103  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.822125  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:58.822134  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:58.822199  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:58.858830  165698 cri.go:89] found id: ""
	I0617 12:03:58.858859  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.858867  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:58.858873  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:58.858927  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:58.898802  165698 cri.go:89] found id: ""
	I0617 12:03:58.898830  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.898838  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:58.898844  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:58.898891  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:58.933234  165698 cri.go:89] found id: ""
	I0617 12:03:58.933269  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.933281  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:58.933289  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:58.933355  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:58.973719  165698 cri.go:89] found id: ""
	I0617 12:03:58.973753  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.973766  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:58.973773  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:58.973847  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:59.010671  165698 cri.go:89] found id: ""
	I0617 12:03:59.010722  165698 logs.go:276] 0 containers: []
	W0617 12:03:59.010734  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:59.010741  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:59.010805  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:59.047318  165698 cri.go:89] found id: ""
	I0617 12:03:59.047347  165698 logs.go:276] 0 containers: []
	W0617 12:03:59.047359  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:59.047372  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:59.047389  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:59.097778  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:59.097815  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:59.111615  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:59.111646  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:59.193172  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:59.193195  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:59.193207  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:59.268147  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:59.268182  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:01.807585  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:01.821634  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:01.821694  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:01.857610  165698 cri.go:89] found id: ""
	I0617 12:04:01.857637  165698 logs.go:276] 0 containers: []
	W0617 12:04:01.857647  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:01.857654  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:01.857710  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:01.893229  165698 cri.go:89] found id: ""
	I0617 12:04:01.893253  165698 logs.go:276] 0 containers: []
	W0617 12:04:01.893261  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:01.893267  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:01.893324  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:01.926916  165698 cri.go:89] found id: ""
	I0617 12:04:01.926940  165698 logs.go:276] 0 containers: []
	W0617 12:04:01.926950  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:01.926958  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:01.927017  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:01.961913  165698 cri.go:89] found id: ""
	I0617 12:04:01.961946  165698 logs.go:276] 0 containers: []
	W0617 12:04:01.961957  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:01.961967  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:01.962045  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:01.997084  165698 cri.go:89] found id: ""
	I0617 12:04:01.997111  165698 logs.go:276] 0 containers: []
	W0617 12:04:01.997119  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:01.997125  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:01.997173  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:02.034640  165698 cri.go:89] found id: ""
	I0617 12:04:02.034666  165698 logs.go:276] 0 containers: []
	W0617 12:04:02.034674  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:02.034680  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:02.034744  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:02.085868  165698 cri.go:89] found id: ""
	I0617 12:04:02.085910  165698 logs.go:276] 0 containers: []
	W0617 12:04:02.085920  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:02.085928  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:02.085983  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:02.152460  165698 cri.go:89] found id: ""
	I0617 12:04:02.152487  165698 logs.go:276] 0 containers: []
	W0617 12:04:02.152499  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:02.152513  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:02.152528  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:02.205297  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:02.205344  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:02.222312  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:02.222348  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:02.299934  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:02.299959  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:02.299977  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:02.384008  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:02.384056  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:59.724730  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:02.227215  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:59.833621  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:01.833799  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:02.834076  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:04.836418  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:07.335024  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:04.926889  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:04.940643  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:04.940722  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:04.976246  165698 cri.go:89] found id: ""
	I0617 12:04:04.976275  165698 logs.go:276] 0 containers: []
	W0617 12:04:04.976283  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:04.976289  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:04.976338  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:05.015864  165698 cri.go:89] found id: ""
	I0617 12:04:05.015900  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.015913  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:05.015921  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:05.015985  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:05.054051  165698 cri.go:89] found id: ""
	I0617 12:04:05.054086  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.054099  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:05.054112  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:05.054177  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:05.090320  165698 cri.go:89] found id: ""
	I0617 12:04:05.090358  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.090371  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:05.090380  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:05.090438  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:05.126963  165698 cri.go:89] found id: ""
	I0617 12:04:05.126998  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.127008  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:05.127015  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:05.127087  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:05.162565  165698 cri.go:89] found id: ""
	I0617 12:04:05.162600  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.162611  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:05.162620  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:05.162674  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:05.195706  165698 cri.go:89] found id: ""
	I0617 12:04:05.195743  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.195752  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:05.195758  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:05.195826  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:05.236961  165698 cri.go:89] found id: ""
	I0617 12:04:05.236995  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.237006  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:05.237016  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:05.237034  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:05.252754  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:05.252783  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:05.327832  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:05.327870  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:05.327886  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:05.410220  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:05.410271  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:05.451291  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:05.451324  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:04.725172  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:07.223627  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:04.332177  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:06.831712  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:09.834563  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:12.334095  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:08.003058  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:08.016611  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:08.016670  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:08.052947  165698 cri.go:89] found id: ""
	I0617 12:04:08.052984  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.052996  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:08.053004  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:08.053057  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:08.086668  165698 cri.go:89] found id: ""
	I0617 12:04:08.086695  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.086704  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:08.086711  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:08.086773  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:08.127708  165698 cri.go:89] found id: ""
	I0617 12:04:08.127738  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.127746  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:08.127752  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:08.127814  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:08.162930  165698 cri.go:89] found id: ""
	I0617 12:04:08.162959  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.162966  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:08.162973  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:08.163026  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:08.196757  165698 cri.go:89] found id: ""
	I0617 12:04:08.196782  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.196791  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:08.196797  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:08.196851  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:08.229976  165698 cri.go:89] found id: ""
	I0617 12:04:08.230006  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.230016  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:08.230022  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:08.230083  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:08.265969  165698 cri.go:89] found id: ""
	I0617 12:04:08.266000  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.266007  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:08.266013  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:08.266071  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:08.299690  165698 cri.go:89] found id: ""
	I0617 12:04:08.299717  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.299728  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:08.299741  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:08.299761  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:08.353399  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:08.353429  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:08.366713  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:08.366739  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:08.442727  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:08.442768  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:08.442786  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:08.527832  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:08.527875  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:11.073616  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:11.087085  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:11.087172  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:11.121706  165698 cri.go:89] found id: ""
	I0617 12:04:11.121745  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.121756  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:11.121765  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:11.121839  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:11.157601  165698 cri.go:89] found id: ""
	I0617 12:04:11.157637  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.157648  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:11.157657  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:11.157719  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:11.191929  165698 cri.go:89] found id: ""
	I0617 12:04:11.191963  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.191975  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:11.191983  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:11.192045  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:11.228391  165698 cri.go:89] found id: ""
	I0617 12:04:11.228416  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.228429  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:11.228437  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:11.228497  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:11.261880  165698 cri.go:89] found id: ""
	I0617 12:04:11.261911  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.261924  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:11.261932  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:11.261998  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:11.294615  165698 cri.go:89] found id: ""
	I0617 12:04:11.294663  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.294676  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:11.294684  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:11.294745  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:11.332813  165698 cri.go:89] found id: ""
	I0617 12:04:11.332840  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.332847  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:11.332854  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:11.332911  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:11.369032  165698 cri.go:89] found id: ""
	I0617 12:04:11.369060  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.369068  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:11.369078  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:11.369090  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:11.422522  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:11.422555  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:11.436961  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:11.436990  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:11.508679  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:11.508700  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:11.508713  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:11.586574  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:11.586610  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:09.224727  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:11.225763  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:09.330868  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:11.332256  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:14.335171  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:16.836514  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:14.127034  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:14.143228  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:14.143306  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:14.178368  165698 cri.go:89] found id: ""
	I0617 12:04:14.178396  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.178405  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:14.178410  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:14.178459  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:14.209971  165698 cri.go:89] found id: ""
	I0617 12:04:14.210001  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.210010  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:14.210015  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:14.210065  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:14.244888  165698 cri.go:89] found id: ""
	I0617 12:04:14.244922  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.244933  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:14.244940  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:14.244999  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:14.277875  165698 cri.go:89] found id: ""
	I0617 12:04:14.277904  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.277914  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:14.277922  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:14.277983  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:14.312698  165698 cri.go:89] found id: ""
	I0617 12:04:14.312724  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.312733  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:14.312739  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:14.312789  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:14.350952  165698 cri.go:89] found id: ""
	I0617 12:04:14.350977  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.350987  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:14.350993  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:14.351056  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:14.389211  165698 cri.go:89] found id: ""
	I0617 12:04:14.389235  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.389243  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:14.389250  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:14.389297  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:14.426171  165698 cri.go:89] found id: ""
	I0617 12:04:14.426200  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.426211  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:14.426224  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:14.426240  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:14.500403  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:14.500430  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:14.500446  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:14.588041  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:14.588078  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:14.631948  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:14.631987  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:14.681859  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:14.681895  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:17.198754  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:17.212612  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:17.212679  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:17.251011  165698 cri.go:89] found id: ""
	I0617 12:04:17.251041  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.251056  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:17.251065  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:17.251128  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:17.282964  165698 cri.go:89] found id: ""
	I0617 12:04:17.282989  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.282998  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:17.283003  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:17.283060  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:17.315570  165698 cri.go:89] found id: ""
	I0617 12:04:17.315601  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.315622  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:17.315630  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:17.315691  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:17.351186  165698 cri.go:89] found id: ""
	I0617 12:04:17.351212  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.351221  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:17.351228  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:17.351287  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:17.385609  165698 cri.go:89] found id: ""
	I0617 12:04:17.385653  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.385665  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:17.385673  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:17.385741  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:17.423890  165698 cri.go:89] found id: ""
	I0617 12:04:17.423923  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.423935  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:17.423944  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:17.424000  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:17.459543  165698 cri.go:89] found id: ""
	I0617 12:04:17.459575  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.459584  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:17.459592  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:17.459660  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:17.495554  165698 cri.go:89] found id: ""
	I0617 12:04:17.495584  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.495594  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:17.495606  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:17.495632  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:17.547835  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:17.547881  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:17.562391  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:17.562422  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:17.635335  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:17.635368  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:17.635387  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:17.708946  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:17.708988  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:13.724618  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:16.224689  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:13.832533  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:15.833210  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:17.841693  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:19.336775  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:21.835598  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:20.249833  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:20.266234  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:20.266301  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:20.307380  165698 cri.go:89] found id: ""
	I0617 12:04:20.307415  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.307424  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:20.307431  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:20.307508  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:20.347193  165698 cri.go:89] found id: ""
	I0617 12:04:20.347225  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.347235  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:20.347243  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:20.347311  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:20.382673  165698 cri.go:89] found id: ""
	I0617 12:04:20.382711  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.382724  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:20.382732  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:20.382800  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:20.419542  165698 cri.go:89] found id: ""
	I0617 12:04:20.419573  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.419582  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:20.419588  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:20.419652  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:20.454586  165698 cri.go:89] found id: ""
	I0617 12:04:20.454618  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.454629  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:20.454636  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:20.454708  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:20.501094  165698 cri.go:89] found id: ""
	I0617 12:04:20.501123  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.501131  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:20.501137  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:20.501190  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:20.537472  165698 cri.go:89] found id: ""
	I0617 12:04:20.537512  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.537524  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:20.537532  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:20.537597  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:20.571477  165698 cri.go:89] found id: ""
	I0617 12:04:20.571509  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.571519  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:20.571532  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:20.571550  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:20.611503  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:20.611540  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:20.663868  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:20.663905  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:20.677679  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:20.677704  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:20.753645  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:20.753663  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:20.753689  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:18.725428  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:21.224314  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:20.333214  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:22.333294  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:24.333835  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:26.335344  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:23.335535  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:23.349700  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:23.349766  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:23.384327  165698 cri.go:89] found id: ""
	I0617 12:04:23.384351  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.384358  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:23.384364  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:23.384417  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:23.427145  165698 cri.go:89] found id: ""
	I0617 12:04:23.427179  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.427190  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:23.427197  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:23.427254  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:23.461484  165698 cri.go:89] found id: ""
	I0617 12:04:23.461511  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.461522  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:23.461532  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:23.461600  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:23.501292  165698 cri.go:89] found id: ""
	I0617 12:04:23.501324  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.501334  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:23.501342  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:23.501407  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:23.537605  165698 cri.go:89] found id: ""
	I0617 12:04:23.537639  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.537649  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:23.537654  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:23.537727  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:23.576580  165698 cri.go:89] found id: ""
	I0617 12:04:23.576608  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.576616  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:23.576623  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:23.576685  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:23.613124  165698 cri.go:89] found id: ""
	I0617 12:04:23.613153  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.613161  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:23.613167  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:23.613216  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:23.648662  165698 cri.go:89] found id: ""
	I0617 12:04:23.648688  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.648695  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:23.648705  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:23.648717  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:23.661737  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:23.661762  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:23.732512  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:23.732531  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:23.732547  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:23.810165  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:23.810207  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:23.855099  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:23.855136  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:26.406038  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:26.422243  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:26.422323  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:26.460959  165698 cri.go:89] found id: ""
	I0617 12:04:26.460984  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.460994  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:26.461002  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:26.461078  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:26.498324  165698 cri.go:89] found id: ""
	I0617 12:04:26.498350  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.498362  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:26.498370  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:26.498435  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:26.535299  165698 cri.go:89] found id: ""
	I0617 12:04:26.535335  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.535346  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:26.535354  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:26.535417  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:26.574623  165698 cri.go:89] found id: ""
	I0617 12:04:26.574657  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.574668  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:26.574677  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:26.574738  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:26.611576  165698 cri.go:89] found id: ""
	I0617 12:04:26.611607  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.611615  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:26.611621  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:26.611672  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:26.645664  165698 cri.go:89] found id: ""
	I0617 12:04:26.645692  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.645700  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:26.645706  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:26.645755  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:26.679442  165698 cri.go:89] found id: ""
	I0617 12:04:26.679477  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.679488  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:26.679495  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:26.679544  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:26.713512  165698 cri.go:89] found id: ""
	I0617 12:04:26.713543  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.713551  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:26.713563  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:26.713584  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:26.770823  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:26.770853  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:26.784829  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:26.784858  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:26.868457  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:26.868480  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:26.868498  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:26.948522  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:26.948561  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:23.725626  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:26.224874  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:24.830639  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:26.836648  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:28.835682  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:31.335891  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:29.490891  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:29.504202  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:29.504273  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:29.544091  165698 cri.go:89] found id: ""
	I0617 12:04:29.544125  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.544137  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:29.544145  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:29.544203  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:29.581645  165698 cri.go:89] found id: ""
	I0617 12:04:29.581670  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.581679  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:29.581685  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:29.581736  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:29.621410  165698 cri.go:89] found id: ""
	I0617 12:04:29.621437  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.621447  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:29.621455  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:29.621522  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:29.659619  165698 cri.go:89] found id: ""
	I0617 12:04:29.659645  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.659654  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:29.659659  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:29.659718  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:29.698822  165698 cri.go:89] found id: ""
	I0617 12:04:29.698851  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.698859  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:29.698865  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:29.698957  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:29.741648  165698 cri.go:89] found id: ""
	I0617 12:04:29.741673  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.741680  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:29.741686  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:29.741752  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:29.777908  165698 cri.go:89] found id: ""
	I0617 12:04:29.777933  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.777941  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:29.777947  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:29.778013  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:29.812290  165698 cri.go:89] found id: ""
	I0617 12:04:29.812318  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.812328  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:29.812340  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:29.812357  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:29.857527  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:29.857552  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:29.916734  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:29.916776  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:29.930988  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:29.931013  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:30.006055  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:30.006080  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:30.006098  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:32.586549  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:32.600139  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:32.600262  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:32.641527  165698 cri.go:89] found id: ""
	I0617 12:04:32.641554  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.641570  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:32.641579  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:32.641635  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:32.687945  165698 cri.go:89] found id: ""
	I0617 12:04:32.687972  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.687981  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:32.687996  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:32.688068  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:32.725586  165698 cri.go:89] found id: ""
	I0617 12:04:32.725618  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.725629  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:32.725639  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:32.725696  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:32.764042  165698 cri.go:89] found id: ""
	I0617 12:04:32.764090  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.764107  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:32.764115  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:32.764183  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:32.800132  165698 cri.go:89] found id: ""
	I0617 12:04:32.800167  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.800180  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:32.800189  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:32.800256  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:32.840313  165698 cri.go:89] found id: ""
	I0617 12:04:32.840348  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.840359  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:32.840367  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:32.840434  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:32.878041  165698 cri.go:89] found id: ""
	I0617 12:04:32.878067  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.878076  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:32.878082  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:32.878134  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:32.913904  165698 cri.go:89] found id: ""
	I0617 12:04:32.913939  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.913950  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:32.913961  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:32.913974  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:04:28.725534  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:31.224885  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:29.330706  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:31.331989  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:33.337062  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:35.834807  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	W0617 12:04:32.987900  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:32.987929  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:32.987947  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:33.060919  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:33.060961  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:33.102602  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:33.102629  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:33.154112  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:33.154161  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:35.669336  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:35.682819  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:35.682907  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:35.717542  165698 cri.go:89] found id: ""
	I0617 12:04:35.717571  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.717579  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:35.717586  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:35.717646  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:35.754454  165698 cri.go:89] found id: ""
	I0617 12:04:35.754483  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.754495  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:35.754503  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:35.754566  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:35.791198  165698 cri.go:89] found id: ""
	I0617 12:04:35.791227  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.791237  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:35.791246  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:35.791309  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:35.826858  165698 cri.go:89] found id: ""
	I0617 12:04:35.826892  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.826903  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:35.826911  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:35.826974  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:35.866817  165698 cri.go:89] found id: ""
	I0617 12:04:35.866845  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.866853  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:35.866861  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:35.866909  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:35.918340  165698 cri.go:89] found id: ""
	I0617 12:04:35.918377  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.918388  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:35.918397  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:35.918466  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:35.960734  165698 cri.go:89] found id: ""
	I0617 12:04:35.960764  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.960774  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:35.960779  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:35.960841  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:36.002392  165698 cri.go:89] found id: ""
	I0617 12:04:36.002426  165698 logs.go:276] 0 containers: []
	W0617 12:04:36.002437  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:36.002449  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:36.002465  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:36.055130  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:36.055163  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:36.069181  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:36.069209  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:36.146078  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:36.146105  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:36.146120  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:36.223763  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:36.223797  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:33.723759  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:35.725954  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:38.225200  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:33.833990  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:36.332152  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:38.332570  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:37.836765  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:40.334594  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:42.336958  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:38.767375  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:38.781301  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:38.781357  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:38.821364  165698 cri.go:89] found id: ""
	I0617 12:04:38.821390  165698 logs.go:276] 0 containers: []
	W0617 12:04:38.821400  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:38.821409  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:38.821472  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:38.860727  165698 cri.go:89] found id: ""
	I0617 12:04:38.860784  165698 logs.go:276] 0 containers: []
	W0617 12:04:38.860796  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:38.860803  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:38.860868  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:38.902932  165698 cri.go:89] found id: ""
	I0617 12:04:38.902968  165698 logs.go:276] 0 containers: []
	W0617 12:04:38.902992  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:38.902999  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:38.903088  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:38.940531  165698 cri.go:89] found id: ""
	I0617 12:04:38.940564  165698 logs.go:276] 0 containers: []
	W0617 12:04:38.940576  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:38.940584  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:38.940649  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:38.975751  165698 cri.go:89] found id: ""
	I0617 12:04:38.975792  165698 logs.go:276] 0 containers: []
	W0617 12:04:38.975827  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:38.975835  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:38.975908  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:39.011156  165698 cri.go:89] found id: ""
	I0617 12:04:39.011196  165698 logs.go:276] 0 containers: []
	W0617 12:04:39.011206  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:39.011213  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:39.011269  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:39.049266  165698 cri.go:89] found id: ""
	I0617 12:04:39.049301  165698 logs.go:276] 0 containers: []
	W0617 12:04:39.049312  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:39.049320  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:39.049373  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:39.089392  165698 cri.go:89] found id: ""
	I0617 12:04:39.089425  165698 logs.go:276] 0 containers: []
	W0617 12:04:39.089434  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:39.089444  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:39.089459  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:39.166585  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:39.166607  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:39.166619  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:39.241910  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:39.241950  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:39.287751  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:39.287782  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:39.342226  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:39.342259  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:41.857327  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:41.871379  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:41.871446  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:41.907435  165698 cri.go:89] found id: ""
	I0617 12:04:41.907472  165698 logs.go:276] 0 containers: []
	W0617 12:04:41.907483  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:41.907492  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:41.907542  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:41.941684  165698 cri.go:89] found id: ""
	I0617 12:04:41.941725  165698 logs.go:276] 0 containers: []
	W0617 12:04:41.941737  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:41.941745  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:41.941819  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:41.977359  165698 cri.go:89] found id: ""
	I0617 12:04:41.977395  165698 logs.go:276] 0 containers: []
	W0617 12:04:41.977407  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:41.977415  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:41.977478  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:42.015689  165698 cri.go:89] found id: ""
	I0617 12:04:42.015723  165698 logs.go:276] 0 containers: []
	W0617 12:04:42.015734  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:42.015742  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:42.015803  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:42.050600  165698 cri.go:89] found id: ""
	I0617 12:04:42.050626  165698 logs.go:276] 0 containers: []
	W0617 12:04:42.050637  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:42.050645  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:42.050707  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:42.088174  165698 cri.go:89] found id: ""
	I0617 12:04:42.088201  165698 logs.go:276] 0 containers: []
	W0617 12:04:42.088212  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:42.088221  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:42.088290  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:42.127335  165698 cri.go:89] found id: ""
	I0617 12:04:42.127364  165698 logs.go:276] 0 containers: []
	W0617 12:04:42.127375  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:42.127384  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:42.127443  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:42.163435  165698 cri.go:89] found id: ""
	I0617 12:04:42.163481  165698 logs.go:276] 0 containers: []
	W0617 12:04:42.163492  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:42.163505  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:42.163527  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:42.233233  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:42.233262  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:42.233280  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:42.311695  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:42.311741  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:42.378134  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:42.378163  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:42.439614  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:42.439647  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:40.726373  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:43.225144  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:40.336291  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:42.831220  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:44.835811  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:47.335772  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:44.953738  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:44.967822  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:44.967884  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:45.004583  165698 cri.go:89] found id: ""
	I0617 12:04:45.004687  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.004732  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:45.004741  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:45.004797  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:45.038912  165698 cri.go:89] found id: ""
	I0617 12:04:45.038939  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.038949  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:45.038957  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:45.039026  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:45.073594  165698 cri.go:89] found id: ""
	I0617 12:04:45.073620  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.073628  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:45.073634  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:45.073684  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:45.108225  165698 cri.go:89] found id: ""
	I0617 12:04:45.108253  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.108261  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:45.108267  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:45.108317  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:45.139522  165698 cri.go:89] found id: ""
	I0617 12:04:45.139545  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.139553  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:45.139559  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:45.139609  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:45.173705  165698 cri.go:89] found id: ""
	I0617 12:04:45.173735  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.173745  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:45.173752  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:45.173813  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:45.206448  165698 cri.go:89] found id: ""
	I0617 12:04:45.206477  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.206486  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:45.206493  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:45.206551  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:45.242925  165698 cri.go:89] found id: ""
	I0617 12:04:45.242952  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.242962  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:45.242981  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:45.242998  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:45.294669  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:45.294700  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:45.307642  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:45.307670  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:45.381764  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:45.381788  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:45.381805  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:45.469022  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:45.469056  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:45.724236  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:48.225656  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:45.332888  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:47.832326  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:49.337260  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:51.338718  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:48.014169  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:48.029895  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:48.029984  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:48.086421  165698 cri.go:89] found id: ""
	I0617 12:04:48.086456  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.086468  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:48.086477  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:48.086554  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:48.135673  165698 cri.go:89] found id: ""
	I0617 12:04:48.135705  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.135713  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:48.135733  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:48.135808  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:48.184330  165698 cri.go:89] found id: ""
	I0617 12:04:48.184353  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.184362  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:48.184368  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:48.184418  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:48.221064  165698 cri.go:89] found id: ""
	I0617 12:04:48.221095  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.221103  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:48.221112  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:48.221175  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:48.264464  165698 cri.go:89] found id: ""
	I0617 12:04:48.264495  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.264502  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:48.264508  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:48.264561  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:48.302144  165698 cri.go:89] found id: ""
	I0617 12:04:48.302180  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.302191  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:48.302199  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:48.302263  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:48.345431  165698 cri.go:89] found id: ""
	I0617 12:04:48.345458  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.345465  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:48.345472  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:48.345539  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:48.383390  165698 cri.go:89] found id: ""
	I0617 12:04:48.383423  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.383434  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:48.383447  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:48.383478  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:48.422328  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:48.422356  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:48.473698  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:48.473735  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:48.488399  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:48.488429  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:48.566851  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:48.566871  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:48.566884  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:51.149626  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:51.162855  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:51.162926  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:51.199056  165698 cri.go:89] found id: ""
	I0617 12:04:51.199091  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.199102  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:51.199109  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:51.199172  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:51.238773  165698 cri.go:89] found id: ""
	I0617 12:04:51.238810  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.238821  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:51.238827  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:51.238883  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:51.279049  165698 cri.go:89] found id: ""
	I0617 12:04:51.279079  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.279092  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:51.279100  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:51.279166  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:51.324923  165698 cri.go:89] found id: ""
	I0617 12:04:51.324957  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.324969  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:51.324976  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:51.325028  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:51.363019  165698 cri.go:89] found id: ""
	I0617 12:04:51.363055  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.363068  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:51.363077  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:51.363142  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:51.399620  165698 cri.go:89] found id: ""
	I0617 12:04:51.399652  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.399661  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:51.399675  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:51.399758  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:51.434789  165698 cri.go:89] found id: ""
	I0617 12:04:51.434824  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.434836  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:51.434844  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:51.434910  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:51.470113  165698 cri.go:89] found id: ""
	I0617 12:04:51.470140  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.470149  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:51.470160  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:51.470176  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:51.526138  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:51.526173  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:51.539451  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:51.539491  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:51.613418  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:51.613437  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:51.613450  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:51.691971  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:51.692010  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:50.724405  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:52.725426  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:50.332363  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:52.332932  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:53.834955  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:56.334584  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:54.234514  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:54.249636  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:54.249724  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:54.283252  165698 cri.go:89] found id: ""
	I0617 12:04:54.283287  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.283300  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:54.283307  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:54.283367  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:54.319153  165698 cri.go:89] found id: ""
	I0617 12:04:54.319207  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.319218  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:54.319226  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:54.319290  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:54.361450  165698 cri.go:89] found id: ""
	I0617 12:04:54.361480  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.361491  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:54.361498  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:54.361562  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:54.397806  165698 cri.go:89] found id: ""
	I0617 12:04:54.397834  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.397843  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:54.397849  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:54.397899  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:54.447119  165698 cri.go:89] found id: ""
	I0617 12:04:54.447147  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.447155  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:54.447161  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:54.447211  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:54.489717  165698 cri.go:89] found id: ""
	I0617 12:04:54.489751  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.489760  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:54.489766  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:54.489830  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:54.532840  165698 cri.go:89] found id: ""
	I0617 12:04:54.532943  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.532975  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:54.532989  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:54.533100  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:54.568227  165698 cri.go:89] found id: ""
	I0617 12:04:54.568369  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.568391  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:54.568403  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:54.568420  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:54.583140  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:54.583174  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:54.661258  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:54.661281  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:54.661296  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:54.750472  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:54.750511  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:54.797438  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:54.797467  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:57.349800  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:57.364820  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:57.364879  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:57.405065  165698 cri.go:89] found id: ""
	I0617 12:04:57.405093  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.405101  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:57.405106  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:57.405153  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:57.445707  165698 cri.go:89] found id: ""
	I0617 12:04:57.445741  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.445752  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:57.445760  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:57.445829  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:57.486911  165698 cri.go:89] found id: ""
	I0617 12:04:57.486940  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.486948  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:57.486955  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:57.487014  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:57.521218  165698 cri.go:89] found id: ""
	I0617 12:04:57.521254  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.521266  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:57.521274  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:57.521342  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:57.555762  165698 cri.go:89] found id: ""
	I0617 12:04:57.555794  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.555803  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:57.555808  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:57.555863  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:57.591914  165698 cri.go:89] found id: ""
	I0617 12:04:57.591945  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.591956  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:57.591971  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:57.592037  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:57.626435  165698 cri.go:89] found id: ""
	I0617 12:04:57.626463  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.626471  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:57.626477  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:57.626527  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:57.665088  165698 cri.go:89] found id: ""
	I0617 12:04:57.665118  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.665126  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:57.665137  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:57.665152  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:57.716284  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:57.716316  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:57.730179  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:57.730204  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:57.808904  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:57.808933  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:57.808954  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:57.894499  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:57.894530  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:55.224507  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:57.224583  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:54.831112  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:56.832477  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:58.334640  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:00.335137  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:00.435957  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:00.450812  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:00.450890  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:00.491404  165698 cri.go:89] found id: ""
	I0617 12:05:00.491432  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.491440  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:00.491446  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:00.491523  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:00.526711  165698 cri.go:89] found id: ""
	I0617 12:05:00.526739  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.526747  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:00.526753  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:00.526817  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:00.562202  165698 cri.go:89] found id: ""
	I0617 12:05:00.562236  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.562246  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:00.562255  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:00.562323  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:00.602754  165698 cri.go:89] found id: ""
	I0617 12:05:00.602790  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.602802  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:00.602811  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:00.602877  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:00.645666  165698 cri.go:89] found id: ""
	I0617 12:05:00.645703  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.645715  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:00.645723  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:00.645788  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:00.684649  165698 cri.go:89] found id: ""
	I0617 12:05:00.684685  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.684694  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:00.684701  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:00.684784  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:00.727139  165698 cri.go:89] found id: ""
	I0617 12:05:00.727160  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.727167  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:00.727173  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:00.727238  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:00.764401  165698 cri.go:89] found id: ""
	I0617 12:05:00.764433  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.764444  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:00.764455  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:00.764474  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:00.777301  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:00.777322  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:00.849752  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:00.849778  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:00.849795  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:00.930220  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:00.930266  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:00.970076  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:00.970116  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:59.226429  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:01.725079  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:59.337081  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:01.834932  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:02.834132  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:05.334066  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:07.335366  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:03.526070  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:03.541150  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:03.541229  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:03.584416  165698 cri.go:89] found id: ""
	I0617 12:05:03.584451  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.584463  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:03.584472  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:03.584535  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:03.623509  165698 cri.go:89] found id: ""
	I0617 12:05:03.623543  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.623552  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:03.623558  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:03.623611  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:03.661729  165698 cri.go:89] found id: ""
	I0617 12:05:03.661765  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.661778  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:03.661787  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:03.661852  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:03.702952  165698 cri.go:89] found id: ""
	I0617 12:05:03.702985  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.703008  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:03.703033  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:03.703100  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:03.746534  165698 cri.go:89] found id: ""
	I0617 12:05:03.746570  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.746578  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:03.746584  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:03.746648  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:03.784472  165698 cri.go:89] found id: ""
	I0617 12:05:03.784506  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.784515  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:03.784522  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:03.784580  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:03.821033  165698 cri.go:89] found id: ""
	I0617 12:05:03.821066  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.821077  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:03.821085  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:03.821146  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:03.859438  165698 cri.go:89] found id: ""
	I0617 12:05:03.859474  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.859487  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:03.859497  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:03.859513  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:03.940723  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:03.940770  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:03.986267  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:03.986303  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:04.037999  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:04.038039  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:04.051382  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:04.051415  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:04.121593  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:06.622475  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:06.636761  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:06.636842  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:06.673954  165698 cri.go:89] found id: ""
	I0617 12:05:06.673995  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.674007  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:06.674015  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:06.674084  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:06.708006  165698 cri.go:89] found id: ""
	I0617 12:05:06.708037  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.708047  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:06.708055  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:06.708124  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:06.743819  165698 cri.go:89] found id: ""
	I0617 12:05:06.743852  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.743864  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:06.743872  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:06.743934  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:06.781429  165698 cri.go:89] found id: ""
	I0617 12:05:06.781457  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.781465  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:06.781473  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:06.781540  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:06.818404  165698 cri.go:89] found id: ""
	I0617 12:05:06.818435  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.818447  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:06.818456  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:06.818516  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:06.857880  165698 cri.go:89] found id: ""
	I0617 12:05:06.857913  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.857924  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:06.857933  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:06.857993  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:06.893010  165698 cri.go:89] found id: ""
	I0617 12:05:06.893050  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.893059  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:06.893065  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:06.893118  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:06.926302  165698 cri.go:89] found id: ""
	I0617 12:05:06.926336  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.926347  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:06.926360  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:06.926378  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:06.997173  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:06.997197  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:06.997215  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:07.082843  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:07.082885  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:07.122542  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:07.122572  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:07.177033  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:07.177070  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:03.725338  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:06.225466  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:04.331639  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:06.331988  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:08.332139  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:09.835119  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:12.333346  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:09.693217  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:09.707043  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:09.707110  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:09.742892  165698 cri.go:89] found id: ""
	I0617 12:05:09.742918  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.742927  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:09.742933  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:09.742982  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:09.776938  165698 cri.go:89] found id: ""
	I0617 12:05:09.776969  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.776976  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:09.776982  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:09.777030  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:09.813613  165698 cri.go:89] found id: ""
	I0617 12:05:09.813643  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.813651  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:09.813658  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:09.813705  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:09.855483  165698 cri.go:89] found id: ""
	I0617 12:05:09.855516  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.855525  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:09.855532  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:09.855596  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:09.890808  165698 cri.go:89] found id: ""
	I0617 12:05:09.890844  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.890854  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:09.890862  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:09.890930  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:09.927656  165698 cri.go:89] found id: ""
	I0617 12:05:09.927684  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.927693  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:09.927703  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:09.927758  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:09.968130  165698 cri.go:89] found id: ""
	I0617 12:05:09.968163  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.968174  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:09.968183  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:09.968239  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:10.010197  165698 cri.go:89] found id: ""
	I0617 12:05:10.010220  165698 logs.go:276] 0 containers: []
	W0617 12:05:10.010228  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:10.010239  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:10.010252  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:10.063999  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:10.064040  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:10.078837  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:10.078873  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:10.155932  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:10.155954  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:10.155967  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:10.232859  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:10.232901  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:12.772943  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:12.787936  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:12.788024  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:12.828457  165698 cri.go:89] found id: ""
	I0617 12:05:12.828483  165698 logs.go:276] 0 containers: []
	W0617 12:05:12.828491  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:12.828498  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:12.828562  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:12.862265  165698 cri.go:89] found id: ""
	I0617 12:05:12.862296  165698 logs.go:276] 0 containers: []
	W0617 12:05:12.862306  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:12.862313  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:12.862372  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:12.899673  165698 cri.go:89] found id: ""
	I0617 12:05:12.899698  165698 logs.go:276] 0 containers: []
	W0617 12:05:12.899706  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:12.899712  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:12.899759  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:12.943132  165698 cri.go:89] found id: ""
	I0617 12:05:12.943161  165698 logs.go:276] 0 containers: []
	W0617 12:05:12.943169  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:12.943175  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:12.943227  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:08.724369  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:10.725166  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:13.224799  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:10.333769  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:12.832493  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:14.336437  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:16.835155  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:12.985651  165698 cri.go:89] found id: ""
	I0617 12:05:12.985677  165698 logs.go:276] 0 containers: []
	W0617 12:05:12.985685  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:12.985691  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:12.985747  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:13.021484  165698 cri.go:89] found id: ""
	I0617 12:05:13.021508  165698 logs.go:276] 0 containers: []
	W0617 12:05:13.021516  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:13.021522  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:13.021569  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:13.060658  165698 cri.go:89] found id: ""
	I0617 12:05:13.060689  165698 logs.go:276] 0 containers: []
	W0617 12:05:13.060705  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:13.060713  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:13.060782  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:13.106008  165698 cri.go:89] found id: ""
	I0617 12:05:13.106041  165698 logs.go:276] 0 containers: []
	W0617 12:05:13.106052  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:13.106066  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:13.106083  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:13.160199  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:13.160231  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:13.173767  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:13.173804  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:13.245358  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:13.245383  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:13.245399  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:13.323046  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:13.323085  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:15.872024  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:15.885550  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:15.885624  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:15.920303  165698 cri.go:89] found id: ""
	I0617 12:05:15.920332  165698 logs.go:276] 0 containers: []
	W0617 12:05:15.920344  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:15.920358  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:15.920423  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:15.955132  165698 cri.go:89] found id: ""
	I0617 12:05:15.955158  165698 logs.go:276] 0 containers: []
	W0617 12:05:15.955166  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:15.955172  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:15.955220  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:15.992995  165698 cri.go:89] found id: ""
	I0617 12:05:15.993034  165698 logs.go:276] 0 containers: []
	W0617 12:05:15.993053  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:15.993060  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:15.993127  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:16.032603  165698 cri.go:89] found id: ""
	I0617 12:05:16.032638  165698 logs.go:276] 0 containers: []
	W0617 12:05:16.032650  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:16.032658  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:16.032716  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:16.071770  165698 cri.go:89] found id: ""
	I0617 12:05:16.071804  165698 logs.go:276] 0 containers: []
	W0617 12:05:16.071816  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:16.071824  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:16.071899  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:16.106172  165698 cri.go:89] found id: ""
	I0617 12:05:16.106206  165698 logs.go:276] 0 containers: []
	W0617 12:05:16.106218  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:16.106226  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:16.106292  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:16.139406  165698 cri.go:89] found id: ""
	I0617 12:05:16.139436  165698 logs.go:276] 0 containers: []
	W0617 12:05:16.139443  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:16.139449  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:16.139517  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:16.174513  165698 cri.go:89] found id: ""
	I0617 12:05:16.174554  165698 logs.go:276] 0 containers: []
	W0617 12:05:16.174565  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:16.174580  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:16.174597  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:16.240912  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:16.240940  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:16.240958  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:16.323853  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:16.323891  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:16.372632  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:16.372659  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:16.428367  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:16.428406  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:15.224918  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:17.725226  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:15.332512  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:17.833710  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:19.334324  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:21.334654  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:18.943551  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:18.957394  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:18.957490  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:18.991967  165698 cri.go:89] found id: ""
	I0617 12:05:18.992006  165698 logs.go:276] 0 containers: []
	W0617 12:05:18.992017  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:18.992027  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:18.992092  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:19.025732  165698 cri.go:89] found id: ""
	I0617 12:05:19.025763  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.025775  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:19.025783  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:19.025856  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:19.061786  165698 cri.go:89] found id: ""
	I0617 12:05:19.061820  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.061830  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:19.061838  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:19.061906  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:19.098819  165698 cri.go:89] found id: ""
	I0617 12:05:19.098856  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.098868  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:19.098876  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:19.098947  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:19.139840  165698 cri.go:89] found id: ""
	I0617 12:05:19.139877  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.139886  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:19.139894  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:19.139965  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:19.176546  165698 cri.go:89] found id: ""
	I0617 12:05:19.176578  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.176590  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:19.176598  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:19.176671  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:19.209948  165698 cri.go:89] found id: ""
	I0617 12:05:19.209985  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.209997  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:19.210005  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:19.210087  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:19.246751  165698 cri.go:89] found id: ""
	I0617 12:05:19.246788  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.246799  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:19.246812  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:19.246830  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:19.322272  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:19.322316  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:19.370147  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:19.370187  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:19.422699  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:19.422749  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:19.437255  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:19.437284  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:19.510077  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:22.010840  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:22.024791  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:22.024879  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:22.060618  165698 cri.go:89] found id: ""
	I0617 12:05:22.060658  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.060667  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:22.060674  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:22.060742  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:22.100228  165698 cri.go:89] found id: ""
	I0617 12:05:22.100259  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.100268  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:22.100274  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:22.100343  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:22.135629  165698 cri.go:89] found id: ""
	I0617 12:05:22.135657  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.135665  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:22.135671  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:22.135730  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:22.186027  165698 cri.go:89] found id: ""
	I0617 12:05:22.186064  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.186076  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:22.186085  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:22.186148  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:22.220991  165698 cri.go:89] found id: ""
	I0617 12:05:22.221019  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.221029  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:22.221037  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:22.221104  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:22.266306  165698 cri.go:89] found id: ""
	I0617 12:05:22.266337  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.266348  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:22.266357  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:22.266414  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:22.303070  165698 cri.go:89] found id: ""
	I0617 12:05:22.303104  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.303116  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:22.303124  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:22.303190  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:22.339792  165698 cri.go:89] found id: ""
	I0617 12:05:22.339819  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.339829  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:22.339840  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:22.339856  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:22.422360  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:22.422397  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:22.465744  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:22.465777  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:22.516199  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:22.516232  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:22.529961  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:22.529983  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:22.601519  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:20.225369  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:22.226699  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:19.834562  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:21.837426  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:23.336540  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:25.835706  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:25.102655  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:25.116893  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:25.116959  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:25.156370  165698 cri.go:89] found id: ""
	I0617 12:05:25.156396  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.156404  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:25.156410  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:25.156468  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:25.193123  165698 cri.go:89] found id: ""
	I0617 12:05:25.193199  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.193221  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:25.193234  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:25.193301  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:25.232182  165698 cri.go:89] found id: ""
	I0617 12:05:25.232209  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.232219  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:25.232227  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:25.232285  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:25.266599  165698 cri.go:89] found id: ""
	I0617 12:05:25.266630  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.266639  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:25.266645  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:25.266701  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:25.308732  165698 cri.go:89] found id: ""
	I0617 12:05:25.308762  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.308770  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:25.308776  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:25.308836  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:25.348817  165698 cri.go:89] found id: ""
	I0617 12:05:25.348858  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.348871  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:25.348879  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:25.348946  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:25.389343  165698 cri.go:89] found id: ""
	I0617 12:05:25.389375  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.389387  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:25.389393  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:25.389452  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:25.427014  165698 cri.go:89] found id: ""
	I0617 12:05:25.427043  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.427055  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:25.427067  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:25.427083  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:25.441361  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:25.441390  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:25.518967  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:25.518993  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:25.519006  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:25.601411  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:25.601450  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:25.651636  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:25.651674  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:24.725515  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:27.223821  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:24.333548  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:26.832428  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:27.836661  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:30.334313  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:32.336489  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:28.202148  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:28.215710  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:28.215792  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:28.254961  165698 cri.go:89] found id: ""
	I0617 12:05:28.254986  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.255000  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:28.255007  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:28.255061  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:28.292574  165698 cri.go:89] found id: ""
	I0617 12:05:28.292606  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.292614  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:28.292620  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:28.292683  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:28.329036  165698 cri.go:89] found id: ""
	I0617 12:05:28.329067  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.329077  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:28.329085  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:28.329152  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:28.366171  165698 cri.go:89] found id: ""
	I0617 12:05:28.366197  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.366206  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:28.366212  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:28.366273  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:28.401380  165698 cri.go:89] found id: ""
	I0617 12:05:28.401407  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.401417  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:28.401424  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:28.401486  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:28.438767  165698 cri.go:89] found id: ""
	I0617 12:05:28.438798  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.438810  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:28.438817  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:28.438876  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:28.472706  165698 cri.go:89] found id: ""
	I0617 12:05:28.472761  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.472772  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:28.472779  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:28.472829  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:28.509525  165698 cri.go:89] found id: ""
	I0617 12:05:28.509548  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.509556  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:28.509565  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:28.509577  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:28.606008  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:28.606059  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:28.665846  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:28.665874  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:28.721599  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:28.721627  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:28.735040  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:28.735062  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:28.811954  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:31.312554  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:31.326825  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:31.326905  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:31.364862  165698 cri.go:89] found id: ""
	I0617 12:05:31.364891  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.364902  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:31.364910  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:31.364976  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:31.396979  165698 cri.go:89] found id: ""
	I0617 12:05:31.397013  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.397027  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:31.397035  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:31.397098  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:31.430617  165698 cri.go:89] found id: ""
	I0617 12:05:31.430647  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.430657  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:31.430665  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:31.430728  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:31.462308  165698 cri.go:89] found id: ""
	I0617 12:05:31.462338  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.462345  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:31.462350  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:31.462399  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:31.495406  165698 cri.go:89] found id: ""
	I0617 12:05:31.495435  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.495444  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:31.495452  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:31.495553  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:31.538702  165698 cri.go:89] found id: ""
	I0617 12:05:31.538729  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.538739  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:31.538750  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:31.538813  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:31.572637  165698 cri.go:89] found id: ""
	I0617 12:05:31.572666  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.572677  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:31.572685  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:31.572745  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:31.609307  165698 cri.go:89] found id: ""
	I0617 12:05:31.609341  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.609352  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:31.609364  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:31.609380  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:31.622445  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:31.622471  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:31.699170  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:31.699191  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:31.699209  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:31.775115  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:31.775156  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:31.815836  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:31.815866  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:29.225028  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:31.727009  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:29.333400  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:31.834599  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:34.836093  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:37.335140  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:34.372097  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:34.393542  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:34.393607  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:34.437265  165698 cri.go:89] found id: ""
	I0617 12:05:34.437294  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.437305  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:34.437314  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:34.437382  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:34.474566  165698 cri.go:89] found id: ""
	I0617 12:05:34.474596  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.474609  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:34.474617  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:34.474680  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:34.510943  165698 cri.go:89] found id: ""
	I0617 12:05:34.510975  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.510986  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:34.511000  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:34.511072  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:34.548124  165698 cri.go:89] found id: ""
	I0617 12:05:34.548160  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.548172  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:34.548179  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:34.548241  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:34.582428  165698 cri.go:89] found id: ""
	I0617 12:05:34.582453  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.582460  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:34.582467  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:34.582514  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:34.616895  165698 cri.go:89] found id: ""
	I0617 12:05:34.616937  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.616950  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:34.616957  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:34.617019  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:34.656116  165698 cri.go:89] found id: ""
	I0617 12:05:34.656144  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.656155  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:34.656162  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:34.656226  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:34.695649  165698 cri.go:89] found id: ""
	I0617 12:05:34.695680  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.695692  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:34.695705  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:34.695722  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:34.747910  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:34.747956  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:34.762177  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:34.762206  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:34.840395  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:34.840423  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:34.840440  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:34.922962  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:34.923002  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:37.464659  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:37.480351  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:37.480416  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:37.521249  165698 cri.go:89] found id: ""
	I0617 12:05:37.521279  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.521286  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:37.521293  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:37.521340  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:37.561053  165698 cri.go:89] found id: ""
	I0617 12:05:37.561079  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.561087  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:37.561094  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:37.561151  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:37.599019  165698 cri.go:89] found id: ""
	I0617 12:05:37.599057  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.599066  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:37.599074  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:37.599134  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:37.638276  165698 cri.go:89] found id: ""
	I0617 12:05:37.638304  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.638315  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:37.638323  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:37.638389  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:37.677819  165698 cri.go:89] found id: ""
	I0617 12:05:37.677845  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.677853  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:37.677859  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:37.677910  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:37.715850  165698 cri.go:89] found id: ""
	I0617 12:05:37.715877  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.715888  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:37.715897  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:37.715962  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:37.755533  165698 cri.go:89] found id: ""
	I0617 12:05:37.755563  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.755570  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:37.755576  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:37.755636  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:37.791826  165698 cri.go:89] found id: ""
	I0617 12:05:37.791850  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.791859  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:37.791872  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:37.791888  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:37.844824  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:37.844853  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:37.860933  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:37.860963  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:37.926497  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:37.926519  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:37.926535  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:34.224078  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:36.224464  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:38.224753  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:34.333888  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:36.832374  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:39.336299  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:41.834494  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:38.003814  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:38.003853  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:40.546386  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:40.560818  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:40.560896  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:40.596737  165698 cri.go:89] found id: ""
	I0617 12:05:40.596777  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.596784  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:40.596791  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:40.596844  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:40.631518  165698 cri.go:89] found id: ""
	I0617 12:05:40.631556  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.631570  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:40.631611  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:40.631683  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:40.674962  165698 cri.go:89] found id: ""
	I0617 12:05:40.674997  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.675006  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:40.675012  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:40.675064  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:40.716181  165698 cri.go:89] found id: ""
	I0617 12:05:40.716210  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.716218  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:40.716226  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:40.716286  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:40.756312  165698 cri.go:89] found id: ""
	I0617 12:05:40.756339  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.756348  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:40.756353  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:40.756406  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:40.791678  165698 cri.go:89] found id: ""
	I0617 12:05:40.791733  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.791750  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:40.791759  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:40.791830  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:40.830717  165698 cri.go:89] found id: ""
	I0617 12:05:40.830754  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.830766  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:40.830774  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:40.830854  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:40.868139  165698 cri.go:89] found id: ""
	I0617 12:05:40.868169  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.868178  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:40.868198  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:40.868224  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:40.920319  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:40.920353  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:40.934948  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:40.934974  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:41.005349  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:41.005371  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:41.005388  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:41.086783  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:41.086842  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:40.724767  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:43.223836  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:38.834031  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:41.331190  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:43.332593  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:44.334114  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:46.334595  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:43.625515  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:43.638942  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:43.639019  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:43.673703  165698 cri.go:89] found id: ""
	I0617 12:05:43.673735  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.673747  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:43.673756  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:43.673822  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:43.709417  165698 cri.go:89] found id: ""
	I0617 12:05:43.709449  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.709460  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:43.709468  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:43.709529  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:43.742335  165698 cri.go:89] found id: ""
	I0617 12:05:43.742368  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.742379  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:43.742389  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:43.742449  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:43.779112  165698 cri.go:89] found id: ""
	I0617 12:05:43.779141  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.779150  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:43.779155  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:43.779219  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:43.813362  165698 cri.go:89] found id: ""
	I0617 12:05:43.813397  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.813406  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:43.813414  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:43.813464  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:43.850456  165698 cri.go:89] found id: ""
	I0617 12:05:43.850484  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.850493  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:43.850499  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:43.850547  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:43.884527  165698 cri.go:89] found id: ""
	I0617 12:05:43.884555  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.884564  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:43.884571  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:43.884632  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:43.921440  165698 cri.go:89] found id: ""
	I0617 12:05:43.921476  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.921488  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:43.921501  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:43.921517  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:43.973687  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:43.973727  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:43.988114  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:43.988143  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:44.055084  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:44.055119  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:44.055138  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:44.134628  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:44.134665  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:46.677852  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:46.690688  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:46.690747  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:46.724055  165698 cri.go:89] found id: ""
	I0617 12:05:46.724090  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.724101  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:46.724110  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:46.724171  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:46.759119  165698 cri.go:89] found id: ""
	I0617 12:05:46.759150  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.759161  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:46.759169  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:46.759227  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:46.796392  165698 cri.go:89] found id: ""
	I0617 12:05:46.796424  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.796435  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:46.796442  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:46.796504  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:46.831727  165698 cri.go:89] found id: ""
	I0617 12:05:46.831761  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.831770  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:46.831777  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:46.831845  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:46.866662  165698 cri.go:89] found id: ""
	I0617 12:05:46.866693  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.866702  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:46.866708  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:46.866757  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:46.905045  165698 cri.go:89] found id: ""
	I0617 12:05:46.905070  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.905078  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:46.905084  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:46.905130  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:46.940879  165698 cri.go:89] found id: ""
	I0617 12:05:46.940907  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.940915  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:46.940926  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:46.940974  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:46.977247  165698 cri.go:89] found id: ""
	I0617 12:05:46.977290  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.977301  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:46.977314  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:46.977331  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:47.046094  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:47.046116  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:47.046133  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:47.122994  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:47.123038  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:47.166273  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:47.166313  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:47.221392  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:47.221429  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:45.228807  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:47.723584  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:45.834805  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:48.333121  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:48.335758  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:50.833989  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:49.739113  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:49.752880  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:49.753004  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:49.791177  165698 cri.go:89] found id: ""
	I0617 12:05:49.791218  165698 logs.go:276] 0 containers: []
	W0617 12:05:49.791242  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:49.791251  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:49.791322  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:49.831602  165698 cri.go:89] found id: ""
	I0617 12:05:49.831633  165698 logs.go:276] 0 containers: []
	W0617 12:05:49.831644  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:49.831652  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:49.831719  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:49.870962  165698 cri.go:89] found id: ""
	I0617 12:05:49.870998  165698 logs.go:276] 0 containers: []
	W0617 12:05:49.871011  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:49.871019  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:49.871092  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:49.917197  165698 cri.go:89] found id: ""
	I0617 12:05:49.917232  165698 logs.go:276] 0 containers: []
	W0617 12:05:49.917243  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:49.917252  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:49.917320  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:49.952997  165698 cri.go:89] found id: ""
	I0617 12:05:49.953034  165698 logs.go:276] 0 containers: []
	W0617 12:05:49.953047  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:49.953056  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:49.953114  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:50.001925  165698 cri.go:89] found id: ""
	I0617 12:05:50.001965  165698 logs.go:276] 0 containers: []
	W0617 12:05:50.001977  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:50.001986  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:50.002059  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:50.043374  165698 cri.go:89] found id: ""
	I0617 12:05:50.043403  165698 logs.go:276] 0 containers: []
	W0617 12:05:50.043412  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:50.043419  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:50.043496  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:50.082974  165698 cri.go:89] found id: ""
	I0617 12:05:50.083009  165698 logs.go:276] 0 containers: []
	W0617 12:05:50.083020  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:50.083029  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:50.083043  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:50.134116  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:50.134159  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:50.148478  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:50.148511  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:50.227254  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:50.227276  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:50.227288  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:50.305920  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:50.305960  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:52.848811  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:52.862612  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:52.862669  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:52.896379  165698 cri.go:89] found id: ""
	I0617 12:05:52.896410  165698 logs.go:276] 0 containers: []
	W0617 12:05:52.896421  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:52.896429  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:52.896488  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:52.933387  165698 cri.go:89] found id: ""
	I0617 12:05:52.933422  165698 logs.go:276] 0 containers: []
	W0617 12:05:52.933432  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:52.933439  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:52.933501  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:52.971055  165698 cri.go:89] found id: ""
	I0617 12:05:52.971091  165698 logs.go:276] 0 containers: []
	W0617 12:05:52.971102  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:52.971110  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:52.971168  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:49.724816  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:52.224660  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:50.334092  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:52.831686  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:52.835473  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:55.334017  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:57.334957  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:53.003815  165698 cri.go:89] found id: ""
	I0617 12:05:53.003846  165698 logs.go:276] 0 containers: []
	W0617 12:05:53.003857  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:53.003864  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:53.003927  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:53.039133  165698 cri.go:89] found id: ""
	I0617 12:05:53.039161  165698 logs.go:276] 0 containers: []
	W0617 12:05:53.039169  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:53.039175  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:53.039229  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:53.077703  165698 cri.go:89] found id: ""
	I0617 12:05:53.077756  165698 logs.go:276] 0 containers: []
	W0617 12:05:53.077773  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:53.077780  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:53.077831  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:53.119187  165698 cri.go:89] found id: ""
	I0617 12:05:53.119216  165698 logs.go:276] 0 containers: []
	W0617 12:05:53.119223  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:53.119230  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:53.119287  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:53.154423  165698 cri.go:89] found id: ""
	I0617 12:05:53.154457  165698 logs.go:276] 0 containers: []
	W0617 12:05:53.154467  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:53.154480  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:53.154496  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:53.202745  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:53.202778  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:53.216510  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:53.216537  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:53.295687  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:53.295712  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:53.295732  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:53.375064  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:53.375095  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:55.915113  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:55.929155  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:55.929239  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:55.964589  165698 cri.go:89] found id: ""
	I0617 12:05:55.964625  165698 logs.go:276] 0 containers: []
	W0617 12:05:55.964634  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:55.964640  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:55.964702  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:56.003659  165698 cri.go:89] found id: ""
	I0617 12:05:56.003691  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.003701  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:56.003709  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:56.003778  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:56.039674  165698 cri.go:89] found id: ""
	I0617 12:05:56.039707  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.039717  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:56.039724  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:56.039786  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:56.077695  165698 cri.go:89] found id: ""
	I0617 12:05:56.077736  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.077748  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:56.077756  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:56.077826  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:56.116397  165698 cri.go:89] found id: ""
	I0617 12:05:56.116430  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.116442  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:56.116451  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:56.116512  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:56.152395  165698 cri.go:89] found id: ""
	I0617 12:05:56.152433  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.152445  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:56.152454  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:56.152513  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:56.189740  165698 cri.go:89] found id: ""
	I0617 12:05:56.189776  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.189788  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:56.189796  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:56.189866  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:56.228017  165698 cri.go:89] found id: ""
	I0617 12:05:56.228047  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.228055  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:56.228063  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:56.228076  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:56.279032  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:56.279079  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:56.294369  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:56.294394  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:56.369507  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:56.369535  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:56.369551  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:56.454797  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:56.454833  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:54.725303  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:56.726247  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:56.726280  165060 pod_ready.go:81] duration metric: took 4m0.008373114s for pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace to be "Ready" ...
	E0617 12:05:56.726291  165060 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0617 12:05:56.726298  165060 pod_ready.go:38] duration metric: took 4m3.608691328s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
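
[Note] The 4m0s expiry recorded just above is the usual wait-with-deadline pattern: poll the pod's Ready condition until it flips or the context times out. Below is a minimal sketch of that pattern, not the test harness's actual code; the use of kubectl (instead of minikube's internal client wiring), the 2s poll interval, and the pod/namespace names are illustrative assumptions.

// pollready.go: sketch of waiting for a pod's Ready condition with a
// hard deadline, mirroring the 4m0s wait and the "context deadline
// exceeded" outcome seen in the log above. Names and kubectl usage are
// illustrative assumptions only.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func podReady(ctx context.Context, ns, pod string) (bool, error) {
	out, err := exec.CommandContext(ctx, "kubectl", "-n", ns, "get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	for {
		ok, err := podReady(ctx, "kube-system", "metrics-server-569cc877fc-dmhfs")
		if err == nil && ok {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			// Same terminal state as the log: the deadline wins the race.
			fmt.Println("WaitExtra: waitPodCondition:", ctx.Err())
			return
		case <-time.After(2 * time.Second):
		}
	}
}
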
	I0617 12:05:56.726315  165060 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:05:56.726352  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:56.726411  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:56.784765  165060 cri.go:89] found id: "5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:05:56.784792  165060 cri.go:89] found id: ""
	I0617 12:05:56.784803  165060 logs.go:276] 1 containers: [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3]
	I0617 12:05:56.784865  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:56.791125  165060 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:56.791189  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:56.830691  165060 cri.go:89] found id: "fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:05:56.830715  165060 cri.go:89] found id: ""
	I0617 12:05:56.830725  165060 logs.go:276] 1 containers: [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9]
	I0617 12:05:56.830785  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:56.836214  165060 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:56.836282  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:56.875812  165060 cri.go:89] found id: "c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:05:56.875830  165060 cri.go:89] found id: ""
	I0617 12:05:56.875837  165060 logs.go:276] 1 containers: [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7]
	I0617 12:05:56.875891  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:56.880190  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:56.880247  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:56.925155  165060 cri.go:89] found id: "157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:05:56.925178  165060 cri.go:89] found id: ""
	I0617 12:05:56.925186  165060 logs.go:276] 1 containers: [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d]
	I0617 12:05:56.925231  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:56.930317  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:56.930384  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:56.972479  165060 cri.go:89] found id: "c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:05:56.972503  165060 cri.go:89] found id: ""
	I0617 12:05:56.972512  165060 logs.go:276] 1 containers: [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d]
	I0617 12:05:56.972559  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:56.977635  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:56.977696  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:57.012791  165060 cri.go:89] found id: "2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
	I0617 12:05:57.012816  165060 cri.go:89] found id: ""
	I0617 12:05:57.012826  165060 logs.go:276] 1 containers: [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079]
	I0617 12:05:57.012882  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:57.016856  165060 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:57.016909  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:57.052111  165060 cri.go:89] found id: ""
	I0617 12:05:57.052146  165060 logs.go:276] 0 containers: []
	W0617 12:05:57.052156  165060 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:57.052163  165060 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:05:57.052211  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:05:57.094600  165060 cri.go:89] found id: "02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:05:57.094619  165060 cri.go:89] found id: "7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
	I0617 12:05:57.094622  165060 cri.go:89] found id: ""
	I0617 12:05:57.094630  165060 logs.go:276] 2 containers: [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36]
	I0617 12:05:57.094700  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:57.099250  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:57.104252  165060 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:57.104281  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:57.162000  165060 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:57.162027  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:05:57.285448  165060 logs.go:123] Gathering logs for etcd [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9] ...
	I0617 12:05:57.285490  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:05:57.340781  165060 logs.go:123] Gathering logs for coredns [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7] ...
	I0617 12:05:57.340820  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:05:57.383507  165060 logs.go:123] Gathering logs for kube-scheduler [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d] ...
	I0617 12:05:57.383540  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:05:57.428747  165060 logs.go:123] Gathering logs for kube-proxy [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d] ...
	I0617 12:05:57.428792  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:05:57.468739  165060 logs.go:123] Gathering logs for kube-controller-manager [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079] ...
	I0617 12:05:57.468770  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
	I0617 12:05:57.531317  165060 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:57.531355  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:58.063787  165060 logs.go:123] Gathering logs for container status ...
	I0617 12:05:58.063838  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:58.129384  165060 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:58.129416  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:58.144078  165060 logs.go:123] Gathering logs for kube-apiserver [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3] ...
	I0617 12:05:58.144152  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:05:58.189028  165060 logs.go:123] Gathering logs for storage-provisioner [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92] ...
	I0617 12:05:58.189068  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:05:58.227144  165060 logs.go:123] Gathering logs for storage-provisioner [7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36] ...
	I0617 12:05:58.227178  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
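
[Note] Each "Gathering logs for ..." step in the cycle above reduces to two shell commands: list container IDs for a component with "crictl ps -a --quiet --name=<component>", then dump recent output with "crictl logs --tail 400 <id>". The sketch below shows that loop in compact form; it shells out locally instead of going through minikube's ssh_runner, and the component list is trimmed, both assumptions made for brevity.

// crilogs.go: sketch of the per-component log-gathering loop above:
// list container IDs via crictl, then tail each container's logs.
// Running locally rather than via minikube's ssh_runner is an assumption.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("listing %s containers failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// Matches the "No container was found matching ..." warnings above.
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		for _, id := range ids {
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s\n", name, id, logs)
		}
	}
}
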
	I0617 12:05:54.838580  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:57.333884  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:59.836198  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:01.837155  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:58.995221  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:59.008481  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:59.008555  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:59.043854  165698 cri.go:89] found id: ""
	I0617 12:05:59.043887  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.043914  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:59.043935  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:59.044003  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:59.081488  165698 cri.go:89] found id: ""
	I0617 12:05:59.081522  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.081530  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:59.081537  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:59.081596  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:59.118193  165698 cri.go:89] found id: ""
	I0617 12:05:59.118222  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.118232  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:59.118240  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:59.118306  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:59.150286  165698 cri.go:89] found id: ""
	I0617 12:05:59.150315  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.150327  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:59.150335  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:59.150381  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:59.191426  165698 cri.go:89] found id: ""
	I0617 12:05:59.191450  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.191485  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:59.191493  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:59.191547  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:59.224933  165698 cri.go:89] found id: ""
	I0617 12:05:59.224965  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.224974  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:59.224998  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:59.225061  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:59.255929  165698 cri.go:89] found id: ""
	I0617 12:05:59.255956  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.255965  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:59.255971  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:59.256025  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:59.293072  165698 cri.go:89] found id: ""
	I0617 12:05:59.293097  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.293104  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:59.293114  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:59.293126  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:59.354240  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:59.354267  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:59.367715  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:59.367744  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:59.446352  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:59.446381  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:59.446396  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:59.528701  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:59.528738  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:02.071616  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:06:02.088050  165698 kubeadm.go:591] duration metric: took 4m3.493743262s to restartPrimaryControlPlane
	W0617 12:06:02.088159  165698 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0617 12:06:02.088194  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0617 12:06:02.552133  165698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:06:02.570136  165698 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:06:02.582299  165698 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:06:02.594775  165698 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:06:02.594809  165698 kubeadm.go:156] found existing configuration files:
	
	I0617 12:06:02.594867  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:06:02.605875  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:06:02.605954  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:06:02.617780  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:06:02.628284  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:06:02.628359  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:06:02.639128  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:06:02.650079  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:06:02.650144  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:06:02.660879  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:06:02.671170  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:06:02.671249  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
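
[Note] The grep-then-rm sequence above is the stale-config cleanup: each of the four kubeconfig files is kept only if it references the expected control-plane endpoint, otherwise it is removed before "kubeadm init" runs. A rough sketch of that check-and-clean loop follows; it operates locally instead of over SSH and is an illustration of the pattern, not minikube's implementation.

// cleanconf.go: sketch of the stale-kubeconfig cleanup seen above: keep a
// kubeconfig only if it references the control-plane endpoint, otherwise
// remove it before kubeadm init. Local execution (no SSH) is an assumption.
package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Same effect as the "sudo rm -f" calls in the log: drop configs
			// that are missing or that point at a different endpoint.
			_ = os.Remove(f)
			fmt.Printf("removed stale config (if any): %s\n", f)
			continue
		}
		fmt.Printf("keeping %s\n", f)
	}
}
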
	I0617 12:06:02.682071  165698 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0617 12:06:02.753750  165698 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0617 12:06:02.753913  165698 kubeadm.go:309] [preflight] Running pre-flight checks
	I0617 12:06:02.897384  165698 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0617 12:06:02.897530  165698 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0617 12:06:02.897685  165698 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0617 12:06:03.079116  165698 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0617 12:06:00.764533  165060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:06:00.781564  165060 api_server.go:72] duration metric: took 4m14.875617542s to wait for apiserver process to appear ...
	I0617 12:06:00.781593  165060 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:06:00.781642  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:06:00.781706  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:06:00.817980  165060 cri.go:89] found id: "5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:06:00.818013  165060 cri.go:89] found id: ""
	I0617 12:06:00.818024  165060 logs.go:276] 1 containers: [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3]
	I0617 12:06:00.818080  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:00.822664  165060 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:06:00.822759  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:06:00.861518  165060 cri.go:89] found id: "fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:06:00.861545  165060 cri.go:89] found id: ""
	I0617 12:06:00.861556  165060 logs.go:276] 1 containers: [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9]
	I0617 12:06:00.861614  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:00.865885  165060 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:06:00.865973  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:06:00.900844  165060 cri.go:89] found id: "c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:06:00.900864  165060 cri.go:89] found id: ""
	I0617 12:06:00.900875  165060 logs.go:276] 1 containers: [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7]
	I0617 12:06:00.900930  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:00.905253  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:06:00.905317  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:06:00.938998  165060 cri.go:89] found id: "157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:06:00.939036  165060 cri.go:89] found id: ""
	I0617 12:06:00.939046  165060 logs.go:276] 1 containers: [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d]
	I0617 12:06:00.939114  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:00.943170  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:06:00.943234  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:06:00.982923  165060 cri.go:89] found id: "c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:06:00.982953  165060 cri.go:89] found id: ""
	I0617 12:06:00.982964  165060 logs.go:276] 1 containers: [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d]
	I0617 12:06:00.983034  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:00.987696  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:06:00.987769  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:06:01.033789  165060 cri.go:89] found id: "2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
	I0617 12:06:01.033825  165060 cri.go:89] found id: ""
	I0617 12:06:01.033837  165060 logs.go:276] 1 containers: [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079]
	I0617 12:06:01.033901  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:01.038800  165060 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:06:01.038861  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:06:01.077797  165060 cri.go:89] found id: ""
	I0617 12:06:01.077834  165060 logs.go:276] 0 containers: []
	W0617 12:06:01.077846  165060 logs.go:278] No container was found matching "kindnet"
	I0617 12:06:01.077855  165060 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:06:01.077916  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:06:01.116275  165060 cri.go:89] found id: "02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:06:01.116296  165060 cri.go:89] found id: "7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
	I0617 12:06:01.116303  165060 cri.go:89] found id: ""
	I0617 12:06:01.116311  165060 logs.go:276] 2 containers: [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36]
	I0617 12:06:01.116365  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:01.121088  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:01.125393  165060 logs.go:123] Gathering logs for container status ...
	I0617 12:06:01.125417  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:01.170817  165060 logs.go:123] Gathering logs for kubelet ...
	I0617 12:06:01.170844  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:06:01.223072  165060 logs.go:123] Gathering logs for kube-apiserver [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3] ...
	I0617 12:06:01.223114  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:06:01.269212  165060 logs.go:123] Gathering logs for kube-scheduler [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d] ...
	I0617 12:06:01.269245  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:06:01.313518  165060 logs.go:123] Gathering logs for kube-proxy [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d] ...
	I0617 12:06:01.313557  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:06:01.357935  165060 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:06:01.357965  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:06:01.784493  165060 logs.go:123] Gathering logs for storage-provisioner [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92] ...
	I0617 12:06:01.784542  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:06:01.825824  165060 logs.go:123] Gathering logs for storage-provisioner [7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36] ...
	I0617 12:06:01.825851  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
	I0617 12:06:01.866216  165060 logs.go:123] Gathering logs for dmesg ...
	I0617 12:06:01.866252  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:06:01.881292  165060 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:06:01.881316  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:06:02.000026  165060 logs.go:123] Gathering logs for etcd [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9] ...
	I0617 12:06:02.000063  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:06:02.043491  165060 logs.go:123] Gathering logs for coredns [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7] ...
	I0617 12:06:02.043524  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:06:02.081957  165060 logs.go:123] Gathering logs for kube-controller-manager [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079] ...
	I0617 12:06:02.081984  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
	I0617 12:05:59.835769  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:02.332739  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:03.080903  165698 out.go:204]   - Generating certificates and keys ...
	I0617 12:06:03.081006  165698 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0617 12:06:03.081080  165698 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0617 12:06:03.081168  165698 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0617 12:06:03.081250  165698 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0617 12:06:03.081377  165698 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0617 12:06:03.081457  165698 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0617 12:06:03.082418  165698 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0617 12:06:03.083003  165698 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0617 12:06:03.083917  165698 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0617 12:06:03.084820  165698 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0617 12:06:03.085224  165698 kubeadm.go:309] [certs] Using the existing "sa" key
	I0617 12:06:03.085307  165698 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0617 12:06:03.203342  165698 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0617 12:06:03.430428  165698 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0617 12:06:03.570422  165698 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0617 12:06:03.772092  165698 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0617 12:06:03.793105  165698 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0617 12:06:03.793206  165698 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0617 12:06:03.793261  165698 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0617 12:06:03.919738  165698 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0617 12:06:04.333408  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:06.333963  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:03.921593  165698 out.go:204]   - Booting up control plane ...
	I0617 12:06:03.921708  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0617 12:06:03.928168  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0617 12:06:03.928279  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0617 12:06:03.937197  165698 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0617 12:06:03.939967  165698 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0617 12:06:04.644102  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:06:04.648733  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 200:
	ok
	I0617 12:06:04.649862  165060 api_server.go:141] control plane version: v1.30.1
	I0617 12:06:04.649894  165060 api_server.go:131] duration metric: took 3.86829173s to wait for apiserver health ...
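
[Note] The healthz probe above is an HTTPS GET against the apiserver's /healthz endpoint, repeated until it answers 200 "ok" or the wait expires. A stripped-down version of that check is sketched here; the hard-coded address, the 4-minute budget, and InsecureSkipVerify are illustrative shortcuts and not necessarily how minikube's own client is configured.

// healthz.go: sketch of the apiserver health wait seen above: poll
// https://<ip>:8443/healthz until it returns 200 "ok" or time runs out.
// The address and the relaxed TLS settings are assumptions for illustration.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.72.199:8443/healthz"
	deadline := time.Now().Add(4 * time.Minute)

	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				// Mirrors the "returned 200: ok" line in the log above.
				fmt.Printf("%s returned 200:\n%s\n", url, body)
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver healthz never became ready")
}
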
	I0617 12:06:04.649905  165060 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:06:04.649936  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:06:04.649997  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:06:04.688904  165060 cri.go:89] found id: "5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:06:04.688923  165060 cri.go:89] found id: ""
	I0617 12:06:04.688931  165060 logs.go:276] 1 containers: [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3]
	I0617 12:06:04.688975  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.695049  165060 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:06:04.695110  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:06:04.730292  165060 cri.go:89] found id: "fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:06:04.730314  165060 cri.go:89] found id: ""
	I0617 12:06:04.730322  165060 logs.go:276] 1 containers: [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9]
	I0617 12:06:04.730373  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.734432  165060 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:06:04.734486  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:06:04.771401  165060 cri.go:89] found id: "c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:06:04.771418  165060 cri.go:89] found id: ""
	I0617 12:06:04.771426  165060 logs.go:276] 1 containers: [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7]
	I0617 12:06:04.771496  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.775822  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:06:04.775876  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:06:04.816111  165060 cri.go:89] found id: "157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:06:04.816131  165060 cri.go:89] found id: ""
	I0617 12:06:04.816139  165060 logs.go:276] 1 containers: [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d]
	I0617 12:06:04.816185  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.820614  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:06:04.820672  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:06:04.865387  165060 cri.go:89] found id: "c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:06:04.865411  165060 cri.go:89] found id: ""
	I0617 12:06:04.865421  165060 logs.go:276] 1 containers: [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d]
	I0617 12:06:04.865479  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.870192  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:06:04.870263  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:06:04.912698  165060 cri.go:89] found id: "2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
	I0617 12:06:04.912723  165060 cri.go:89] found id: ""
	I0617 12:06:04.912734  165060 logs.go:276] 1 containers: [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079]
	I0617 12:06:04.912796  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.917484  165060 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:06:04.917563  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:06:04.954076  165060 cri.go:89] found id: ""
	I0617 12:06:04.954109  165060 logs.go:276] 0 containers: []
	W0617 12:06:04.954120  165060 logs.go:278] No container was found matching "kindnet"
	I0617 12:06:04.954129  165060 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:06:04.954196  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:06:04.995832  165060 cri.go:89] found id: "02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:06:04.995858  165060 cri.go:89] found id: "7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
	I0617 12:06:04.995862  165060 cri.go:89] found id: ""
	I0617 12:06:04.995869  165060 logs.go:276] 2 containers: [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36]
	I0617 12:06:04.995928  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:05.000741  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:05.004995  165060 logs.go:123] Gathering logs for storage-provisioner [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92] ...
	I0617 12:06:05.005026  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:06:05.040651  165060 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:06:05.040692  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:06:05.461644  165060 logs.go:123] Gathering logs for container status ...
	I0617 12:06:05.461685  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:05.508706  165060 logs.go:123] Gathering logs for kubelet ...
	I0617 12:06:05.508733  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:06:05.562418  165060 logs.go:123] Gathering logs for kube-apiserver [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3] ...
	I0617 12:06:05.562461  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:06:05.606489  165060 logs.go:123] Gathering logs for etcd [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9] ...
	I0617 12:06:05.606527  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:06:05.651719  165060 logs.go:123] Gathering logs for coredns [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7] ...
	I0617 12:06:05.651753  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:06:05.688736  165060 logs.go:123] Gathering logs for kube-proxy [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d] ...
	I0617 12:06:05.688772  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:06:05.730649  165060 logs.go:123] Gathering logs for dmesg ...
	I0617 12:06:05.730679  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:06:05.745482  165060 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:06:05.745511  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:06:05.849002  165060 logs.go:123] Gathering logs for kube-scheduler [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d] ...
	I0617 12:06:05.849025  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:06:05.890802  165060 logs.go:123] Gathering logs for kube-controller-manager [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079] ...
	I0617 12:06:05.890836  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
	I0617 12:06:05.946444  165060 logs.go:123] Gathering logs for storage-provisioner [7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36] ...
	I0617 12:06:05.946474  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
	I0617 12:06:04.332977  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:06.834683  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:08.489561  165060 system_pods.go:59] 8 kube-system pods found
	I0617 12:06:08.489593  165060 system_pods.go:61] "coredns-7db6d8ff4d-9bbjg" [1ba0eee5-436e-4c83-b5ce-3c907d66b641] Running
	I0617 12:06:08.489597  165060 system_pods.go:61] "etcd-embed-certs-136195" [6dc81a80-c56b-4517-af82-c450cf9578f5] Running
	I0617 12:06:08.489601  165060 system_pods.go:61] "kube-apiserver-embed-certs-136195" [bd61a715-2471-4dca-aa48-a157531ebd6b] Running
	I0617 12:06:08.489605  165060 system_pods.go:61] "kube-controller-manager-embed-certs-136195" [194db4b0-75c2-4905-8e4d-813185497b51] Running
	I0617 12:06:08.489607  165060 system_pods.go:61] "kube-proxy-25d5n" [52b6d09a-899f-40c4-b1f3-7842ae755165] Running
	I0617 12:06:08.489610  165060 system_pods.go:61] "kube-scheduler-embed-certs-136195" [b04d3798-f465-4f82-9ec7-777ea62d5b94] Running
	I0617 12:06:08.489616  165060 system_pods.go:61] "metrics-server-569cc877fc-dmhfs" [31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:06:08.489620  165060 system_pods.go:61] "storage-provisioner" [4b04a38a-5006-4496-a24d-0940029193de] Running
	I0617 12:06:08.489626  165060 system_pods.go:74] duration metric: took 3.839715717s to wait for pod list to return data ...
	I0617 12:06:08.489633  165060 default_sa.go:34] waiting for default service account to be created ...
	I0617 12:06:08.491984  165060 default_sa.go:45] found service account: "default"
	I0617 12:06:08.492007  165060 default_sa.go:55] duration metric: took 2.365306ms for default service account to be created ...
	I0617 12:06:08.492014  165060 system_pods.go:116] waiting for k8s-apps to be running ...
	I0617 12:06:08.497834  165060 system_pods.go:86] 8 kube-system pods found
	I0617 12:06:08.497865  165060 system_pods.go:89] "coredns-7db6d8ff4d-9bbjg" [1ba0eee5-436e-4c83-b5ce-3c907d66b641] Running
	I0617 12:06:08.497873  165060 system_pods.go:89] "etcd-embed-certs-136195" [6dc81a80-c56b-4517-af82-c450cf9578f5] Running
	I0617 12:06:08.497880  165060 system_pods.go:89] "kube-apiserver-embed-certs-136195" [bd61a715-2471-4dca-aa48-a157531ebd6b] Running
	I0617 12:06:08.497887  165060 system_pods.go:89] "kube-controller-manager-embed-certs-136195" [194db4b0-75c2-4905-8e4d-813185497b51] Running
	I0617 12:06:08.497891  165060 system_pods.go:89] "kube-proxy-25d5n" [52b6d09a-899f-40c4-b1f3-7842ae755165] Running
	I0617 12:06:08.497899  165060 system_pods.go:89] "kube-scheduler-embed-certs-136195" [b04d3798-f465-4f82-9ec7-777ea62d5b94] Running
	I0617 12:06:08.497905  165060 system_pods.go:89] "metrics-server-569cc877fc-dmhfs" [31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:06:08.497914  165060 system_pods.go:89] "storage-provisioner" [4b04a38a-5006-4496-a24d-0940029193de] Running
	I0617 12:06:08.497921  165060 system_pods.go:126] duration metric: took 5.901391ms to wait for k8s-apps to be running ...
	I0617 12:06:08.497927  165060 system_svc.go:44] waiting for kubelet service to be running ....
	I0617 12:06:08.497970  165060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:06:08.520136  165060 system_svc.go:56] duration metric: took 22.203601ms WaitForService to wait for kubelet
	I0617 12:06:08.520159  165060 kubeadm.go:576] duration metric: took 4m22.614222011s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 12:06:08.520178  165060 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:06:08.522704  165060 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:06:08.522741  165060 node_conditions.go:123] node cpu capacity is 2
	I0617 12:06:08.522758  165060 node_conditions.go:105] duration metric: took 2.57391ms to run NodePressure ...
	I0617 12:06:08.522773  165060 start.go:240] waiting for startup goroutines ...
	I0617 12:06:08.522787  165060 start.go:245] waiting for cluster config update ...
	I0617 12:06:08.522803  165060 start.go:254] writing updated cluster config ...
	I0617 12:06:08.523139  165060 ssh_runner.go:195] Run: rm -f paused
	I0617 12:06:08.577942  165060 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0617 12:06:08.579946  165060 out.go:177] * Done! kubectl is now configured to use "embed-certs-136195" cluster and "default" namespace by default
	I0617 12:06:08.334463  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:10.335642  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:09.331628  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:11.332586  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:13.332703  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:12.834827  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:15.334721  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:15.333004  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:17.834357  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:17.833756  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:19.835364  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:22.333742  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:20.332127  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:22.832111  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:24.333945  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:26.335021  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:25.332366  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:27.835364  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:28.833758  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:31.334155  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:29.835500  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:32.332236  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:33.833599  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:35.834190  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:34.831122  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:36.833202  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:38.334352  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:40.335399  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:40.335423  166103 pod_ready.go:81] duration metric: took 4m0.008367222s for pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace to be "Ready" ...
	E0617 12:06:40.335433  166103 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0617 12:06:40.335441  166103 pod_ready.go:38] duration metric: took 4m7.419505963s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:06:40.335475  166103 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:06:40.335505  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:06:40.335556  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:06:40.400354  166103 cri.go:89] found id: "5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:40.400384  166103 cri.go:89] found id: ""
	I0617 12:06:40.400394  166103 logs.go:276] 1 containers: [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b]
	I0617 12:06:40.400453  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.405124  166103 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:06:40.405186  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:06:40.440583  166103 cri.go:89] found id: "8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:40.440610  166103 cri.go:89] found id: ""
	I0617 12:06:40.440619  166103 logs.go:276] 1 containers: [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862]
	I0617 12:06:40.440665  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.445086  166103 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:06:40.445141  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:06:40.489676  166103 cri.go:89] found id: "26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:40.489698  166103 cri.go:89] found id: ""
	I0617 12:06:40.489706  166103 logs.go:276] 1 containers: [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323]
	I0617 12:06:40.489752  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.494402  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:06:40.494514  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:06:40.535486  166103 cri.go:89] found id: "2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
	I0617 12:06:40.535517  166103 cri.go:89] found id: ""
	I0617 12:06:40.535527  166103 logs.go:276] 1 containers: [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b]
	I0617 12:06:40.535589  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.543265  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:06:40.543330  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:06:40.579564  166103 cri.go:89] found id: "63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:40.579588  166103 cri.go:89] found id: ""
	I0617 12:06:40.579598  166103 logs.go:276] 1 containers: [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da]
	I0617 12:06:40.579658  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.583865  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:06:40.583928  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:06:40.642408  166103 cri.go:89] found id: "36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:40.642435  166103 cri.go:89] found id: ""
	I0617 12:06:40.642445  166103 logs.go:276] 1 containers: [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685]
	I0617 12:06:40.642509  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.647892  166103 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:06:40.647959  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:06:40.698654  166103 cri.go:89] found id: ""
	I0617 12:06:40.698686  166103 logs.go:276] 0 containers: []
	W0617 12:06:40.698696  166103 logs.go:278] No container was found matching "kindnet"
	I0617 12:06:40.698704  166103 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:06:40.698768  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:06:40.749641  166103 cri.go:89] found id: "adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:40.749663  166103 cri.go:89] found id: "e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:40.749668  166103 cri.go:89] found id: ""
	I0617 12:06:40.749678  166103 logs.go:276] 2 containers: [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc]
	I0617 12:06:40.749742  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.754926  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.760126  166103 logs.go:123] Gathering logs for container status ...
	I0617 12:06:40.760152  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:40.804119  166103 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:06:40.804159  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:06:40.942459  166103 logs.go:123] Gathering logs for etcd [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862] ...
	I0617 12:06:40.942495  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:40.994721  166103 logs.go:123] Gathering logs for coredns [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323] ...
	I0617 12:06:40.994761  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:41.037005  166103 logs.go:123] Gathering logs for kube-scheduler [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b] ...
	I0617 12:06:41.037040  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
	I0617 12:06:41.080715  166103 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:06:41.080751  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:06:41.606478  166103 logs.go:123] Gathering logs for storage-provisioner [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195] ...
	I0617 12:06:41.606516  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:41.643963  166103 logs.go:123] Gathering logs for storage-provisioner [e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc] ...
	I0617 12:06:41.644003  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:41.683405  166103 logs.go:123] Gathering logs for kubelet ...
	I0617 12:06:41.683443  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:06:41.737365  166103 logs.go:123] Gathering logs for dmesg ...
	I0617 12:06:41.737400  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:06:41.752552  166103 logs.go:123] Gathering logs for kube-apiserver [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b] ...
	I0617 12:06:41.752582  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:41.804447  166103 logs.go:123] Gathering logs for kube-proxy [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da] ...
	I0617 12:06:41.804480  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:41.847266  166103 logs.go:123] Gathering logs for kube-controller-manager [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685] ...
	I0617 12:06:41.847302  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:39.333111  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:41.836327  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:44.408776  166103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:06:44.427500  166103 api_server.go:72] duration metric: took 4m19.25316479s to wait for apiserver process to appear ...
	I0617 12:06:44.427531  166103 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:06:44.427577  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:06:44.427634  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:06:44.466379  166103 cri.go:89] found id: "5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:44.466408  166103 cri.go:89] found id: ""
	I0617 12:06:44.466418  166103 logs.go:276] 1 containers: [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b]
	I0617 12:06:44.466481  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.470832  166103 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:06:44.470901  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:06:44.511689  166103 cri.go:89] found id: "8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:44.511713  166103 cri.go:89] found id: ""
	I0617 12:06:44.511722  166103 logs.go:276] 1 containers: [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862]
	I0617 12:06:44.511769  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.516221  166103 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:06:44.516303  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:06:44.560612  166103 cri.go:89] found id: "26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:44.560634  166103 cri.go:89] found id: ""
	I0617 12:06:44.560642  166103 logs.go:276] 1 containers: [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323]
	I0617 12:06:44.560695  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.564998  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:06:44.565068  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:06:44.600133  166103 cri.go:89] found id: "2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
	I0617 12:06:44.600155  166103 cri.go:89] found id: ""
	I0617 12:06:44.600164  166103 logs.go:276] 1 containers: [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b]
	I0617 12:06:44.600220  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.605431  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:06:44.605494  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:06:44.648647  166103 cri.go:89] found id: "63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:44.648678  166103 cri.go:89] found id: ""
	I0617 12:06:44.648688  166103 logs.go:276] 1 containers: [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da]
	I0617 12:06:44.648758  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.653226  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:06:44.653307  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:06:44.701484  166103 cri.go:89] found id: "36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:44.701508  166103 cri.go:89] found id: ""
	I0617 12:06:44.701516  166103 logs.go:276] 1 containers: [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685]
	I0617 12:06:44.701572  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.707827  166103 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:06:44.707890  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:06:44.752362  166103 cri.go:89] found id: ""
	I0617 12:06:44.752391  166103 logs.go:276] 0 containers: []
	W0617 12:06:44.752402  166103 logs.go:278] No container was found matching "kindnet"
	I0617 12:06:44.752410  166103 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:06:44.752473  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:06:44.798926  166103 cri.go:89] found id: "adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:44.798955  166103 cri.go:89] found id: "e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:44.798961  166103 cri.go:89] found id: ""
	I0617 12:06:44.798970  166103 logs.go:276] 2 containers: [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc]
	I0617 12:06:44.799038  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.804702  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.810673  166103 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:06:44.810702  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:06:44.939596  166103 logs.go:123] Gathering logs for etcd [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862] ...
	I0617 12:06:44.939627  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:44.987902  166103 logs.go:123] Gathering logs for coredns [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323] ...
	I0617 12:06:44.987936  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:45.023931  166103 logs.go:123] Gathering logs for kube-proxy [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da] ...
	I0617 12:06:45.023962  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:45.060432  166103 logs.go:123] Gathering logs for storage-provisioner [e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc] ...
	I0617 12:06:45.060468  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:45.095643  166103 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:06:45.095679  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:06:45.553973  166103 logs.go:123] Gathering logs for kubelet ...
	I0617 12:06:45.554018  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:06:45.611997  166103 logs.go:123] Gathering logs for dmesg ...
	I0617 12:06:45.612036  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:06:45.626973  166103 logs.go:123] Gathering logs for container status ...
	I0617 12:06:45.627002  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:45.671119  166103 logs.go:123] Gathering logs for kube-controller-manager [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685] ...
	I0617 12:06:45.671151  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:45.728097  166103 logs.go:123] Gathering logs for storage-provisioner [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195] ...
	I0617 12:06:45.728133  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:45.765586  166103 logs.go:123] Gathering logs for kube-apiserver [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b] ...
	I0617 12:06:45.765615  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:45.818347  166103 logs.go:123] Gathering logs for kube-scheduler [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b] ...
	I0617 12:06:45.818387  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
	I0617 12:06:43.941225  165698 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0617 12:06:43.941341  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:06:43.941612  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
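(The [kubelet-check] lines above come from kubeadm repeatedly probing the kubelet's local healthz endpoint, port 10248 by default, until it answers. A rough, hypothetical sketch of such a probe loop follows; the interval and the 4m budget are assumptions based on this log, and this is not kubeadm's code.)

// kubeletcheck.go - illustrative healthz probe loop; endpoint path and timings are assumptions.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	const url = "http://localhost:10248/healthz" // kubelet's default healthz port
	deadline := time.Now().Add(4 * time.Minute)  // the log shows a 4m0s kubelet-check budget

	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet is healthy")
				return
			}
		}
		// A "connection refused" here is what produces the repeated
		// "[kubelet-check] It seems like the kubelet isn't running or healthy." lines.
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for the kubelet healthz endpoint")
}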
	I0617 12:06:44.331481  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:46.831820  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:48.362826  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:06:48.366936  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 200:
	ok
	I0617 12:06:48.367973  166103 api_server.go:141] control plane version: v1.30.1
	I0617 12:06:48.367992  166103 api_server.go:131] duration metric: took 3.940452539s to wait for apiserver health ...
	I0617 12:06:48.367999  166103 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:06:48.368021  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:06:48.368066  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:06:48.404797  166103 cri.go:89] found id: "5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:48.404819  166103 cri.go:89] found id: ""
	I0617 12:06:48.404828  166103 logs.go:276] 1 containers: [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b]
	I0617 12:06:48.404887  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.409105  166103 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:06:48.409162  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:06:48.456233  166103 cri.go:89] found id: "8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:48.456266  166103 cri.go:89] found id: ""
	I0617 12:06:48.456277  166103 logs.go:276] 1 containers: [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862]
	I0617 12:06:48.456336  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.460550  166103 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:06:48.460625  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:06:48.498447  166103 cri.go:89] found id: "26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:48.498472  166103 cri.go:89] found id: ""
	I0617 12:06:48.498481  166103 logs.go:276] 1 containers: [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323]
	I0617 12:06:48.498564  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.503826  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:06:48.503906  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:06:48.554405  166103 cri.go:89] found id: "2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
	I0617 12:06:48.554435  166103 cri.go:89] found id: ""
	I0617 12:06:48.554446  166103 logs.go:276] 1 containers: [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b]
	I0617 12:06:48.554504  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.559175  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:06:48.559240  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:06:48.596764  166103 cri.go:89] found id: "63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:48.596791  166103 cri.go:89] found id: ""
	I0617 12:06:48.596801  166103 logs.go:276] 1 containers: [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da]
	I0617 12:06:48.596863  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.601197  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:06:48.601260  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:06:48.654027  166103 cri.go:89] found id: "36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:48.654053  166103 cri.go:89] found id: ""
	I0617 12:06:48.654061  166103 logs.go:276] 1 containers: [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685]
	I0617 12:06:48.654113  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.659492  166103 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:06:48.659557  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:06:48.706749  166103 cri.go:89] found id: ""
	I0617 12:06:48.706777  166103 logs.go:276] 0 containers: []
	W0617 12:06:48.706786  166103 logs.go:278] No container was found matching "kindnet"
	I0617 12:06:48.706794  166103 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:06:48.706859  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:06:48.750556  166103 cri.go:89] found id: "adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:48.750588  166103 cri.go:89] found id: "e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:48.750594  166103 cri.go:89] found id: ""
	I0617 12:06:48.750607  166103 logs.go:276] 2 containers: [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc]
	I0617 12:06:48.750671  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.755368  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.760128  166103 logs.go:123] Gathering logs for kube-apiserver [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b] ...
	I0617 12:06:48.760154  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:48.802187  166103 logs.go:123] Gathering logs for etcd [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862] ...
	I0617 12:06:48.802224  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:48.861041  166103 logs.go:123] Gathering logs for kube-controller-manager [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685] ...
	I0617 12:06:48.861076  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:48.917864  166103 logs.go:123] Gathering logs for storage-provisioner [e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc] ...
	I0617 12:06:48.917902  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:48.963069  166103 logs.go:123] Gathering logs for container status ...
	I0617 12:06:48.963099  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:49.012109  166103 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:06:49.012149  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:06:49.119880  166103 logs.go:123] Gathering logs for dmesg ...
	I0617 12:06:49.119915  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:06:49.136461  166103 logs.go:123] Gathering logs for coredns [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323] ...
	I0617 12:06:49.136497  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:49.177339  166103 logs.go:123] Gathering logs for kube-scheduler [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b] ...
	I0617 12:06:49.177377  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
	I0617 12:06:49.219101  166103 logs.go:123] Gathering logs for kube-proxy [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da] ...
	I0617 12:06:49.219135  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:49.256646  166103 logs.go:123] Gathering logs for storage-provisioner [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195] ...
	I0617 12:06:49.256687  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:49.302208  166103 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:06:49.302243  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:06:49.653713  166103 logs.go:123] Gathering logs for kubelet ...
	I0617 12:06:49.653758  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:06:52.217069  166103 system_pods.go:59] 8 kube-system pods found
	I0617 12:06:52.217102  166103 system_pods.go:61] "coredns-7db6d8ff4d-mnw24" [1e6c4ff3-f0dc-43da-abd8-baaed7dca40c] Running
	I0617 12:06:52.217107  166103 system_pods.go:61] "etcd-default-k8s-diff-port-991309" [820a4f27-cf83-4edb-a2ea-edba6673d851] Running
	I0617 12:06:52.217111  166103 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-991309" [26e6c19d-6f70-4924-83f5-563c8508c9e3] Running
	I0617 12:06:52.217115  166103 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-991309" [01e7c468-98a6-48f3-a158-59e97fa8279c] Running
	I0617 12:06:52.217119  166103 system_pods.go:61] "kube-proxy-jn5kp" [d6935148-7ee8-4655-8327-9f1ee4c933de] Running
	I0617 12:06:52.217122  166103 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-991309" [53ecd22c-05cf-48a5-b7e5-925392085f7a] Running
	I0617 12:06:52.217128  166103 system_pods.go:61] "metrics-server-569cc877fc-n2svp" [5b637d97-3183-4324-98cf-dd69a2968578] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:06:52.217134  166103 system_pods.go:61] "storage-provisioner" [92b20aec-29c2-4256-86be-7f58f66585dd] Running
	I0617 12:06:52.217145  166103 system_pods.go:74] duration metric: took 3.849140024s to wait for pod list to return data ...
	I0617 12:06:52.217152  166103 default_sa.go:34] waiting for default service account to be created ...
	I0617 12:06:52.219308  166103 default_sa.go:45] found service account: "default"
	I0617 12:06:52.219330  166103 default_sa.go:55] duration metric: took 2.172323ms for default service account to be created ...
	I0617 12:06:52.219339  166103 system_pods.go:116] waiting for k8s-apps to be running ...
	I0617 12:06:52.224239  166103 system_pods.go:86] 8 kube-system pods found
	I0617 12:06:52.224265  166103 system_pods.go:89] "coredns-7db6d8ff4d-mnw24" [1e6c4ff3-f0dc-43da-abd8-baaed7dca40c] Running
	I0617 12:06:52.224270  166103 system_pods.go:89] "etcd-default-k8s-diff-port-991309" [820a4f27-cf83-4edb-a2ea-edba6673d851] Running
	I0617 12:06:52.224276  166103 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-991309" [26e6c19d-6f70-4924-83f5-563c8508c9e3] Running
	I0617 12:06:52.224280  166103 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-991309" [01e7c468-98a6-48f3-a158-59e97fa8279c] Running
	I0617 12:06:52.224284  166103 system_pods.go:89] "kube-proxy-jn5kp" [d6935148-7ee8-4655-8327-9f1ee4c933de] Running
	I0617 12:06:52.224288  166103 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-991309" [53ecd22c-05cf-48a5-b7e5-925392085f7a] Running
	I0617 12:06:52.224299  166103 system_pods.go:89] "metrics-server-569cc877fc-n2svp" [5b637d97-3183-4324-98cf-dd69a2968578] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:06:52.224305  166103 system_pods.go:89] "storage-provisioner" [92b20aec-29c2-4256-86be-7f58f66585dd] Running
	I0617 12:06:52.224319  166103 system_pods.go:126] duration metric: took 4.973603ms to wait for k8s-apps to be running ...
	I0617 12:06:52.224332  166103 system_svc.go:44] waiting for kubelet service to be running ....
	I0617 12:06:52.224380  166103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:06:52.241121  166103 system_svc.go:56] duration metric: took 16.776061ms WaitForService to wait for kubelet
	I0617 12:06:52.241156  166103 kubeadm.go:576] duration metric: took 4m27.066827271s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 12:06:52.241181  166103 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:06:52.245359  166103 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:06:52.245407  166103 node_conditions.go:123] node cpu capacity is 2
	I0617 12:06:52.245423  166103 node_conditions.go:105] duration metric: took 4.235898ms to run NodePressure ...
	I0617 12:06:52.245440  166103 start.go:240] waiting for startup goroutines ...
	I0617 12:06:52.245449  166103 start.go:245] waiting for cluster config update ...
	I0617 12:06:52.245462  166103 start.go:254] writing updated cluster config ...
	I0617 12:06:52.245969  166103 ssh_runner.go:195] Run: rm -f paused
	I0617 12:06:52.299326  166103 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0617 12:06:52.301413  166103 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-991309" cluster and "default" namespace by default
	I0617 12:06:48.942159  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:06:48.942434  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:06:48.835113  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:51.331395  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:53.331551  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:55.332455  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:57.835143  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:58.942977  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:06:58.943290  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:07:00.331823  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:07:02.332214  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:07:04.831284  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:07:06.832082  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:07:07.325414  164809 pod_ready.go:81] duration metric: took 4m0.000322555s for pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace to be "Ready" ...
	E0617 12:07:07.325446  164809 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0617 12:07:07.325464  164809 pod_ready.go:38] duration metric: took 4m12.035995337s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:07:07.325494  164809 kubeadm.go:591] duration metric: took 4m19.041266463s to restartPrimaryControlPlane
	W0617 12:07:07.325556  164809 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0617 12:07:07.325587  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0617 12:07:18.944149  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:07:18.944368  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:07:38.980378  164809 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.654762508s)
	I0617 12:07:38.980451  164809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:07:38.997845  164809 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:07:39.009456  164809 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:07:39.020407  164809 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:07:39.020430  164809 kubeadm.go:156] found existing configuration files:
	
	I0617 12:07:39.020472  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:07:39.030323  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:07:39.030376  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:07:39.040298  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:07:39.049715  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:07:39.049757  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:07:39.060493  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:07:39.069921  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:07:39.069973  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:07:39.080049  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:07:39.089524  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:07:39.089569  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 12:07:39.099082  164809 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0617 12:07:39.154963  164809 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0617 12:07:39.155083  164809 kubeadm.go:309] [preflight] Running pre-flight checks
	I0617 12:07:39.286616  164809 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0617 12:07:39.286809  164809 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0617 12:07:39.286977  164809 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0617 12:07:39.487542  164809 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0617 12:07:39.489554  164809 out.go:204]   - Generating certificates and keys ...
	I0617 12:07:39.489665  164809 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0617 12:07:39.489732  164809 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0617 12:07:39.489855  164809 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0617 12:07:39.489969  164809 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0617 12:07:39.490088  164809 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0617 12:07:39.490187  164809 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0617 12:07:39.490274  164809 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0617 12:07:39.490386  164809 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0617 12:07:39.490508  164809 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0617 12:07:39.490643  164809 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0617 12:07:39.490750  164809 kubeadm.go:309] [certs] Using the existing "sa" key
	I0617 12:07:39.490849  164809 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0617 12:07:39.565788  164809 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0617 12:07:39.643443  164809 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0617 12:07:39.765615  164809 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0617 12:07:39.851182  164809 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0617 12:07:40.041938  164809 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0617 12:07:40.042576  164809 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0617 12:07:40.045112  164809 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0617 12:07:40.047144  164809 out.go:204]   - Booting up control plane ...
	I0617 12:07:40.047265  164809 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0617 12:07:40.047374  164809 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0617 12:07:40.047995  164809 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0617 12:07:40.070163  164809 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0617 12:07:40.071308  164809 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0617 12:07:40.071415  164809 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0617 12:07:40.204578  164809 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0617 12:07:40.204698  164809 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0617 12:07:41.210782  164809 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.0065421s
	I0617 12:07:41.210902  164809 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0617 12:07:45.713194  164809 kubeadm.go:309] [api-check] The API server is healthy after 4.501871798s
	I0617 12:07:45.735311  164809 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0617 12:07:45.760405  164809 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0617 12:07:45.795429  164809 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0617 12:07:45.795770  164809 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-152830 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0617 12:07:45.816446  164809 kubeadm.go:309] [bootstrap-token] Using token: ryfqxd.olkegn8a1unpvnbq
	I0617 12:07:45.817715  164809 out.go:204]   - Configuring RBAC rules ...
	I0617 12:07:45.817890  164809 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0617 12:07:45.826422  164809 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0617 12:07:45.852291  164809 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0617 12:07:45.867538  164809 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0617 12:07:45.880697  164809 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0617 12:07:45.887707  164809 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0617 12:07:46.120211  164809 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0617 12:07:46.593168  164809 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0617 12:07:47.119377  164809 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0617 12:07:47.120840  164809 kubeadm.go:309] 
	I0617 12:07:47.120933  164809 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0617 12:07:47.120947  164809 kubeadm.go:309] 
	I0617 12:07:47.121057  164809 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0617 12:07:47.121069  164809 kubeadm.go:309] 
	I0617 12:07:47.121123  164809 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0617 12:07:47.124361  164809 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0617 12:07:47.124443  164809 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0617 12:07:47.124464  164809 kubeadm.go:309] 
	I0617 12:07:47.124538  164809 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0617 12:07:47.124550  164809 kubeadm.go:309] 
	I0617 12:07:47.124607  164809 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0617 12:07:47.124617  164809 kubeadm.go:309] 
	I0617 12:07:47.124724  164809 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0617 12:07:47.124838  164809 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0617 12:07:47.124938  164809 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0617 12:07:47.124949  164809 kubeadm.go:309] 
	I0617 12:07:47.125085  164809 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0617 12:07:47.125191  164809 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0617 12:07:47.125203  164809 kubeadm.go:309] 
	I0617 12:07:47.125343  164809 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ryfqxd.olkegn8a1unpvnbq \
	I0617 12:07:47.125479  164809 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a750c130b3df91ed6d57229f5a5d5a2ee0acd56a757f499599f368bc07dbf207 \
	I0617 12:07:47.125510  164809 kubeadm.go:309] 	--control-plane 
	I0617 12:07:47.125518  164809 kubeadm.go:309] 
	I0617 12:07:47.125616  164809 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0617 12:07:47.125627  164809 kubeadm.go:309] 
	I0617 12:07:47.125724  164809 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ryfqxd.olkegn8a1unpvnbq \
	I0617 12:07:47.125852  164809 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a750c130b3df91ed6d57229f5a5d5a2ee0acd56a757f499599f368bc07dbf207 
	I0617 12:07:47.126915  164809 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0617 12:07:47.126966  164809 cni.go:84] Creating CNI manager for ""
	I0617 12:07:47.126983  164809 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:07:47.128899  164809 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0617 12:07:47.130229  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0617 12:07:47.142301  164809 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0617 12:07:47.163380  164809 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0617 12:07:47.163500  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:47.163503  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-152830 minikube.k8s.io/updated_at=2024_06_17T12_07_47_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6 minikube.k8s.io/name=no-preload-152830 minikube.k8s.io/primary=true
	I0617 12:07:47.375089  164809 ops.go:34] apiserver oom_adj: -16
	I0617 12:07:47.375266  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:47.875477  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:48.375626  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:48.876185  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:49.375621  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:49.875597  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:50.376188  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:50.875983  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:51.375537  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:51.876321  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:52.375920  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:52.876348  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:53.375623  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:53.875369  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:54.375747  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:54.875581  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:55.376244  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:55.875866  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:56.376285  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:56.876228  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:57.375990  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:57.875392  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:58.946943  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:07:58.947220  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:07:58.947233  165698 kubeadm.go:309] 
	I0617 12:07:58.947316  165698 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0617 12:07:58.947393  165698 kubeadm.go:309] 		timed out waiting for the condition
	I0617 12:07:58.947406  165698 kubeadm.go:309] 
	I0617 12:07:58.947449  165698 kubeadm.go:309] 	This error is likely caused by:
	I0617 12:07:58.947528  165698 kubeadm.go:309] 		- The kubelet is not running
	I0617 12:07:58.947690  165698 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0617 12:07:58.947699  165698 kubeadm.go:309] 
	I0617 12:07:58.947860  165698 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0617 12:07:58.947924  165698 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0617 12:07:58.947976  165698 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0617 12:07:58.947991  165698 kubeadm.go:309] 
	I0617 12:07:58.948132  165698 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0617 12:07:58.948247  165698 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0617 12:07:58.948260  165698 kubeadm.go:309] 
	I0617 12:07:58.948406  165698 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0617 12:07:58.948539  165698 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0617 12:07:58.948639  165698 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0617 12:07:58.948740  165698 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0617 12:07:58.948750  165698 kubeadm.go:309] 
	I0617 12:07:58.949270  165698 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0617 12:07:58.949403  165698 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0617 12:07:58.949508  165698 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0617 12:07:58.949630  165698 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
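The wait-control-plane failure above reduces to the kubelet never answering its local health probe on port 10248. The commands kubeadm recommends in this output can be run on the affected node in one pass before the retry that follows; a minimal sketch, using only the commands already named above (the trailing tail is only there to keep the journal output short):

	# Probe the kubelet health endpoint kubeadm polls during wait-control-plane
	curl -sSL http://localhost:10248/healthz; echo
	# If the connection is refused, check the kubelet service and its recent logs
	systemctl status kubelet
	journalctl -xeu kubelet | tail -n 100
	# List any control-plane containers cri-o managed to start
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause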
	
	I0617 12:07:58.949694  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0617 12:07:59.418622  165698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:07:59.435367  165698 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:07:59.449365  165698 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:07:59.449384  165698 kubeadm.go:156] found existing configuration files:
	
	I0617 12:07:59.449430  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:07:59.461411  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:07:59.461478  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:07:59.471262  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:07:59.480591  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:07:59.480640  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:07:59.490152  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:07:59.499248  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:07:59.499300  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:07:59.508891  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:07:59.518114  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:07:59.518152  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
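The grep/rm pairs above are minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and the exit status 2 logged here simply means the file does not exist yet. A minimal shell sketch of the same check, assuming the https://control-plane.minikube.internal:8443 endpoint used in this run:

	# Drop any kubeconfig that does not reference the expected control-plane endpoint
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"
	  fi
	done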
	I0617 12:07:59.528190  165698 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0617 12:07:59.592831  165698 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0617 12:07:59.592949  165698 kubeadm.go:309] [preflight] Running pre-flight checks
	I0617 12:07:59.752802  165698 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0617 12:07:59.752947  165698 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0617 12:07:59.753079  165698 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0617 12:07:59.984221  165698 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0617 12:07:58.375522  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:58.876221  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:59.375941  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:59.875924  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:08:00.063788  164809 kubeadm.go:1107] duration metric: took 12.900376954s to wait for elevateKubeSystemPrivileges
	W0617 12:08:00.063860  164809 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0617 12:08:00.063871  164809 kubeadm.go:393] duration metric: took 5m11.831587226s to StartCluster
	I0617 12:08:00.063895  164809 settings.go:142] acquiring lock: {Name:mkf6da6d5dcdf32cef469c2b75da17d11fa1e39e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:08:00.063996  164809 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 12:08:00.066593  164809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/kubeconfig: {Name:mkf81bd1831c0194f784e5c176b265c5061bea5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:08:00.066922  164809 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 12:08:00.068556  164809 out.go:177] * Verifying Kubernetes components...
	I0617 12:08:00.067029  164809 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0617 12:08:00.067131  164809 config.go:182] Loaded profile config "no-preload-152830": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:08:00.069969  164809 addons.go:69] Setting storage-provisioner=true in profile "no-preload-152830"
	I0617 12:08:00.069983  164809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:08:00.069992  164809 addons.go:69] Setting metrics-server=true in profile "no-preload-152830"
	I0617 12:08:00.070015  164809 addons.go:234] Setting addon metrics-server=true in "no-preload-152830"
	I0617 12:08:00.070014  164809 addons.go:234] Setting addon storage-provisioner=true in "no-preload-152830"
	W0617 12:08:00.070021  164809 addons.go:243] addon metrics-server should already be in state true
	W0617 12:08:00.070024  164809 addons.go:243] addon storage-provisioner should already be in state true
	I0617 12:08:00.070055  164809 host.go:66] Checking if "no-preload-152830" exists ...
	I0617 12:08:00.070057  164809 host.go:66] Checking if "no-preload-152830" exists ...
	I0617 12:08:00.069984  164809 addons.go:69] Setting default-storageclass=true in profile "no-preload-152830"
	I0617 12:08:00.070116  164809 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-152830"
	I0617 12:08:00.070426  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.070428  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.070443  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.070451  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.070475  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.070494  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.088451  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36453
	I0617 12:08:00.089105  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.089673  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.089700  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.090074  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.090673  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.090723  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.091118  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33445
	I0617 12:08:00.091150  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44157
	I0617 12:08:00.091756  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.091880  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.092306  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.092327  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.092470  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.092487  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.093006  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.093081  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.093169  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetState
	I0617 12:08:00.093683  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.093722  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.096819  164809 addons.go:234] Setting addon default-storageclass=true in "no-preload-152830"
	W0617 12:08:00.096839  164809 addons.go:243] addon default-storageclass should already be in state true
	I0617 12:08:00.096868  164809 host.go:66] Checking if "no-preload-152830" exists ...
	I0617 12:08:00.097223  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.097252  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.110063  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33623
	I0617 12:08:00.110843  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.111489  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.111509  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.112419  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.112633  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetState
	I0617 12:08:00.112859  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39555
	I0617 12:08:00.113245  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.113927  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.113946  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.114470  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.114758  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:08:00.116377  164809 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0617 12:08:00.115146  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.117266  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37965
	I0617 12:08:00.117647  164809 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0617 12:08:00.117663  164809 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0617 12:08:00.117674  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.117681  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:08:00.118504  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.119076  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.119091  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.119440  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.119755  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetState
	I0617 12:08:00.121396  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.121620  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:08:00.123146  164809 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:07:59.986165  165698 out.go:204]   - Generating certificates and keys ...
	I0617 12:07:59.986270  165698 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0617 12:07:59.986391  165698 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0617 12:07:59.986522  165698 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0617 12:07:59.986606  165698 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0617 12:07:59.986717  165698 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0617 12:07:59.986795  165698 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0617 12:07:59.986887  165698 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0617 12:07:59.986972  165698 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0617 12:07:59.987081  165698 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0617 12:07:59.987191  165698 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0617 12:07:59.987250  165698 kubeadm.go:309] [certs] Using the existing "sa" key
	I0617 12:07:59.987331  165698 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0617 12:08:00.155668  165698 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0617 12:08:00.303780  165698 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0617 12:08:00.369907  165698 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0617 12:08:00.506550  165698 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0617 12:08:00.529943  165698 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0617 12:08:00.531684  165698 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0617 12:08:00.531756  165698 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0617 12:08:00.667972  165698 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0617 12:08:00.122003  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:08:00.122146  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:08:00.124748  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.124895  164809 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:08:00.124914  164809 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0617 12:08:00.124934  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:08:00.124957  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:08:00.125142  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:08:00.125446  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:08:00.128559  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.128991  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:08:00.129011  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.129239  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:08:00.129434  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:08:00.129537  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:08:00.129640  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:08:00.142435  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39073
	I0617 12:08:00.142915  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.143550  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.143583  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.143946  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.144168  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetState
	I0617 12:08:00.145972  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:08:00.146165  164809 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0617 12:08:00.146178  164809 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0617 12:08:00.146196  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:08:00.149316  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.149720  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:08:00.149743  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.149926  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:08:00.150106  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:08:00.150273  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:08:00.150434  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:08:00.294731  164809 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:08:00.317727  164809 node_ready.go:35] waiting up to 6m0s for node "no-preload-152830" to be "Ready" ...
	I0617 12:08:00.346507  164809 node_ready.go:49] node "no-preload-152830" has status "Ready":"True"
	I0617 12:08:00.346533  164809 node_ready.go:38] duration metric: took 28.776898ms for node "no-preload-152830" to be "Ready" ...
	I0617 12:08:00.346544  164809 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:08:00.404097  164809 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gjt84" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:00.412303  164809 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0617 12:08:00.412325  164809 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0617 12:08:00.415269  164809 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:08:00.438024  164809 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0617 12:08:00.514528  164809 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0617 12:08:00.514561  164809 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0617 12:08:00.629109  164809 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:08:00.629141  164809 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0617 12:08:00.677084  164809 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:08:01.113979  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.114007  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.114432  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.114445  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.114507  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.114526  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.114536  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.114846  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.114866  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.117124  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.117141  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.117437  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.117457  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.117478  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.117496  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.117508  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.117821  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.117858  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.117882  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.125648  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.125668  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.125998  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.126020  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.126030  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.325217  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.325242  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.325579  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.325633  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.325669  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.325669  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.325682  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.325960  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.325977  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.326007  164809 addons.go:475] Verifying addon metrics-server=true in "no-preload-152830"
	I0617 12:08:01.326037  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.327744  164809 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0617 12:08:00.671036  165698 out.go:204]   - Booting up control plane ...
	I0617 12:08:00.671171  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0617 12:08:00.677241  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0617 12:08:00.678999  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0617 12:08:00.681119  165698 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0617 12:08:00.684535  165698 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0617 12:08:01.329155  164809 addons.go:510] duration metric: took 1.262127108s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0617 12:08:02.425731  164809 pod_ready.go:102] pod "coredns-7db6d8ff4d-gjt84" in "kube-system" namespace has status "Ready":"False"
	I0617 12:08:03.910467  164809 pod_ready.go:92] pod "coredns-7db6d8ff4d-gjt84" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:03.910494  164809 pod_ready.go:81] duration metric: took 3.506370946s for pod "coredns-7db6d8ff4d-gjt84" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.910508  164809 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vz7dg" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.916309  164809 pod_ready.go:92] pod "coredns-7db6d8ff4d-vz7dg" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:03.916331  164809 pod_ready.go:81] duration metric: took 5.814812ms for pod "coredns-7db6d8ff4d-vz7dg" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.916340  164809 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.920834  164809 pod_ready.go:92] pod "etcd-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:03.920862  164809 pod_ready.go:81] duration metric: took 4.51438ms for pod "etcd-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.920874  164809 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.924955  164809 pod_ready.go:92] pod "kube-apiserver-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:03.924973  164809 pod_ready.go:81] duration metric: took 4.09301ms for pod "kube-apiserver-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.924982  164809 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.929301  164809 pod_ready.go:92] pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:03.929318  164809 pod_ready.go:81] duration metric: took 4.33061ms for pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.929326  164809 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:04.308546  164809 pod_ready.go:92] pod "kube-scheduler-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:04.308570  164809 pod_ready.go:81] duration metric: took 379.237147ms for pod "kube-scheduler-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:04.308578  164809 pod_ready.go:38] duration metric: took 3.962022714s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:08:04.308594  164809 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:08:04.308644  164809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:08:04.327383  164809 api_server.go:72] duration metric: took 4.260420928s to wait for apiserver process to appear ...
	I0617 12:08:04.327408  164809 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:08:04.327426  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:08:04.332321  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 200:
	ok
	I0617 12:08:04.333390  164809 api_server.go:141] control plane version: v1.30.1
	I0617 12:08:04.333412  164809 api_server.go:131] duration metric: took 5.998312ms to wait for apiserver health ...
	I0617 12:08:04.333420  164809 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:08:04.512267  164809 system_pods.go:59] 9 kube-system pods found
	I0617 12:08:04.512298  164809 system_pods.go:61] "coredns-7db6d8ff4d-gjt84" [979c7339-3a4c-4bc8-8586-4d9da42339ae] Running
	I0617 12:08:04.512302  164809 system_pods.go:61] "coredns-7db6d8ff4d-vz7dg" [53c5188e-bc44-4aed-a989-ef3e2379c27b] Running
	I0617 12:08:04.512306  164809 system_pods.go:61] "etcd-no-preload-152830" [2b82d709-0776-470a-a538-f132b84be2e0] Running
	I0617 12:08:04.512310  164809 system_pods.go:61] "kube-apiserver-no-preload-152830" [e40c7c7b-b029-4f65-ac36-f4ff95eabc23] Running
	I0617 12:08:04.512313  164809 system_pods.go:61] "kube-controller-manager-no-preload-152830" [c2adec58-05a4-4993-b9a3-28f9ef519a63] Running
	I0617 12:08:04.512317  164809 system_pods.go:61] "kube-proxy-6c4hm" [a9830236-af96-437f-ad07-494b25f1a90e] Running
	I0617 12:08:04.512319  164809 system_pods.go:61] "kube-scheduler-no-preload-152830" [876671da-097b-43c1-9055-95c2ed7620aa] Running
	I0617 12:08:04.512325  164809 system_pods.go:61] "metrics-server-569cc877fc-zllzk" [e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:08:04.512329  164809 system_pods.go:61] "storage-provisioner" [b6cc7cdc-43f4-40c4-a202-5674fcdcedd0] Running
	I0617 12:08:04.512340  164809 system_pods.go:74] duration metric: took 178.914377ms to wait for pod list to return data ...
	I0617 12:08:04.512347  164809 default_sa.go:34] waiting for default service account to be created ...
	I0617 12:08:04.707834  164809 default_sa.go:45] found service account: "default"
	I0617 12:08:04.707874  164809 default_sa.go:55] duration metric: took 195.518331ms for default service account to be created ...
	I0617 12:08:04.707886  164809 system_pods.go:116] waiting for k8s-apps to be running ...
	I0617 12:08:04.916143  164809 system_pods.go:86] 9 kube-system pods found
	I0617 12:08:04.916173  164809 system_pods.go:89] "coredns-7db6d8ff4d-gjt84" [979c7339-3a4c-4bc8-8586-4d9da42339ae] Running
	I0617 12:08:04.916178  164809 system_pods.go:89] "coredns-7db6d8ff4d-vz7dg" [53c5188e-bc44-4aed-a989-ef3e2379c27b] Running
	I0617 12:08:04.916183  164809 system_pods.go:89] "etcd-no-preload-152830" [2b82d709-0776-470a-a538-f132b84be2e0] Running
	I0617 12:08:04.916187  164809 system_pods.go:89] "kube-apiserver-no-preload-152830" [e40c7c7b-b029-4f65-ac36-f4ff95eabc23] Running
	I0617 12:08:04.916191  164809 system_pods.go:89] "kube-controller-manager-no-preload-152830" [c2adec58-05a4-4993-b9a3-28f9ef519a63] Running
	I0617 12:08:04.916195  164809 system_pods.go:89] "kube-proxy-6c4hm" [a9830236-af96-437f-ad07-494b25f1a90e] Running
	I0617 12:08:04.916199  164809 system_pods.go:89] "kube-scheduler-no-preload-152830" [876671da-097b-43c1-9055-95c2ed7620aa] Running
	I0617 12:08:04.916211  164809 system_pods.go:89] "metrics-server-569cc877fc-zllzk" [e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:08:04.916219  164809 system_pods.go:89] "storage-provisioner" [b6cc7cdc-43f4-40c4-a202-5674fcdcedd0] Running
	I0617 12:08:04.916231  164809 system_pods.go:126] duration metric: took 208.336851ms to wait for k8s-apps to be running ...
	I0617 12:08:04.916245  164809 system_svc.go:44] waiting for kubelet service to be running ....
	I0617 12:08:04.916306  164809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:08:04.933106  164809 system_svc.go:56] duration metric: took 16.850122ms WaitForService to wait for kubelet
	I0617 12:08:04.933135  164809 kubeadm.go:576] duration metric: took 4.866178671s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 12:08:04.933159  164809 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:08:05.108094  164809 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:08:05.108120  164809 node_conditions.go:123] node cpu capacity is 2
	I0617 12:08:05.108133  164809 node_conditions.go:105] duration metric: took 174.968414ms to run NodePressure ...
	I0617 12:08:05.108148  164809 start.go:240] waiting for startup goroutines ...
	I0617 12:08:05.108160  164809 start.go:245] waiting for cluster config update ...
	I0617 12:08:05.108173  164809 start.go:254] writing updated cluster config ...
	I0617 12:08:05.108496  164809 ssh_runner.go:195] Run: rm -f paused
	I0617 12:08:05.160610  164809 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0617 12:08:05.162777  164809 out.go:177] * Done! kubectl is now configured to use "no-preload-152830" cluster and "default" namespace by default
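At this point the no-preload-152830 profile is up, with metrics-server-569cc877fc-zllzk still Pending. The readiness checks the harness ran above can be reproduced from the host; a rough sketch, assuming the 192.168.39.173:8443 endpoint reported in this run and the addon's usual k8s-app=metrics-server label:

	# Same apiserver healthz probe minikube performed (expects a plain "ok")
	curl -k https://192.168.39.173:8443/healthz; echo
	# Mirror the pod_ready wait for the remaining kube-system workload
	kubectl --context no-preload-152830 -n kube-system wait pod \
	  -l k8s-app=metrics-server --for=condition=Ready --timeout=6m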
	I0617 12:08:40.686610  165698 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0617 12:08:40.686950  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:08:40.687194  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:08:45.687594  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:08:45.687820  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:08:55.688285  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:08:55.688516  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:09:15.689306  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:09:15.689556  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:09:55.688872  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:09:55.689162  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:09:55.689206  165698 kubeadm.go:309] 
	I0617 12:09:55.689284  165698 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0617 12:09:55.689342  165698 kubeadm.go:309] 		timed out waiting for the condition
	I0617 12:09:55.689354  165698 kubeadm.go:309] 
	I0617 12:09:55.689418  165698 kubeadm.go:309] 	This error is likely caused by:
	I0617 12:09:55.689480  165698 kubeadm.go:309] 		- The kubelet is not running
	I0617 12:09:55.689632  165698 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0617 12:09:55.689657  165698 kubeadm.go:309] 
	I0617 12:09:55.689791  165698 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0617 12:09:55.689844  165698 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0617 12:09:55.689916  165698 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0617 12:09:55.689926  165698 kubeadm.go:309] 
	I0617 12:09:55.690059  165698 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0617 12:09:55.690140  165698 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0617 12:09:55.690159  165698 kubeadm.go:309] 
	I0617 12:09:55.690258  165698 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0617 12:09:55.690343  165698 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0617 12:09:55.690434  165698 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0617 12:09:55.690530  165698 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0617 12:09:55.690546  165698 kubeadm.go:309] 
	I0617 12:09:55.691495  165698 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0617 12:09:55.691595  165698 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0617 12:09:55.691708  165698 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0617 12:09:55.691787  165698 kubeadm.go:393] duration metric: took 7m57.151326537s to StartCluster
	I0617 12:09:55.691844  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:09:55.691904  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:09:55.746514  165698 cri.go:89] found id: ""
	I0617 12:09:55.746550  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.746563  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:09:55.746572  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:09:55.746636  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:09:55.789045  165698 cri.go:89] found id: ""
	I0617 12:09:55.789083  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.789095  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:09:55.789103  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:09:55.789169  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:09:55.829492  165698 cri.go:89] found id: ""
	I0617 12:09:55.829533  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.829542  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:09:55.829547  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:09:55.829614  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:09:55.865213  165698 cri.go:89] found id: ""
	I0617 12:09:55.865246  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.865262  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:09:55.865267  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:09:55.865318  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:09:55.904067  165698 cri.go:89] found id: ""
	I0617 12:09:55.904102  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.904113  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:09:55.904122  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:09:55.904187  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:09:55.938441  165698 cri.go:89] found id: ""
	I0617 12:09:55.938471  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.938478  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:09:55.938487  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:09:55.938538  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:09:55.975669  165698 cri.go:89] found id: ""
	I0617 12:09:55.975710  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.975723  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:09:55.975731  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:09:55.975804  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:09:56.015794  165698 cri.go:89] found id: ""
	I0617 12:09:56.015826  165698 logs.go:276] 0 containers: []
	W0617 12:09:56.015837  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:09:56.015851  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:09:56.015868  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:09:56.095533  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:09:56.095557  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:09:56.095573  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:09:56.220817  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:09:56.220857  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:09:56.261470  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:09:56.261507  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:09:56.325626  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:09:56.325673  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0617 12:09:56.345438  165698 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0617 12:09:56.345491  165698 out.go:239] * 
	W0617 12:09:56.345606  165698 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0617 12:09:56.345635  165698 out.go:239] * 
	W0617 12:09:56.346583  165698 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 12:09:56.349928  165698 out.go:177] 
	W0617 12:09:56.351067  165698 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0617 12:09:56.351127  165698 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0617 12:09:56.351157  165698 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0617 12:09:56.352487  165698 out.go:177] 
	
	
	==> CRI-O <==
	Jun 17 12:17:07 no-preload-152830 crio[737]: time="2024-06-17 12:17:07.184908526Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718626627184885919,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bc3abf49-af65-4832-8366-3581772ddce9 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:17:07 no-preload-152830 crio[737]: time="2024-06-17 12:17:07.185741054Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bb7a43b3-ebd1-4b06-a99d-0b2f27b790e4 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:17:07 no-preload-152830 crio[737]: time="2024-06-17 12:17:07.185795424Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bb7a43b3-ebd1-4b06-a99d-0b2f27b790e4 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:17:07 no-preload-152830 crio[737]: time="2024-06-17 12:17:07.186009581Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2eb6e871393848fac8fd1b5630ae133dfbd8784261c95335263a2a2e9aeb31ed,PodSandboxId:0e2620a06aafec68ec8cbf6b343abffa70fb9085f375b6885f133113b68cec97,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718626082573824689,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gjt84,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 979c7339-3a4c-4bc8-8586-4d9da42339ae,},Annotations:map[string]string{io.kubernetes.container.hash: 17100608,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09220cf548ec25e3fa38ba0ac745612184325366eca32d4f17de4c2baa2094ee,PodSandboxId:5d8599f2018c313440ec042d0d1ddc63aa32ba7c86cfee77089ff66c713cd16e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718626082513972888,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vz7dg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53c5188e-bc44-4aed-a989-ef3e2379c27b,},Annotations:map[string]string{io.kubernetes.container.hash: ec0598bd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bded990e0ce1c6be7f1b1465276f4a8754154adf288c943ec48740d65f95d32,PodSandboxId:701951a57908ba8b3906dfde5778973e38e10282bc7f0d512c66261129dc2ee4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718626081861791047,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6cc7cdc-43f4-40c4-a202-5674fcdcedd0,},Annotations:map[string]string{io.kubernetes.container.hash: 5fca2510,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d420ac4be70e18bcc188db3f69ef03797656c819429b0bc4fa68a2cf25cba17,PodSandboxId:c97bc08c0fbb373a7790949cab16859861f29efba5664702ae197c1fd54eeed3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718626081807370163,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6c4hm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9830236-af96-437f-ad07-494b25f1a90e,},Annotations:map[string]string{io.kubernetes.container.hash: 15e64a6c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b82613491050410755d245f7ea0fd61cc70f9f438300c01e6a12f663ad434eee,PodSandboxId:75a277d3438b2bc2eda6aceeb51ff775534afbfc7373455d72f0a6c72d12a581,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718626061439689334,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-152830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ee54b88856008800f3a5a411b09cf4,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5833a84b69a3ed88b016a93eab2b3859871cb27f7331ae2296a7db6fd65e96f7,PodSandboxId:887af8887922b1719e31d347995aef73bdf1e04b1fbf76b5face2c4b630c5bed,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718626061432583941,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-152830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7355a3a6d39f3ad62baaaf745eac603,},Annotations:map[string]string{io.kubernetes.container.hash: 99151ff0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf31b741f07971feda2bdee30e1b474c535befaa7310f7e6f31405b62526b2af,PodSandboxId:85d99e7a3ceeb18acc168cc80fcda42788569272ad9d1c7209bbad3774ec5260,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718626061367595982,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-152830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dde261e0cfb643c1c4d3ca5c2bc383c1,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de4bddebe0087f3f022dfeafa27d6746d6447687007d3334d4251031b8f6aabc,PodSandboxId:a4cd7f7d3051c333c9710faa8ea0b62dd4aff09c8e24d86f314398d5f79c06c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718626061280190306,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-152830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baa2096079eb9eb9c1a91e2265966e2,},Annotations:map[string]string{io.kubernetes.container.hash: 507fdc08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bb7a43b3-ebd1-4b06-a99d-0b2f27b790e4 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:17:07 no-preload-152830 crio[737]: time="2024-06-17 12:17:07.226668196Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=28cfc6fb-9fe3-4684-9dcd-147764031c9d name=/runtime.v1.RuntimeService/Version
	Jun 17 12:17:07 no-preload-152830 crio[737]: time="2024-06-17 12:17:07.226763386Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=28cfc6fb-9fe3-4684-9dcd-147764031c9d name=/runtime.v1.RuntimeService/Version
	Jun 17 12:17:07 no-preload-152830 crio[737]: time="2024-06-17 12:17:07.228066776Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=af832ff7-c499-41bd-b419-afc71220330c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:17:07 no-preload-152830 crio[737]: time="2024-06-17 12:17:07.228863234Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718626627228835663,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af832ff7-c499-41bd-b419-afc71220330c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:17:07 no-preload-152830 crio[737]: time="2024-06-17 12:17:07.229728908Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0cbeb55-0576-4209-bf1e-a8110ef35465 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:17:07 no-preload-152830 crio[737]: time="2024-06-17 12:17:07.229892241Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0cbeb55-0576-4209-bf1e-a8110ef35465 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:17:07 no-preload-152830 crio[737]: time="2024-06-17 12:17:07.230117723Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2eb6e871393848fac8fd1b5630ae133dfbd8784261c95335263a2a2e9aeb31ed,PodSandboxId:0e2620a06aafec68ec8cbf6b343abffa70fb9085f375b6885f133113b68cec97,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718626082573824689,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gjt84,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 979c7339-3a4c-4bc8-8586-4d9da42339ae,},Annotations:map[string]string{io.kubernetes.container.hash: 17100608,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09220cf548ec25e3fa38ba0ac745612184325366eca32d4f17de4c2baa2094ee,PodSandboxId:5d8599f2018c313440ec042d0d1ddc63aa32ba7c86cfee77089ff66c713cd16e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718626082513972888,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vz7dg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53c5188e-bc44-4aed-a989-ef3e2379c27b,},Annotations:map[string]string{io.kubernetes.container.hash: ec0598bd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bded990e0ce1c6be7f1b1465276f4a8754154adf288c943ec48740d65f95d32,PodSandboxId:701951a57908ba8b3906dfde5778973e38e10282bc7f0d512c66261129dc2ee4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718626081861791047,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6cc7cdc-43f4-40c4-a202-5674fcdcedd0,},Annotations:map[string]string{io.kubernetes.container.hash: 5fca2510,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d420ac4be70e18bcc188db3f69ef03797656c819429b0bc4fa68a2cf25cba17,PodSandboxId:c97bc08c0fbb373a7790949cab16859861f29efba5664702ae197c1fd54eeed3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718626081807370163,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6c4hm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9830236-af96-437f-ad07-494b25f1a90e,},Annotations:map[string]string{io.kubernetes.container.hash: 15e64a6c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b82613491050410755d245f7ea0fd61cc70f9f438300c01e6a12f663ad434eee,PodSandboxId:75a277d3438b2bc2eda6aceeb51ff775534afbfc7373455d72f0a6c72d12a581,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718626061439689334,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-152830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ee54b88856008800f3a5a411b09cf4,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5833a84b69a3ed88b016a93eab2b3859871cb27f7331ae2296a7db6fd65e96f7,PodSandboxId:887af8887922b1719e31d347995aef73bdf1e04b1fbf76b5face2c4b630c5bed,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718626061432583941,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-152830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7355a3a6d39f3ad62baaaf745eac603,},Annotations:map[string]string{io.kubernetes.container.hash: 99151ff0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf31b741f07971feda2bdee30e1b474c535befaa7310f7e6f31405b62526b2af,PodSandboxId:85d99e7a3ceeb18acc168cc80fcda42788569272ad9d1c7209bbad3774ec5260,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718626061367595982,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-152830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dde261e0cfb643c1c4d3ca5c2bc383c1,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de4bddebe0087f3f022dfeafa27d6746d6447687007d3334d4251031b8f6aabc,PodSandboxId:a4cd7f7d3051c333c9710faa8ea0b62dd4aff09c8e24d86f314398d5f79c06c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718626061280190306,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-152830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baa2096079eb9eb9c1a91e2265966e2,},Annotations:map[string]string{io.kubernetes.container.hash: 507fdc08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c0cbeb55-0576-4209-bf1e-a8110ef35465 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:17:07 no-preload-152830 crio[737]: time="2024-06-17 12:17:07.272430513Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4f2e2d78-dd86-4ab9-92b5-8899bcc8ceae name=/runtime.v1.RuntimeService/Version
	Jun 17 12:17:07 no-preload-152830 crio[737]: time="2024-06-17 12:17:07.272621408Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4f2e2d78-dd86-4ab9-92b5-8899bcc8ceae name=/runtime.v1.RuntimeService/Version
	Jun 17 12:17:07 no-preload-152830 crio[737]: time="2024-06-17 12:17:07.274493908Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=161086dd-8652-4644-a73c-90214b8886e1 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:17:07 no-preload-152830 crio[737]: time="2024-06-17 12:17:07.274928386Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718626627274906606,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=161086dd-8652-4644-a73c-90214b8886e1 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:17:07 no-preload-152830 crio[737]: time="2024-06-17 12:17:07.275672113Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe52934d-f871-472e-b77f-626502b93547 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:17:07 no-preload-152830 crio[737]: time="2024-06-17 12:17:07.275723321Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe52934d-f871-472e-b77f-626502b93547 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:17:07 no-preload-152830 crio[737]: time="2024-06-17 12:17:07.275890631Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2eb6e871393848fac8fd1b5630ae133dfbd8784261c95335263a2a2e9aeb31ed,PodSandboxId:0e2620a06aafec68ec8cbf6b343abffa70fb9085f375b6885f133113b68cec97,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718626082573824689,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gjt84,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 979c7339-3a4c-4bc8-8586-4d9da42339ae,},Annotations:map[string]string{io.kubernetes.container.hash: 17100608,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09220cf548ec25e3fa38ba0ac745612184325366eca32d4f17de4c2baa2094ee,PodSandboxId:5d8599f2018c313440ec042d0d1ddc63aa32ba7c86cfee77089ff66c713cd16e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718626082513972888,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vz7dg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53c5188e-bc44-4aed-a989-ef3e2379c27b,},Annotations:map[string]string{io.kubernetes.container.hash: ec0598bd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bded990e0ce1c6be7f1b1465276f4a8754154adf288c943ec48740d65f95d32,PodSandboxId:701951a57908ba8b3906dfde5778973e38e10282bc7f0d512c66261129dc2ee4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718626081861791047,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6cc7cdc-43f4-40c4-a202-5674fcdcedd0,},Annotations:map[string]string{io.kubernetes.container.hash: 5fca2510,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d420ac4be70e18bcc188db3f69ef03797656c819429b0bc4fa68a2cf25cba17,PodSandboxId:c97bc08c0fbb373a7790949cab16859861f29efba5664702ae197c1fd54eeed3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718626081807370163,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6c4hm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9830236-af96-437f-ad07-494b25f1a90e,},Annotations:map[string]string{io.kubernetes.container.hash: 15e64a6c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b82613491050410755d245f7ea0fd61cc70f9f438300c01e6a12f663ad434eee,PodSandboxId:75a277d3438b2bc2eda6aceeb51ff775534afbfc7373455d72f0a6c72d12a581,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718626061439689334,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-152830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ee54b88856008800f3a5a411b09cf4,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5833a84b69a3ed88b016a93eab2b3859871cb27f7331ae2296a7db6fd65e96f7,PodSandboxId:887af8887922b1719e31d347995aef73bdf1e04b1fbf76b5face2c4b630c5bed,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718626061432583941,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-152830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7355a3a6d39f3ad62baaaf745eac603,},Annotations:map[string]string{io.kubernetes.container.hash: 99151ff0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf31b741f07971feda2bdee30e1b474c535befaa7310f7e6f31405b62526b2af,PodSandboxId:85d99e7a3ceeb18acc168cc80fcda42788569272ad9d1c7209bbad3774ec5260,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718626061367595982,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-152830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dde261e0cfb643c1c4d3ca5c2bc383c1,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de4bddebe0087f3f022dfeafa27d6746d6447687007d3334d4251031b8f6aabc,PodSandboxId:a4cd7f7d3051c333c9710faa8ea0b62dd4aff09c8e24d86f314398d5f79c06c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718626061280190306,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-152830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baa2096079eb9eb9c1a91e2265966e2,},Annotations:map[string]string{io.kubernetes.container.hash: 507fdc08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fe52934d-f871-472e-b77f-626502b93547 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:17:07 no-preload-152830 crio[737]: time="2024-06-17 12:17:07.313801879Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=defd0aea-6cb4-4425-9dd6-e8918d66b5a9 name=/runtime.v1.RuntimeService/Version
	Jun 17 12:17:07 no-preload-152830 crio[737]: time="2024-06-17 12:17:07.313877009Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=defd0aea-6cb4-4425-9dd6-e8918d66b5a9 name=/runtime.v1.RuntimeService/Version
	Jun 17 12:17:07 no-preload-152830 crio[737]: time="2024-06-17 12:17:07.315153858Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7b950cd4-efd9-4909-91cd-005ee1021e50 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:17:07 no-preload-152830 crio[737]: time="2024-06-17 12:17:07.315754847Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718626627315711539,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b950cd4-efd9-4909-91cd-005ee1021e50 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:17:07 no-preload-152830 crio[737]: time="2024-06-17 12:17:07.316226253Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=94569cde-859a-430c-b064-eb525fea9bfe name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:17:07 no-preload-152830 crio[737]: time="2024-06-17 12:17:07.316274408Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=94569cde-859a-430c-b064-eb525fea9bfe name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:17:07 no-preload-152830 crio[737]: time="2024-06-17 12:17:07.316452055Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2eb6e871393848fac8fd1b5630ae133dfbd8784261c95335263a2a2e9aeb31ed,PodSandboxId:0e2620a06aafec68ec8cbf6b343abffa70fb9085f375b6885f133113b68cec97,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718626082573824689,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gjt84,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 979c7339-3a4c-4bc8-8586-4d9da42339ae,},Annotations:map[string]string{io.kubernetes.container.hash: 17100608,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09220cf548ec25e3fa38ba0ac745612184325366eca32d4f17de4c2baa2094ee,PodSandboxId:5d8599f2018c313440ec042d0d1ddc63aa32ba7c86cfee77089ff66c713cd16e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718626082513972888,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vz7dg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53c5188e-bc44-4aed-a989-ef3e2379c27b,},Annotations:map[string]string{io.kubernetes.container.hash: ec0598bd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bded990e0ce1c6be7f1b1465276f4a8754154adf288c943ec48740d65f95d32,PodSandboxId:701951a57908ba8b3906dfde5778973e38e10282bc7f0d512c66261129dc2ee4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718626081861791047,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6cc7cdc-43f4-40c4-a202-5674fcdcedd0,},Annotations:map[string]string{io.kubernetes.container.hash: 5fca2510,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d420ac4be70e18bcc188db3f69ef03797656c819429b0bc4fa68a2cf25cba17,PodSandboxId:c97bc08c0fbb373a7790949cab16859861f29efba5664702ae197c1fd54eeed3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718626081807370163,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6c4hm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9830236-af96-437f-ad07-494b25f1a90e,},Annotations:map[string]string{io.kubernetes.container.hash: 15e64a6c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b82613491050410755d245f7ea0fd61cc70f9f438300c01e6a12f663ad434eee,PodSandboxId:75a277d3438b2bc2eda6aceeb51ff775534afbfc7373455d72f0a6c72d12a581,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718626061439689334,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-152830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ee54b88856008800f3a5a411b09cf4,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5833a84b69a3ed88b016a93eab2b3859871cb27f7331ae2296a7db6fd65e96f7,PodSandboxId:887af8887922b1719e31d347995aef73bdf1e04b1fbf76b5face2c4b630c5bed,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718626061432583941,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-152830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7355a3a6d39f3ad62baaaf745eac603,},Annotations:map[string]string{io.kubernetes.container.hash: 99151ff0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf31b741f07971feda2bdee30e1b474c535befaa7310f7e6f31405b62526b2af,PodSandboxId:85d99e7a3ceeb18acc168cc80fcda42788569272ad9d1c7209bbad3774ec5260,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718626061367595982,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-152830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dde261e0cfb643c1c4d3ca5c2bc383c1,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de4bddebe0087f3f022dfeafa27d6746d6447687007d3334d4251031b8f6aabc,PodSandboxId:a4cd7f7d3051c333c9710faa8ea0b62dd4aff09c8e24d86f314398d5f79c06c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718626061280190306,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-152830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baa2096079eb9eb9c1a91e2265966e2,},Annotations:map[string]string{io.kubernetes.container.hash: 507fdc08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=94569cde-859a-430c-b064-eb525fea9bfe name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2eb6e87139384       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   0e2620a06aafe       coredns-7db6d8ff4d-gjt84
	09220cf548ec2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   5d8599f2018c3       coredns-7db6d8ff4d-vz7dg
	9bded990e0ce1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   701951a57908b       storage-provisioner
	4d420ac4be70e       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   9 minutes ago       Running             kube-proxy                0                   c97bc08c0fbb3       kube-proxy-6c4hm
	b826134910504       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   9 minutes ago       Running             kube-controller-manager   2                   75a277d3438b2       kube-controller-manager-no-preload-152830
	5833a84b69a3e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   887af8887922b       etcd-no-preload-152830
	bf31b741f0797       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   9 minutes ago       Running             kube-scheduler            2                   85d99e7a3ceeb       kube-scheduler-no-preload-152830
	de4bddebe0087       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   9 minutes ago       Running             kube-apiserver            2                   a4cd7f7d3051c       kube-apiserver-no-preload-152830
	
	
	==> coredns [09220cf548ec25e3fa38ba0ac745612184325366eca32d4f17de4c2baa2094ee] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [2eb6e871393848fac8fd1b5630ae133dfbd8784261c95335263a2a2e9aeb31ed] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-152830
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-152830
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6
	                    minikube.k8s.io/name=no-preload-152830
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_17T12_07_47_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jun 2024 12:07:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-152830
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jun 2024 12:16:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jun 2024 12:13:12 +0000   Mon, 17 Jun 2024 12:07:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jun 2024 12:13:12 +0000   Mon, 17 Jun 2024 12:07:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jun 2024 12:13:12 +0000   Mon, 17 Jun 2024 12:07:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jun 2024 12:13:12 +0000   Mon, 17 Jun 2024 12:07:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.173
	  Hostname:    no-preload-152830
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d73a39d81ccb4dd998aa6fdf08c4cb97
	  System UUID:                d73a39d8-1ccb-4dd9-98aa-6fdf08c4cb97
	  Boot ID:                    6c7e6252-8e65-4558-aaad-d3923e6b9c9c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-gjt84                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-7db6d8ff4d-vz7dg                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-no-preload-152830                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-no-preload-152830             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-no-preload-152830    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-6c4hm                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-no-preload-152830             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-569cc877fc-zllzk              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m6s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m5s                   kube-proxy       
	  Normal  Starting                 9m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m27s (x8 over 9m27s)  kubelet          Node no-preload-152830 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m27s (x8 over 9m27s)  kubelet          Node no-preload-152830 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m27s (x7 over 9m27s)  kubelet          Node no-preload-152830 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m21s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m21s                  kubelet          Node no-preload-152830 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s                  kubelet          Node no-preload-152830 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s                  kubelet          Node no-preload-152830 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m8s                   node-controller  Node no-preload-152830 event: Registered Node no-preload-152830 in Controller
	
	
	==> dmesg <==
	[  +0.052783] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044949] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.828945] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.586167] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.669274] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.355585] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.060627] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070294] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.174710] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.133184] systemd-fstab-generator[690]: Ignoring "noauto" option for root device
	[  +0.291940] systemd-fstab-generator[720]: Ignoring "noauto" option for root device
	[ +16.459418] systemd-fstab-generator[1244]: Ignoring "noauto" option for root device
	[  +0.065629] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.831156] systemd-fstab-generator[1367]: Ignoring "noauto" option for root device
	[  +4.644779] kauditd_printk_skb: 100 callbacks suppressed
	[Jun17 12:03] kauditd_printk_skb: 89 callbacks suppressed
	[Jun17 12:07] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.150396] systemd-fstab-generator[4053]: Ignoring "noauto" option for root device
	[  +4.647290] kauditd_printk_skb: 53 callbacks suppressed
	[  +1.412647] systemd-fstab-generator[4375]: Ignoring "noauto" option for root device
	[ +13.980166] systemd-fstab-generator[4575]: Ignoring "noauto" option for root device
	[  +0.113914] kauditd_printk_skb: 14 callbacks suppressed
	[Jun17 12:09] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [5833a84b69a3ed88b016a93eab2b3859871cb27f7331ae2296a7db6fd65e96f7] <==
	{"level":"info","ts":"2024-06-17T12:07:41.732462Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db356cbc19811e0e switched to configuration voters=(15795650823209426446)"}
	{"level":"info","ts":"2024-06-17T12:07:41.732711Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a25ac6d8ed10a2a9","local-member-id":"db356cbc19811e0e","added-peer-id":"db356cbc19811e0e","added-peer-peer-urls":["https://192.168.39.173:2380"]}
	{"level":"info","ts":"2024-06-17T12:07:41.749944Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-17T12:07:41.750936Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"db356cbc19811e0e","initial-advertise-peer-urls":["https://192.168.39.173:2380"],"listen-peer-urls":["https://192.168.39.173:2380"],"advertise-client-urls":["https://192.168.39.173:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.173:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-17T12:07:41.755742Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-17T12:07:41.75803Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.173:2380"}
	{"level":"info","ts":"2024-06-17T12:07:41.765954Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.173:2380"}
	{"level":"info","ts":"2024-06-17T12:07:41.898605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db356cbc19811e0e is starting a new election at term 1"}
	{"level":"info","ts":"2024-06-17T12:07:41.89875Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db356cbc19811e0e became pre-candidate at term 1"}
	{"level":"info","ts":"2024-06-17T12:07:41.898799Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db356cbc19811e0e received MsgPreVoteResp from db356cbc19811e0e at term 1"}
	{"level":"info","ts":"2024-06-17T12:07:41.89883Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db356cbc19811e0e became candidate at term 2"}
	{"level":"info","ts":"2024-06-17T12:07:41.898855Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db356cbc19811e0e received MsgVoteResp from db356cbc19811e0e at term 2"}
	{"level":"info","ts":"2024-06-17T12:07:41.898882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db356cbc19811e0e became leader at term 2"}
	{"level":"info","ts":"2024-06-17T12:07:41.898907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: db356cbc19811e0e elected leader db356cbc19811e0e at term 2"}
	{"level":"info","ts":"2024-06-17T12:07:41.903792Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"db356cbc19811e0e","local-member-attributes":"{Name:no-preload-152830 ClientURLs:[https://192.168.39.173:2379]}","request-path":"/0/members/db356cbc19811e0e/attributes","cluster-id":"a25ac6d8ed10a2a9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-17T12:07:41.903878Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-17T12:07:41.904275Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-17T12:07:41.904592Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-17T12:07:41.911906Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a25ac6d8ed10a2a9","local-member-id":"db356cbc19811e0e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-17T12:07:41.919133Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-17T12:07:41.920231Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-17T12:07:41.912149Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.173:2379"}
	{"level":"info","ts":"2024-06-17T12:07:41.933574Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-17T12:07:41.947408Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-17T12:07:41.950568Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 12:17:07 up 14 min,  0 users,  load average: 0.15, 0.27, 0.21
	Linux no-preload-152830 5.10.207 #1 SMP Tue Jun 11 00:16:05 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [de4bddebe0087f3f022dfeafa27d6746d6447687007d3334d4251031b8f6aabc] <==
	I0617 12:11:02.141932       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0617 12:12:43.690980       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:12:43.691098       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0617 12:12:44.691628       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:12:44.691682       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0617 12:12:44.691690       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0617 12:12:44.691803       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:12:44.691902       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0617 12:12:44.692828       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0617 12:13:44.692126       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:13:44.692209       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0617 12:13:44.692218       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0617 12:13:44.693431       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:13:44.693492       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0617 12:13:44.693542       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0617 12:15:44.692595       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:15:44.692682       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0617 12:15:44.692694       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0617 12:15:44.693735       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:15:44.693875       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0617 12:15:44.693903       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [b82613491050410755d245f7ea0fd61cc70f9f438300c01e6a12f663ad434eee] <==
	I0617 12:11:30.583311       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:12:00.154306       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:12:00.596944       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:12:30.160017       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:12:30.605720       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:13:00.165706       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:13:00.617251       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:13:30.170705       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:13:30.629877       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0617 12:13:56.538678       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="377.82µs"
	E0617 12:14:00.178580       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:14:00.638153       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0617 12:14:11.536335       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="148.456µs"
	E0617 12:14:30.184347       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:14:30.645759       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:15:00.190346       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:15:00.654938       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:15:30.195919       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:15:30.670369       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:16:00.201827       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:16:00.685307       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:16:30.206277       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:16:30.692917       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:17:00.212058       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:17:00.700770       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [4d420ac4be70e18bcc188db3f69ef03797656c819429b0bc4fa68a2cf25cba17] <==
	I0617 12:08:02.089979       1 server_linux.go:69] "Using iptables proxy"
	I0617 12:08:02.124894       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.173"]
	I0617 12:08:02.239659       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0617 12:08:02.239709       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0617 12:08:02.239725       1 server_linux.go:165] "Using iptables Proxier"
	I0617 12:08:02.246452       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0617 12:08:02.246951       1 server.go:872] "Version info" version="v1.30.1"
	I0617 12:08:02.246968       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0617 12:08:02.248449       1 config.go:192] "Starting service config controller"
	I0617 12:08:02.248470       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0617 12:08:02.248589       1 config.go:101] "Starting endpoint slice config controller"
	I0617 12:08:02.248601       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0617 12:08:02.249321       1 config.go:319] "Starting node config controller"
	I0617 12:08:02.249327       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0617 12:08:02.349335       1 shared_informer.go:320] Caches are synced for service config
	I0617 12:08:02.349433       1 shared_informer.go:320] Caches are synced for node config
	I0617 12:08:02.349458       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [bf31b741f07971feda2bdee30e1b474c535befaa7310f7e6f31405b62526b2af] <==
	W0617 12:07:44.532314       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0617 12:07:44.532364       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0617 12:07:44.565677       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0617 12:07:44.565721       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0617 12:07:44.620468       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0617 12:07:44.620495       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0617 12:07:44.658021       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0617 12:07:44.658132       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0617 12:07:44.689495       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0617 12:07:44.689649       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0617 12:07:44.743239       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0617 12:07:44.743311       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0617 12:07:44.745941       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0617 12:07:44.745962       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0617 12:07:44.799121       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0617 12:07:44.799254       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0617 12:07:44.802706       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0617 12:07:44.802898       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0617 12:07:44.844998       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0617 12:07:44.845169       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0617 12:07:44.952326       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0617 12:07:44.953114       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0617 12:07:45.046099       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0617 12:07:45.046250       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0617 12:07:47.114628       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 17 12:14:46 no-preload-152830 kubelet[4382]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 17 12:14:46 no-preload-152830 kubelet[4382]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 17 12:14:46 no-preload-152830 kubelet[4382]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 12:14:46 no-preload-152830 kubelet[4382]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 17 12:14:54 no-preload-152830 kubelet[4382]: E0617 12:14:54.520921    4382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zllzk" podUID="e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1"
	Jun 17 12:15:05 no-preload-152830 kubelet[4382]: E0617 12:15:05.519697    4382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zllzk" podUID="e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1"
	Jun 17 12:15:16 no-preload-152830 kubelet[4382]: E0617 12:15:16.520598    4382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zllzk" podUID="e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1"
	Jun 17 12:15:28 no-preload-152830 kubelet[4382]: E0617 12:15:28.523110    4382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zllzk" podUID="e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1"
	Jun 17 12:15:39 no-preload-152830 kubelet[4382]: E0617 12:15:39.518997    4382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zllzk" podUID="e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1"
	Jun 17 12:15:46 no-preload-152830 kubelet[4382]: E0617 12:15:46.544610    4382 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 17 12:15:46 no-preload-152830 kubelet[4382]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 17 12:15:46 no-preload-152830 kubelet[4382]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 17 12:15:46 no-preload-152830 kubelet[4382]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 12:15:46 no-preload-152830 kubelet[4382]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 17 12:15:54 no-preload-152830 kubelet[4382]: E0617 12:15:54.520780    4382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zllzk" podUID="e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1"
	Jun 17 12:16:06 no-preload-152830 kubelet[4382]: E0617 12:16:06.519005    4382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zllzk" podUID="e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1"
	Jun 17 12:16:21 no-preload-152830 kubelet[4382]: E0617 12:16:21.518659    4382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zllzk" podUID="e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1"
	Jun 17 12:16:34 no-preload-152830 kubelet[4382]: E0617 12:16:34.520431    4382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zllzk" podUID="e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1"
	Jun 17 12:16:45 no-preload-152830 kubelet[4382]: E0617 12:16:45.519236    4382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zllzk" podUID="e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1"
	Jun 17 12:16:46 no-preload-152830 kubelet[4382]: E0617 12:16:46.542841    4382 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 17 12:16:46 no-preload-152830 kubelet[4382]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 17 12:16:46 no-preload-152830 kubelet[4382]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 17 12:16:46 no-preload-152830 kubelet[4382]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 12:16:46 no-preload-152830 kubelet[4382]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 17 12:16:57 no-preload-152830 kubelet[4382]: E0617 12:16:57.519354    4382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zllzk" podUID="e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1"
	
	
	==> storage-provisioner [9bded990e0ce1c6be7f1b1465276f4a8754154adf288c943ec48740d65f95d32] <==
	I0617 12:08:02.054480       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0617 12:08:02.071455       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0617 12:08:02.071556       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0617 12:08:02.091682       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0617 12:08:02.091861       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-152830_6e663f14-4907-4466-bca1-b193c05941a1!
	I0617 12:08:02.092839       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8e075b74-d8e9-4bee-bf4a-cef017cda12a", APIVersion:"v1", ResourceVersion:"430", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-152830_6e663f14-4907-4466-bca1-b193c05941a1 became leader
	I0617 12:08:02.193941       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-152830_6e663f14-4907-4466-bca1-b193c05941a1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-152830 -n no-preload-152830
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-152830 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-zllzk
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-152830 describe pod metrics-server-569cc877fc-zllzk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-152830 describe pod metrics-server-569cc877fc-zllzk: exit status 1 (63.759089ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-zllzk" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-152830 describe pod metrics-server-569cc877fc-zllzk: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.15s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
E0617 12:10:20.449324  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
[the preceding WARNING is repeated verbatim for 86 further poll attempts; every pod list for "kubernetes-dashboard" fails with the same connection-refused error against 192.168.61.164:8443]
E0617 12:11:51.169830  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
[the same WARNING then repeats verbatim for another 93 poll attempts; the apiserver at 192.168.61.164:8443 remains unreachable]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
E0617 12:13:57.398139  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
E0617 12:16:51.169952  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
E0617 12:18:57.397897  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-003661 -n old-k8s-version-003661
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-003661 -n old-k8s-version-003661: exit status 2 (227.628178ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-003661" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-003661 -n old-k8s-version-003661
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-003661 -n old-k8s-version-003661: exit status 2 (214.65282ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-003661 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-003661 logs -n 25: (1.537637051s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-514753                              | cert-expiration-514753       | jenkins | v1.33.1 | 17 Jun 24 11:52 UTC | 17 Jun 24 11:52 UTC |
	| start   | -p embed-certs-136195                                  | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:52 UTC | 17 Jun 24 11:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-152830             | no-preload-152830            | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC | 17 Jun 24 11:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-152830                                   | no-preload-152830            | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-136195            | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC | 17 Jun 24 11:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-136195                                  | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC | 17 Jun 24 11:55 UTC |
	| start   | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:55 UTC |
	| delete  | -p                                                     | disable-driver-mounts-960277 | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:55 UTC |
	|         | disable-driver-mounts-960277                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:56 UTC |
	|         | default-k8s-diff-port-991309                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-152830                  | no-preload-152830            | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-152830                                   | no-preload-152830            | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC | 17 Jun 24 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-136195                 | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-003661        | old-k8s-version-003661       | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-136195                                  | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC | 17 Jun 24 12:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-991309  | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:57 UTC | 17 Jun 24 11:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:57 UTC |                     |
	|         | default-k8s-diff-port-991309                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-003661                              | old-k8s-version-003661       | jenkins | v1.33.1 | 17 Jun 24 11:58 UTC | 17 Jun 24 11:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-003661             | old-k8s-version-003661       | jenkins | v1.33.1 | 17 Jun 24 11:58 UTC | 17 Jun 24 11:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-003661                              | old-k8s-version-003661       | jenkins | v1.33.1 | 17 Jun 24 11:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-991309       | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:59 UTC | 17 Jun 24 12:06 UTC |
	|         | default-k8s-diff-port-991309                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/17 11:59:37
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0617 11:59:37.428028  166103 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:59:37.428266  166103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:59:37.428274  166103 out.go:304] Setting ErrFile to fd 2...
	I0617 11:59:37.428279  166103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:59:37.428472  166103 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:59:37.429026  166103 out.go:298] Setting JSON to false
	I0617 11:59:37.429968  166103 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6124,"bootTime":1718619453,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0617 11:59:37.430026  166103 start.go:139] virtualization: kvm guest
	I0617 11:59:37.432171  166103 out.go:177] * [default-k8s-diff-port-991309] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0617 11:59:37.433521  166103 out.go:177]   - MINIKUBE_LOCATION=19084
	I0617 11:59:37.433548  166103 notify.go:220] Checking for updates...
	I0617 11:59:37.434850  166103 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 11:59:37.436099  166103 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 11:59:37.437362  166103 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 11:59:37.438535  166103 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0617 11:59:37.439644  166103 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 11:59:37.441113  166103 config.go:182] Loaded profile config "default-k8s-diff-port-991309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:59:37.441563  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:59:37.441645  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:59:37.456875  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45565
	I0617 11:59:37.457306  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:59:37.457839  166103 main.go:141] libmachine: Using API Version  1
	I0617 11:59:37.457861  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:59:37.458188  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:59:37.458381  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 11:59:37.458626  166103 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 11:59:37.458927  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:59:37.458971  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:59:37.474024  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45165
	I0617 11:59:37.474411  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:59:37.474873  166103 main.go:141] libmachine: Using API Version  1
	I0617 11:59:37.474899  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:59:37.475199  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:59:37.475383  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 11:59:37.507955  166103 out.go:177] * Using the kvm2 driver based on existing profile
	I0617 11:59:37.509134  166103 start.go:297] selected driver: kvm2
	I0617 11:59:37.509148  166103 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-991309 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-991309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:59:37.509249  166103 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 11:59:37.509927  166103 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:59:37.510004  166103 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19084-112967/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0617 11:59:37.525340  166103 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0617 11:59:37.525701  166103 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 11:59:37.525761  166103 cni.go:84] Creating CNI manager for ""
	I0617 11:59:37.525779  166103 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 11:59:37.525812  166103 start.go:340] cluster config:
	{Name:default-k8s-diff-port-991309 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-991309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:59:37.525910  166103 iso.go:125] acquiring lock: {Name:mk4a199ad46ed9ee04de7b54caf7cc64218fe80c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:59:37.527756  166103 out.go:177] * Starting "default-k8s-diff-port-991309" primary control-plane node in "default-k8s-diff-port-991309" cluster
	I0617 11:59:36.391800  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 11:59:37.529104  166103 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 11:59:37.529159  166103 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0617 11:59:37.529171  166103 cache.go:56] Caching tarball of preloaded images
	I0617 11:59:37.529246  166103 preload.go:173] Found /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0617 11:59:37.529256  166103 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0617 11:59:37.529368  166103 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/config.json ...
	I0617 11:59:37.529565  166103 start.go:360] acquireMachinesLock for default-k8s-diff-port-991309: {Name:mk519b8956d160a9d2b042f25b899a5ee0efa72e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 11:59:42.471684  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 11:59:45.543735  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 11:59:51.623725  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 11:59:54.695811  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:00.775775  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:03.847736  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:09.927768  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:12.999728  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:19.079809  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:22.151737  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:28.231763  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:31.303775  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:37.383783  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:40.455809  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:46.535757  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:49.607769  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:55.687772  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:58.759722  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:01:04.839736  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:01:07.911780  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:01:10.916735  165060 start.go:364] duration metric: took 4m27.471308215s to acquireMachinesLock for "embed-certs-136195"
	I0617 12:01:10.916814  165060 start.go:96] Skipping create...Using existing machine configuration
	I0617 12:01:10.916827  165060 fix.go:54] fixHost starting: 
	I0617 12:01:10.917166  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:10.917203  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:10.932217  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43235
	I0617 12:01:10.932742  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:10.933241  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:10.933261  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:10.933561  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:10.933766  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:10.933939  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetState
	I0617 12:01:10.935452  165060 fix.go:112] recreateIfNeeded on embed-certs-136195: state=Stopped err=<nil>
	I0617 12:01:10.935660  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	W0617 12:01:10.935831  165060 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 12:01:10.937510  165060 out.go:177] * Restarting existing kvm2 VM for "embed-certs-136195" ...
	I0617 12:01:10.938708  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Start
	I0617 12:01:10.938873  165060 main.go:141] libmachine: (embed-certs-136195) Ensuring networks are active...
	I0617 12:01:10.939602  165060 main.go:141] libmachine: (embed-certs-136195) Ensuring network default is active
	I0617 12:01:10.939896  165060 main.go:141] libmachine: (embed-certs-136195) Ensuring network mk-embed-certs-136195 is active
	I0617 12:01:10.940260  165060 main.go:141] libmachine: (embed-certs-136195) Getting domain xml...
	I0617 12:01:10.940881  165060 main.go:141] libmachine: (embed-certs-136195) Creating domain...
	I0617 12:01:12.136267  165060 main.go:141] libmachine: (embed-certs-136195) Waiting to get IP...
	I0617 12:01:12.137303  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:12.137692  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:12.137777  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:12.137684  166451 retry.go:31] will retry after 261.567272ms: waiting for machine to come up
	I0617 12:01:12.401390  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:12.401845  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:12.401873  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:12.401816  166451 retry.go:31] will retry after 332.256849ms: waiting for machine to come up
	I0617 12:01:12.735421  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:12.735842  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:12.735872  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:12.735783  166451 retry.go:31] will retry after 457.313241ms: waiting for machine to come up
	I0617 12:01:13.194621  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:13.195073  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:13.195091  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:13.195036  166451 retry.go:31] will retry after 539.191177ms: waiting for machine to come up
	I0617 12:01:10.914315  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 12:01:10.914353  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetMachineName
	I0617 12:01:10.914690  164809 buildroot.go:166] provisioning hostname "no-preload-152830"
	I0617 12:01:10.914716  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetMachineName
	I0617 12:01:10.914905  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:01:10.916557  164809 machine.go:97] duration metric: took 4m37.418351206s to provisionDockerMachine
	I0617 12:01:10.916625  164809 fix.go:56] duration metric: took 4m37.438694299s for fixHost
	I0617 12:01:10.916634  164809 start.go:83] releasing machines lock for "no-preload-152830", held for 4m37.438726092s
	W0617 12:01:10.916653  164809 start.go:713] error starting host: provision: host is not running
	W0617 12:01:10.916750  164809 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0617 12:01:10.916763  164809 start.go:728] Will try again in 5 seconds ...
	I0617 12:01:13.735708  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:13.736155  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:13.736184  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:13.736096  166451 retry.go:31] will retry after 754.965394ms: waiting for machine to come up
	I0617 12:01:14.493211  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:14.493598  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:14.493628  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:14.493544  166451 retry.go:31] will retry after 786.125188ms: waiting for machine to come up
	I0617 12:01:15.281505  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:15.281975  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:15.282008  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:15.281939  166451 retry.go:31] will retry after 1.091514617s: waiting for machine to come up
	I0617 12:01:16.375391  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:16.375904  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:16.375935  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:16.375820  166451 retry.go:31] will retry after 1.34601641s: waiting for machine to come up
	I0617 12:01:17.724108  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:17.724453  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:17.724477  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:17.724418  166451 retry.go:31] will retry after 1.337616605s: waiting for machine to come up
	I0617 12:01:15.918256  164809 start.go:360] acquireMachinesLock for no-preload-152830: {Name:mk519b8956d160a9d2b042f25b899a5ee0efa72e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 12:01:19.063677  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:19.064210  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:19.064243  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:19.064144  166451 retry.go:31] will retry after 1.914267639s: waiting for machine to come up
	I0617 12:01:20.979644  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:20.980124  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:20.980150  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:20.980072  166451 retry.go:31] will retry after 2.343856865s: waiting for machine to come up
	I0617 12:01:23.326506  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:23.326878  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:23.326922  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:23.326861  166451 retry.go:31] will retry after 2.450231017s: waiting for machine to come up
	I0617 12:01:25.780501  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:25.780886  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:25.780913  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:25.780825  166451 retry.go:31] will retry after 3.591107926s: waiting for machine to come up
	I0617 12:01:30.728529  165698 start.go:364] duration metric: took 3m12.647041864s to acquireMachinesLock for "old-k8s-version-003661"
	I0617 12:01:30.728602  165698 start.go:96] Skipping create...Using existing machine configuration
	I0617 12:01:30.728613  165698 fix.go:54] fixHost starting: 
	I0617 12:01:30.729036  165698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:30.729090  165698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:30.746528  165698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35355
	I0617 12:01:30.746982  165698 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:30.747493  165698 main.go:141] libmachine: Using API Version  1
	I0617 12:01:30.747516  165698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:30.747847  165698 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:30.748060  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:30.748186  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetState
	I0617 12:01:30.750035  165698 fix.go:112] recreateIfNeeded on old-k8s-version-003661: state=Stopped err=<nil>
	I0617 12:01:30.750072  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	W0617 12:01:30.750206  165698 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 12:01:30.752196  165698 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-003661" ...
	I0617 12:01:29.375875  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.376372  165060 main.go:141] libmachine: (embed-certs-136195) Found IP for machine: 192.168.72.199
	I0617 12:01:29.376407  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has current primary IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.376430  165060 main.go:141] libmachine: (embed-certs-136195) Reserving static IP address...
	I0617 12:01:29.376754  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "embed-certs-136195", mac: "52:54:00:f2:27:84", ip: "192.168.72.199"} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.376788  165060 main.go:141] libmachine: (embed-certs-136195) Reserved static IP address: 192.168.72.199
	I0617 12:01:29.376800  165060 main.go:141] libmachine: (embed-certs-136195) DBG | skip adding static IP to network mk-embed-certs-136195 - found existing host DHCP lease matching {name: "embed-certs-136195", mac: "52:54:00:f2:27:84", ip: "192.168.72.199"}
	I0617 12:01:29.376811  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Getting to WaitForSSH function...
	I0617 12:01:29.376820  165060 main.go:141] libmachine: (embed-certs-136195) Waiting for SSH to be available...
	I0617 12:01:29.378811  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.379121  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.379151  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.379289  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Using SSH client type: external
	I0617 12:01:29.379321  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa (-rw-------)
	I0617 12:01:29.379354  165060 main.go:141] libmachine: (embed-certs-136195) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.199 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 12:01:29.379368  165060 main.go:141] libmachine: (embed-certs-136195) DBG | About to run SSH command:
	I0617 12:01:29.379381  165060 main.go:141] libmachine: (embed-certs-136195) DBG | exit 0
	I0617 12:01:29.503819  165060 main.go:141] libmachine: (embed-certs-136195) DBG | SSH cmd err, output: <nil>: 
	I0617 12:01:29.504207  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetConfigRaw
	I0617 12:01:29.504827  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetIP
	I0617 12:01:29.507277  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.507601  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.507635  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.507878  165060 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/config.json ...
	I0617 12:01:29.508102  165060 machine.go:94] provisionDockerMachine start ...
	I0617 12:01:29.508125  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:29.508333  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:29.510390  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.510636  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.510656  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.510761  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:29.510924  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.511082  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.511242  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:29.511404  165060 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:29.511665  165060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0617 12:01:29.511680  165060 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 12:01:29.611728  165060 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0617 12:01:29.611759  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetMachineName
	I0617 12:01:29.611996  165060 buildroot.go:166] provisioning hostname "embed-certs-136195"
	I0617 12:01:29.612025  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetMachineName
	I0617 12:01:29.612194  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:29.614719  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.615085  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.615110  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.615251  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:29.615425  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.615565  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.615685  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:29.615881  165060 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:29.616066  165060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0617 12:01:29.616084  165060 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-136195 && echo "embed-certs-136195" | sudo tee /etc/hostname
	I0617 12:01:29.729321  165060 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-136195
	
	I0617 12:01:29.729347  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:29.731968  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.732314  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.732352  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.732582  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:29.732820  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.733001  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.733157  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:29.733312  165060 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:29.733471  165060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0617 12:01:29.733487  165060 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-136195' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-136195/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-136195' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 12:01:29.840083  165060 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 12:01:29.840110  165060 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 12:01:29.840145  165060 buildroot.go:174] setting up certificates
	I0617 12:01:29.840180  165060 provision.go:84] configureAuth start
	I0617 12:01:29.840199  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetMachineName
	I0617 12:01:29.840488  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetIP
	I0617 12:01:29.843096  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.843446  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.843487  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.843687  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:29.845627  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.845914  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.845940  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.846021  165060 provision.go:143] copyHostCerts
	I0617 12:01:29.846096  165060 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 12:01:29.846106  165060 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 12:01:29.846171  165060 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 12:01:29.846267  165060 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 12:01:29.846275  165060 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 12:01:29.846298  165060 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 12:01:29.846359  165060 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 12:01:29.846366  165060 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 12:01:29.846387  165060 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 12:01:29.846456  165060 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.embed-certs-136195 san=[127.0.0.1 192.168.72.199 embed-certs-136195 localhost minikube]
	I0617 12:01:30.076596  165060 provision.go:177] copyRemoteCerts
	I0617 12:01:30.076657  165060 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 12:01:30.076686  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.079269  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.079565  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.079588  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.079785  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.080016  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.080189  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.080316  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:30.161615  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 12:01:30.188790  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0617 12:01:30.215171  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0617 12:01:30.241310  165060 provision.go:87] duration metric: took 401.115469ms to configureAuth
	I0617 12:01:30.241332  165060 buildroot.go:189] setting minikube options for container-runtime
	I0617 12:01:30.241529  165060 config.go:182] Loaded profile config "embed-certs-136195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:01:30.241602  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.244123  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.244427  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.244459  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.244584  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.244793  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.244999  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.245174  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.245340  165060 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:30.245497  165060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0617 12:01:30.245512  165060 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 12:01:30.498156  165060 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 12:01:30.498189  165060 machine.go:97] duration metric: took 990.071076ms to provisionDockerMachine
	I0617 12:01:30.498201  165060 start.go:293] postStartSetup for "embed-certs-136195" (driver="kvm2")
	I0617 12:01:30.498214  165060 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 12:01:30.498238  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:30.498580  165060 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 12:01:30.498605  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.501527  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.501912  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.501941  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.502054  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.502257  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.502423  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.502578  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:30.583151  165060 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 12:01:30.587698  165060 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 12:01:30.587722  165060 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 12:01:30.587819  165060 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 12:01:30.587940  165060 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 12:01:30.588078  165060 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 12:01:30.598234  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:01:30.622580  165060 start.go:296] duration metric: took 124.363651ms for postStartSetup
	I0617 12:01:30.622621  165060 fix.go:56] duration metric: took 19.705796191s for fixHost
	I0617 12:01:30.622645  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.625226  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.625637  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.625684  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.625821  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.626040  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.626229  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.626418  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.626613  165060 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:30.626839  165060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0617 12:01:30.626862  165060 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0617 12:01:30.728365  165060 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718625690.704643527
	
	I0617 12:01:30.728389  165060 fix.go:216] guest clock: 1718625690.704643527
	I0617 12:01:30.728396  165060 fix.go:229] Guest: 2024-06-17 12:01:30.704643527 +0000 UTC Remote: 2024-06-17 12:01:30.622625631 +0000 UTC m=+287.310804086 (delta=82.017896ms)
	I0617 12:01:30.728416  165060 fix.go:200] guest clock delta is within tolerance: 82.017896ms
	I0617 12:01:30.728421  165060 start.go:83] releasing machines lock for "embed-certs-136195", held for 19.811634749s
	I0617 12:01:30.728445  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:30.728763  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetIP
	I0617 12:01:30.731414  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.731784  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.731816  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.731937  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:30.732504  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:30.732704  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:30.732761  165060 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 12:01:30.732826  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.732964  165060 ssh_runner.go:195] Run: cat /version.json
	I0617 12:01:30.732991  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.735854  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.736049  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.736278  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.736310  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.736334  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.736397  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.736579  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.736653  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.736777  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.736959  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.736972  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.737131  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:30.737188  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.737356  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:30.844295  165060 ssh_runner.go:195] Run: systemctl --version
	I0617 12:01:30.851958  165060 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 12:01:31.000226  165060 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 12:01:31.008322  165060 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 12:01:31.008397  165060 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 12:01:31.029520  165060 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 12:01:31.029547  165060 start.go:494] detecting cgroup driver to use...
	I0617 12:01:31.029617  165060 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 12:01:31.045505  165060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 12:01:31.059851  165060 docker.go:217] disabling cri-docker service (if available) ...
	I0617 12:01:31.059920  165060 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 12:01:31.075011  165060 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 12:01:31.089705  165060 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 12:01:31.204300  165060 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 12:01:31.342204  165060 docker.go:233] disabling docker service ...
	I0617 12:01:31.342290  165060 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 12:01:31.356945  165060 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 12:01:31.369786  165060 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 12:01:31.505817  165060 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 12:01:31.631347  165060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 12:01:31.646048  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 12:01:31.664854  165060 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0617 12:01:31.664923  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.677595  165060 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 12:01:31.677678  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.690164  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.701482  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.712488  165060 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 12:01:31.723994  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.736805  165060 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.755001  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.767226  165060 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 12:01:31.777894  165060 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 12:01:31.777954  165060 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 12:01:31.792644  165060 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 12:01:31.803267  165060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:01:31.920107  165060 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0617 12:01:32.067833  165060 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 12:01:32.067904  165060 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 12:01:32.072818  165060 start.go:562] Will wait 60s for crictl version
	I0617 12:01:32.072881  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:01:32.076782  165060 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 12:01:32.116635  165060 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 12:01:32.116709  165060 ssh_runner.go:195] Run: crio --version
	I0617 12:01:32.148094  165060 ssh_runner.go:195] Run: crio --version
	I0617 12:01:32.176924  165060 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0617 12:01:30.753437  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .Start
	I0617 12:01:30.753608  165698 main.go:141] libmachine: (old-k8s-version-003661) Ensuring networks are active...
	I0617 12:01:30.754272  165698 main.go:141] libmachine: (old-k8s-version-003661) Ensuring network default is active
	I0617 12:01:30.754600  165698 main.go:141] libmachine: (old-k8s-version-003661) Ensuring network mk-old-k8s-version-003661 is active
	I0617 12:01:30.754967  165698 main.go:141] libmachine: (old-k8s-version-003661) Getting domain xml...
	I0617 12:01:30.755739  165698 main.go:141] libmachine: (old-k8s-version-003661) Creating domain...
	I0617 12:01:32.029080  165698 main.go:141] libmachine: (old-k8s-version-003661) Waiting to get IP...
	I0617 12:01:32.029902  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:32.030401  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:32.030477  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:32.030384  166594 retry.go:31] will retry after 191.846663ms: waiting for machine to come up
	I0617 12:01:32.223912  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:32.224300  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:32.224328  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:32.224276  166594 retry.go:31] will retry after 341.806498ms: waiting for machine to come up
	I0617 12:01:32.568066  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:32.568648  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:32.568682  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:32.568575  166594 retry.go:31] will retry after 359.779948ms: waiting for machine to come up
	I0617 12:01:32.930210  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:32.930652  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:32.930675  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:32.930604  166594 retry.go:31] will retry after 548.549499ms: waiting for machine to come up
	I0617 12:01:32.178076  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetIP
	I0617 12:01:32.181127  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:32.181524  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:32.181553  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:32.181778  165060 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0617 12:01:32.186998  165060 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:01:32.203033  165060 kubeadm.go:877] updating cluster {Name:embed-certs-136195 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:embed-certs-136195 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.199 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 12:01:32.203142  165060 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 12:01:32.203183  165060 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:01:32.245712  165060 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0617 12:01:32.245796  165060 ssh_runner.go:195] Run: which lz4
	I0617 12:01:32.250113  165060 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0617 12:01:32.254486  165060 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0617 12:01:32.254511  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0617 12:01:33.480493  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:33.480965  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:33.481004  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:33.480931  166594 retry.go:31] will retry after 636.044066ms: waiting for machine to come up
	I0617 12:01:34.118880  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:34.119361  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:34.119394  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:34.119299  166594 retry.go:31] will retry after 637.085777ms: waiting for machine to come up
	I0617 12:01:34.757614  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:34.758097  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:34.758126  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:34.758051  166594 retry.go:31] will retry after 921.652093ms: waiting for machine to come up
	I0617 12:01:35.681846  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:35.682324  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:35.682351  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:35.682269  166594 retry.go:31] will retry after 1.1106801s: waiting for machine to come up
	I0617 12:01:36.794411  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:36.794845  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:36.794869  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:36.794793  166594 retry.go:31] will retry after 1.323395845s: waiting for machine to come up
	I0617 12:01:33.776867  165060 crio.go:462] duration metric: took 1.526763522s to copy over tarball
	I0617 12:01:33.776955  165060 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0617 12:01:35.994216  165060 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.217222149s)
	I0617 12:01:35.994246  165060 crio.go:469] duration metric: took 2.217348025s to extract the tarball
	I0617 12:01:35.994255  165060 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0617 12:01:36.034978  165060 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:01:36.087255  165060 crio.go:514] all images are preloaded for cri-o runtime.
	I0617 12:01:36.087281  165060 cache_images.go:84] Images are preloaded, skipping loading
	I0617 12:01:36.087291  165060 kubeadm.go:928] updating node { 192.168.72.199 8443 v1.30.1 crio true true} ...
	I0617 12:01:36.087447  165060 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-136195 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.199
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:embed-certs-136195 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 12:01:36.087551  165060 ssh_runner.go:195] Run: crio config
	I0617 12:01:36.130409  165060 cni.go:84] Creating CNI manager for ""
	I0617 12:01:36.130433  165060 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:01:36.130449  165060 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 12:01:36.130479  165060 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.199 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-136195 NodeName:embed-certs-136195 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.199"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.199 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0617 12:01:36.130633  165060 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.199
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-136195"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.199
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.199"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0617 12:01:36.130724  165060 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0617 12:01:36.141027  165060 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 12:01:36.141110  165060 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 12:01:36.150748  165060 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0617 12:01:36.167282  165060 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 12:01:36.183594  165060 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0617 12:01:36.202494  165060 ssh_runner.go:195] Run: grep 192.168.72.199	control-plane.minikube.internal$ /etc/hosts
	I0617 12:01:36.206515  165060 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.199	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:01:36.218598  165060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:01:36.344280  165060 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:01:36.361127  165060 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195 for IP: 192.168.72.199
	I0617 12:01:36.361152  165060 certs.go:194] generating shared ca certs ...
	I0617 12:01:36.361172  165060 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:01:36.361370  165060 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 12:01:36.361425  165060 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 12:01:36.361438  165060 certs.go:256] generating profile certs ...
	I0617 12:01:36.361557  165060 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/client.key
	I0617 12:01:36.361648  165060 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/apiserver.key.f7068429
	I0617 12:01:36.361696  165060 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/proxy-client.key
	I0617 12:01:36.361863  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 12:01:36.361913  165060 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 12:01:36.361925  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 12:01:36.361951  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 12:01:36.361984  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 12:01:36.362005  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 12:01:36.362041  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:01:36.362770  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 12:01:36.397257  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 12:01:36.422523  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 12:01:36.451342  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 12:01:36.485234  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0617 12:01:36.514351  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 12:01:36.544125  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 12:01:36.567574  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0617 12:01:36.590417  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 12:01:36.613174  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 12:01:36.636187  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 12:01:36.659365  165060 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 12:01:36.675981  165060 ssh_runner.go:195] Run: openssl version
	I0617 12:01:36.681694  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 12:01:36.692324  165060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 12:01:36.696871  165060 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 12:01:36.696938  165060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 12:01:36.702794  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
	I0617 12:01:36.713372  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 12:01:36.724054  165060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 12:01:36.728505  165060 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 12:01:36.728566  165060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 12:01:36.734082  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 12:01:36.744542  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 12:01:36.755445  165060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:36.759880  165060 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:36.759922  165060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:36.765367  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 12:01:36.776234  165060 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 12:01:36.780822  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 12:01:36.786895  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 12:01:36.793358  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 12:01:36.800187  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 12:01:36.806591  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 12:01:36.812681  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
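
The `openssl x509 -noout -checkend 86400` runs above verify that each control-plane certificate is still valid for at least 24 hours before the restart proceeds. A minimal standalone sketch of the same check in Go using crypto/x509; the certificate path is one of the files checked in the log, everything else is illustrative and not minikube's actual code path:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        // Illustrative: the log checks several certs under /var/lib/minikube/certs.
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // Equivalent of `openssl x509 -checkend 86400`: fail if the cert expires within 24h.
        if time.Until(cert.NotAfter) < 24*time.Hour {
            fmt.Println("certificate will expire within 24h")
            os.Exit(1)
        }
        fmt.Println("certificate is valid for at least another 24h")
    }
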
	I0617 12:01:36.818814  165060 kubeadm.go:391] StartCluster: {Name:embed-certs-136195 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-136195 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.199 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:01:36.818903  165060 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 12:01:36.818945  165060 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:01:36.861839  165060 cri.go:89] found id: ""
	I0617 12:01:36.861920  165060 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0617 12:01:36.873500  165060 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0617 12:01:36.873529  165060 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0617 12:01:36.873551  165060 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0617 12:01:36.873602  165060 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0617 12:01:36.884767  165060 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0617 12:01:36.886013  165060 kubeconfig.go:125] found "embed-certs-136195" server: "https://192.168.72.199:8443"
	I0617 12:01:36.888144  165060 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0617 12:01:36.899204  165060 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.199
	I0617 12:01:36.899248  165060 kubeadm.go:1154] stopping kube-system containers ...
	I0617 12:01:36.899263  165060 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0617 12:01:36.899325  165060 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:01:36.941699  165060 cri.go:89] found id: ""
	I0617 12:01:36.941782  165060 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0617 12:01:36.960397  165060 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:01:36.971254  165060 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:01:36.971276  165060 kubeadm.go:156] found existing configuration files:
	
	I0617 12:01:36.971333  165060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:01:36.981367  165060 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:01:36.981448  165060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:01:36.991878  165060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:01:37.001741  165060 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:01:37.001816  165060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:01:37.012170  165060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:01:37.021914  165060 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:01:37.021979  165060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:01:37.031866  165060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:01:37.041657  165060 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:01:37.041706  165060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 12:01:37.051440  165060 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:01:37.062543  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:37.175190  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:37.872053  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:38.085732  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:38.146895  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
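
The restart path above re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml instead of a full `kubeadm init`. A rough sketch of that loop, assuming the same paths and binary version as the log; this is an illustration, not minikube's implementation:

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
    )

    func main() {
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, phase := range phases {
            // Mirrors the logged commands: sudo env PATH=... kubeadm init phase <phase> --config <yaml>
            cmd := fmt.Sprintf(
                `sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
                phase)
            c := exec.Command("/bin/bash", "-c", cmd)
            c.Stdout, c.Stderr = os.Stdout, os.Stderr
            if err := c.Run(); err != nil {
                log.Fatalf("phase %q failed: %v", phase, err)
            }
        }
    }
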
	I0617 12:01:38.208633  165060 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:01:38.208898  165060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:01:38.119805  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:38.297858  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:38.297905  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:38.120293  166594 retry.go:31] will retry after 1.769592858s: waiting for machine to come up
	I0617 12:01:39.892495  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:39.893035  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:39.893065  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:39.892948  166594 retry.go:31] will retry after 1.954570801s: waiting for machine to come up
	I0617 12:01:41.849587  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:41.850111  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:41.850140  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:41.850067  166594 retry.go:31] will retry after 3.44879626s: waiting for machine to come up
	I0617 12:01:38.708936  165060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:01:39.209014  165060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:01:39.709765  165060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:01:39.728309  165060 api_server.go:72] duration metric: took 1.519672652s to wait for apiserver process to appear ...
	I0617 12:01:39.728342  165060 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:01:39.728369  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:01:42.756054  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:01:42.756089  165060 api_server.go:103] status: https://192.168.72.199:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:01:42.756105  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:01:42.797646  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:01:42.797689  165060 api_server.go:103] status: https://192.168.72.199:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:01:43.229201  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:01:43.233440  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:01:43.233467  165060 api_server.go:103] status: https://192.168.72.199:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:01:43.728490  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:01:43.741000  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:01:43.741037  165060 api_server.go:103] status: https://192.168.72.199:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:01:44.228634  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:01:44.232839  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 200:
	ok
	I0617 12:01:44.238582  165060 api_server.go:141] control plane version: v1.30.1
	I0617 12:01:44.238606  165060 api_server.go:131] duration metric: took 4.510256755s to wait for apiserver health ...
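
The healthz wait above simply retries GET https://192.168.72.199:8443/healthz until it returns 200, tolerating the 403 (anonymous user) and 500 (post-start hooks still running) responses seen while the apiserver comes up. A minimal sketch of such a poll in Go; the insecure TLS setting is purely illustrative (minikube verifies against the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Illustrative only: skip verification instead of loading the cluster CA bundle.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.72.199:8443/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver is healthy")
                    return
                }
                fmt.Println("healthz returned", resp.StatusCode, "- retrying")
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver health")
    }
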
	I0617 12:01:44.238615  165060 cni.go:84] Creating CNI manager for ""
	I0617 12:01:44.238622  165060 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:01:44.240569  165060 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0617 12:01:44.241963  165060 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0617 12:01:44.253143  165060 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0617 12:01:44.286772  165060 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:01:44.295697  165060 system_pods.go:59] 8 kube-system pods found
	I0617 12:01:44.295736  165060 system_pods.go:61] "coredns-7db6d8ff4d-9bbjg" [1ba0eee5-436e-4c83-b5ce-3c907d66b641] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0617 12:01:44.295744  165060 system_pods.go:61] "etcd-embed-certs-136195" [6dc81a80-c56b-4517-af82-c450cf9578f5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0617 12:01:44.295757  165060 system_pods.go:61] "kube-apiserver-embed-certs-136195" [bd61a715-2471-4dca-aa48-a157531ebd6b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0617 12:01:44.295763  165060 system_pods.go:61] "kube-controller-manager-embed-certs-136195" [194db4b0-75c2-4905-8e4d-813185497b51] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0617 12:01:44.295768  165060 system_pods.go:61] "kube-proxy-25d5n" [52b6d09a-899f-40c4-b1f3-7842ae755165] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0617 12:01:44.295774  165060 system_pods.go:61] "kube-scheduler-embed-certs-136195" [b04d3798-f465-4f82-9ec7-777ea62d5b94] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0617 12:01:44.295782  165060 system_pods.go:61] "metrics-server-569cc877fc-dmhfs" [31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:01:44.295788  165060 system_pods.go:61] "storage-provisioner" [4b04a38a-5006-4496-a24d-0940029193de] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0617 12:01:44.295797  165060 system_pods.go:74] duration metric: took 9.004741ms to wait for pod list to return data ...
	I0617 12:01:44.295811  165060 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:01:44.298934  165060 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:01:44.298968  165060 node_conditions.go:123] node cpu capacity is 2
	I0617 12:01:44.298989  165060 node_conditions.go:105] duration metric: took 3.172465ms to run NodePressure ...
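
The NodePressure verification above reads node capacity (ephemeral storage and CPU) from the API server. A short client-go sketch that lists nodes and prints the same fields; the kubeconfig path is the one written by this run, but the program itself is only a sketch of the idea, not the minikube code path:

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19084-112967/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
        }
    }
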
	I0617 12:01:44.299027  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:44.565943  165060 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0617 12:01:44.570796  165060 kubeadm.go:733] kubelet initialised
	I0617 12:01:44.570825  165060 kubeadm.go:734] duration metric: took 4.851024ms waiting for restarted kubelet to initialise ...
	I0617 12:01:44.570836  165060 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:01:44.575565  165060 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:44.582180  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.582209  165060 pod_ready.go:81] duration metric: took 6.620747ms for pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:44.582221  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.582231  165060 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:44.586828  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "etcd-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.586850  165060 pod_ready.go:81] duration metric: took 4.61059ms for pod "etcd-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:44.586859  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "etcd-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.586866  165060 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:44.591162  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.591189  165060 pod_ready.go:81] duration metric: took 4.316651ms for pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:44.591197  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.591204  165060 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:44.690269  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.690301  165060 pod_ready.go:81] duration metric: took 99.088803ms for pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:44.690310  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.690317  165060 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-25d5n" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:45.089616  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "kube-proxy-25d5n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.089640  165060 pod_ready.go:81] duration metric: took 399.31511ms for pod "kube-proxy-25d5n" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:45.089649  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "kube-proxy-25d5n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.089656  165060 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:45.491031  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.491058  165060 pod_ready.go:81] duration metric: took 401.395966ms for pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:45.491068  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.491074  165060 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:45.890606  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.890633  165060 pod_ready.go:81] duration metric: took 399.550946ms for pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:45.890644  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.890650  165060 pod_ready.go:38] duration metric: took 1.319802914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
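
Each pod_ready wait above is skipped early because the hosting node still reports "Ready":"False"; once the node is Ready, the same loop re-checks the pod's own Ready condition. A small self-contained sketch of that underlying condition check, using a hand-built pod object in place of one fetched from the API server:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // isPodReady mirrors the check behind the pod_ready waits above: a pod counts as
    // "Ready" once its PodReady condition reports ConditionTrue.
    func isPodReady(pod *corev1.Pod) bool {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Illustrative object; in practice the pod comes from a Get/List call.
        pod := &corev1.Pod{
            Status: corev1.PodStatus{
                Conditions: []corev1.PodCondition{
                    {Type: corev1.PodReady, Status: corev1.ConditionFalse},
                },
            },
        }
        fmt.Println("ready:", isPodReady(pod))
    }
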
	I0617 12:01:45.890669  165060 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0617 12:01:45.903900  165060 ops.go:34] apiserver oom_adj: -16
	I0617 12:01:45.903936  165060 kubeadm.go:591] duration metric: took 9.03037731s to restartPrimaryControlPlane
	I0617 12:01:45.903950  165060 kubeadm.go:393] duration metric: took 9.085142288s to StartCluster
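
As a final sanity check on the restarted control plane, the log reads the apiserver's oom_adj (-16) with `cat /proc/$(pgrep kube-apiserver)/oom_adj`. A rough local equivalent in Go, run on the node itself; the pgrep pattern is the one from the log, the rest is a sketch:

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Find the newest matching kube-apiserver process, as `pgrep -xnf` does in the log.
        out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            log.Fatalf("kube-apiserver not running: %v", err)
        }
        pid := strings.TrimSpace(string(out))
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("apiserver oom_adj: %s", adj)
    }
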
	I0617 12:01:45.903974  165060 settings.go:142] acquiring lock: {Name:mkf6da6d5dcdf32cef469c2b75da17d11fa1e39e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:01:45.904063  165060 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 12:01:45.905636  165060 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/kubeconfig: {Name:mkf81bd1831c0194f784e5c176b265c5061bea5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:01:45.905908  165060 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.199 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 12:01:45.907817  165060 out.go:177] * Verifying Kubernetes components...
	I0617 12:01:45.905981  165060 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0617 12:01:45.907852  165060 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-136195"
	I0617 12:01:45.907880  165060 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-136195"
	W0617 12:01:45.907890  165060 addons.go:243] addon storage-provisioner should already be in state true
	I0617 12:01:45.907903  165060 addons.go:69] Setting default-storageclass=true in profile "embed-certs-136195"
	I0617 12:01:45.906085  165060 config.go:182] Loaded profile config "embed-certs-136195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:01:45.909296  165060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:01:45.907923  165060 host.go:66] Checking if "embed-certs-136195" exists ...
	I0617 12:01:45.907924  165060 addons.go:69] Setting metrics-server=true in profile "embed-certs-136195"
	I0617 12:01:45.909472  165060 addons.go:234] Setting addon metrics-server=true in "embed-certs-136195"
	W0617 12:01:45.909481  165060 addons.go:243] addon metrics-server should already be in state true
	I0617 12:01:45.909506  165060 host.go:66] Checking if "embed-certs-136195" exists ...
	I0617 12:01:45.907954  165060 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-136195"
	I0617 12:01:45.909776  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.909822  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.909836  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.909861  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.909841  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.909928  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.925250  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36545
	I0617 12:01:45.925500  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38767
	I0617 12:01:45.925708  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.925929  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.926262  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.926282  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.926420  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.926445  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.926637  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.926728  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.927142  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.927171  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.927206  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.927236  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.929198  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33863
	I0617 12:01:45.929658  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.930137  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.930159  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.930465  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.930661  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetState
	I0617 12:01:45.934085  165060 addons.go:234] Setting addon default-storageclass=true in "embed-certs-136195"
	W0617 12:01:45.934107  165060 addons.go:243] addon default-storageclass should already be in state true
	I0617 12:01:45.934139  165060 host.go:66] Checking if "embed-certs-136195" exists ...
	I0617 12:01:45.934534  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.934579  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.944472  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44051
	I0617 12:01:45.945034  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.945712  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.945741  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.946105  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.946343  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetState
	I0617 12:01:45.946673  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43225
	I0617 12:01:45.947007  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.947706  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.947725  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.948027  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.948228  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetState
	I0617 12:01:45.948359  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:45.950451  165060 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0617 12:01:45.951705  165060 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0617 12:01:45.951719  165060 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0617 12:01:45.951735  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:45.949626  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:45.951588  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43695
	I0617 12:01:45.953222  165060 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:01:45.954471  165060 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:01:45.952290  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.954494  165060 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0617 12:01:45.954514  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:45.955079  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.955098  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.955123  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.955478  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.955718  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:45.955757  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.955924  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:45.956099  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.956106  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:45.956147  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.956374  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:45.956507  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:45.957756  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.958184  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:45.958206  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.958335  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:45.958505  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:45.958680  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:45.958825  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:45.977247  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39751
	I0617 12:01:45.977663  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.978179  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.978203  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.978524  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.978711  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetState
	I0617 12:01:45.980425  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:45.980601  165060 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0617 12:01:45.980616  165060 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0617 12:01:45.980630  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:45.983633  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.984088  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:45.984105  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.984258  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:45.984377  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:45.984505  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:45.984661  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:46.093292  165060 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:01:46.112779  165060 node_ready.go:35] waiting up to 6m0s for node "embed-certs-136195" to be "Ready" ...
	I0617 12:01:46.182239  165060 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0617 12:01:46.248534  165060 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:01:46.286637  165060 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0617 12:01:46.286662  165060 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0617 12:01:46.313951  165060 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0617 12:01:46.313981  165060 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0617 12:01:46.337155  165060 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:01:46.337186  165060 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0617 12:01:46.389025  165060 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:01:46.548086  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:46.548106  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:46.548442  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:46.548461  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:46.548471  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:46.548481  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:46.548485  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:46.548727  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:46.548744  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:46.548764  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:46.554199  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:46.554218  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:46.554454  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:46.554469  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:46.554480  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:47.142290  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:47.142321  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:47.142629  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:47.142658  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:47.142671  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:47.142676  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:47.142692  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:47.142943  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:47.142971  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:47.142985  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:47.216339  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:47.216366  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:47.216658  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:47.216679  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:47.216690  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:47.216700  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:47.216709  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:47.216931  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:47.216967  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:47.216982  165060 addons.go:475] Verifying addon metrics-server=true in "embed-certs-136195"
	I0617 12:01:47.219627  165060 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0617 12:01:45.300413  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:45.300848  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:45.300878  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:45.300794  166594 retry.go:31] will retry after 3.892148485s: waiting for machine to come up
	I0617 12:01:47.220905  165060 addons.go:510] duration metric: took 1.314925386s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0617 12:01:48.116197  165060 node_ready.go:53] node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:50.500448  166103 start.go:364] duration metric: took 2m12.970832528s to acquireMachinesLock for "default-k8s-diff-port-991309"
	I0617 12:01:50.500511  166103 start.go:96] Skipping create...Using existing machine configuration
	I0617 12:01:50.500534  166103 fix.go:54] fixHost starting: 
	I0617 12:01:50.500980  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:50.501018  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:50.517593  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43641
	I0617 12:01:50.518035  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:50.518600  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:01:50.518635  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:50.519051  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:50.519296  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:01:50.519502  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetState
	I0617 12:01:50.521095  166103 fix.go:112] recreateIfNeeded on default-k8s-diff-port-991309: state=Stopped err=<nil>
	I0617 12:01:50.521123  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	W0617 12:01:50.521307  166103 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 12:01:50.522795  166103 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-991309" ...
	I0617 12:01:49.197189  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.197671  165698 main.go:141] libmachine: (old-k8s-version-003661) Found IP for machine: 192.168.61.164
	I0617 12:01:49.197697  165698 main.go:141] libmachine: (old-k8s-version-003661) Reserving static IP address...
	I0617 12:01:49.197714  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has current primary IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.198147  165698 main.go:141] libmachine: (old-k8s-version-003661) Reserved static IP address: 192.168.61.164
	I0617 12:01:49.198175  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "old-k8s-version-003661", mac: "52:54:00:76:66:a0", ip: "192.168.61.164"} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.198185  165698 main.go:141] libmachine: (old-k8s-version-003661) Waiting for SSH to be available...
	I0617 12:01:49.198217  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | skip adding static IP to network mk-old-k8s-version-003661 - found existing host DHCP lease matching {name: "old-k8s-version-003661", mac: "52:54:00:76:66:a0", ip: "192.168.61.164"}
	I0617 12:01:49.198227  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | Getting to WaitForSSH function...
	I0617 12:01:49.200478  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.200907  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.200935  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.201088  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | Using SSH client type: external
	I0617 12:01:49.201116  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa (-rw-------)
	I0617 12:01:49.201154  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 12:01:49.201169  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | About to run SSH command:
	I0617 12:01:49.201183  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | exit 0
	I0617 12:01:49.323763  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | SSH cmd err, output: <nil>: 
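
The WaitForSSH step above shells out to the system ssh client with the machine's private key and retries `exit 0` until it succeeds. A compact sketch of the same probe using golang.org/x/crypto/ssh instead of an external ssh binary; the host, user, and key path are taken from the log, while the retry loop and library choice are illustrative assumptions:

    package main

    import (
        "fmt"
        "log"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
            Timeout:         10 * time.Second,
        }
        for i := 0; i < 30; i++ {
            client, err := ssh.Dial("tcp", "192.168.61.164:22", cfg)
            if err != nil {
                time.Sleep(2 * time.Second)
                continue
            }
            session, err := client.NewSession()
            if err == nil {
                // Same probe as the log: run `exit 0` and treat success as "SSH available".
                if runErr := session.Run("exit 0"); runErr == nil {
                    fmt.Println("SSH is available")
                    session.Close()
                    client.Close()
                    return
                }
                session.Close()
            }
            client.Close()
            time.Sleep(2 * time.Second)
        }
        log.Fatal("timed out waiting for SSH")
    }
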
	I0617 12:01:49.324127  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetConfigRaw
	I0617 12:01:49.324835  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetIP
	I0617 12:01:49.327217  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.327628  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.327660  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.327891  165698 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/config.json ...
	I0617 12:01:49.328097  165698 machine.go:94] provisionDockerMachine start ...
	I0617 12:01:49.328120  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:49.328365  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:49.330587  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.330992  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.331033  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.331160  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:49.331324  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.331490  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.331637  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:49.331824  165698 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:49.332037  165698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 12:01:49.332049  165698 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 12:01:49.432170  165698 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0617 12:01:49.432201  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetMachineName
	I0617 12:01:49.432498  165698 buildroot.go:166] provisioning hostname "old-k8s-version-003661"
	I0617 12:01:49.432524  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetMachineName
	I0617 12:01:49.432730  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:49.435845  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.436276  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.436317  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.436507  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:49.436708  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.436909  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.437074  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:49.437289  165698 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:49.437496  165698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 12:01:49.437510  165698 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-003661 && echo "old-k8s-version-003661" | sudo tee /etc/hostname
	I0617 12:01:49.550158  165698 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-003661
	
	I0617 12:01:49.550187  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:49.553141  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.553509  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.553539  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.553737  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:49.553943  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.554141  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.554298  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:49.554520  165698 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:49.554759  165698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 12:01:49.554787  165698 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-003661' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-003661/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-003661' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 12:01:49.661049  165698 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 12:01:49.661079  165698 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 12:01:49.661106  165698 buildroot.go:174] setting up certificates
	I0617 12:01:49.661115  165698 provision.go:84] configureAuth start
	I0617 12:01:49.661124  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetMachineName
	I0617 12:01:49.661452  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetIP
	I0617 12:01:49.664166  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.664561  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.664591  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.664723  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:49.666845  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.667114  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.667158  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.667287  165698 provision.go:143] copyHostCerts
	I0617 12:01:49.667377  165698 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 12:01:49.667387  165698 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 12:01:49.667440  165698 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 12:01:49.667561  165698 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 12:01:49.667571  165698 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 12:01:49.667594  165698 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 12:01:49.667649  165698 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 12:01:49.667656  165698 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 12:01:49.667674  165698 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 12:01:49.667722  165698 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-003661 san=[127.0.0.1 192.168.61.164 localhost minikube old-k8s-version-003661]
	I0617 12:01:49.853671  165698 provision.go:177] copyRemoteCerts
	I0617 12:01:49.853736  165698 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 12:01:49.853767  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:49.856171  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.856540  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.856577  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.856737  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:49.857071  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.857220  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:49.857360  165698 sshutil.go:53] new ssh client: &{IP:192.168.61.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa Username:docker}
	I0617 12:01:49.938626  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 12:01:49.964401  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0617 12:01:49.988397  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0617 12:01:50.013356  165698 provision.go:87] duration metric: took 352.227211ms to configureAuth
	I0617 12:01:50.013382  165698 buildroot.go:189] setting minikube options for container-runtime
	I0617 12:01:50.013581  165698 config.go:182] Loaded profile config "old-k8s-version-003661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0617 12:01:50.013689  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:50.016168  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.016514  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.016548  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.016657  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:50.016847  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.017025  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.017152  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:50.017300  165698 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:50.017483  165698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 12:01:50.017505  165698 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 12:01:50.280037  165698 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 12:01:50.280065  165698 machine.go:97] duration metric: took 951.954687ms to provisionDockerMachine
	I0617 12:01:50.280076  165698 start.go:293] postStartSetup for "old-k8s-version-003661" (driver="kvm2")
	I0617 12:01:50.280086  165698 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 12:01:50.280102  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:50.280467  165698 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 12:01:50.280506  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:50.283318  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.283657  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.283684  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.283874  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:50.284106  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.284279  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:50.284402  165698 sshutil.go:53] new ssh client: &{IP:192.168.61.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa Username:docker}
	I0617 12:01:50.362452  165698 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 12:01:50.366699  165698 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 12:01:50.366726  165698 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 12:01:50.366788  165698 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 12:01:50.366878  165698 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 12:01:50.367004  165698 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 12:01:50.376706  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:01:50.399521  165698 start.go:296] duration metric: took 119.43167ms for postStartSetup
	I0617 12:01:50.399558  165698 fix.go:56] duration metric: took 19.670946478s for fixHost
	I0617 12:01:50.399578  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:50.402079  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.402465  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.402500  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.402649  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:50.402835  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.402994  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.403138  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:50.403321  165698 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:50.403529  165698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 12:01:50.403541  165698 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0617 12:01:50.500267  165698 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718625710.471154465
	
	I0617 12:01:50.500294  165698 fix.go:216] guest clock: 1718625710.471154465
	I0617 12:01:50.500304  165698 fix.go:229] Guest: 2024-06-17 12:01:50.471154465 +0000 UTC Remote: 2024-06-17 12:01:50.399561534 +0000 UTC m=+212.458541959 (delta=71.592931ms)
	I0617 12:01:50.500350  165698 fix.go:200] guest clock delta is within tolerance: 71.592931ms
	I0617 12:01:50.500355  165698 start.go:83] releasing machines lock for "old-k8s-version-003661", held for 19.771784344s
	I0617 12:01:50.500380  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:50.500648  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetIP
	I0617 12:01:50.503346  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.503749  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.503776  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.503974  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:50.504536  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:50.504676  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:50.504750  165698 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 12:01:50.504801  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:50.504861  165698 ssh_runner.go:195] Run: cat /version.json
	I0617 12:01:50.504890  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:50.507577  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.507736  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.508013  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.508041  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.508176  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.508200  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.508205  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:50.508335  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:50.508419  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.508499  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.508580  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:50.508691  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:50.508717  165698 sshutil.go:53] new ssh client: &{IP:192.168.61.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa Username:docker}
	I0617 12:01:50.508830  165698 sshutil.go:53] new ssh client: &{IP:192.168.61.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa Username:docker}
	I0617 12:01:50.585030  165698 ssh_runner.go:195] Run: systemctl --version
	I0617 12:01:50.612492  165698 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 12:01:50.765842  165698 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 12:01:50.773214  165698 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 12:01:50.773288  165698 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 12:01:50.793397  165698 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 12:01:50.793424  165698 start.go:494] detecting cgroup driver to use...
	I0617 12:01:50.793499  165698 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 12:01:50.811531  165698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 12:01:50.826223  165698 docker.go:217] disabling cri-docker service (if available) ...
	I0617 12:01:50.826289  165698 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 12:01:50.840517  165698 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 12:01:50.854788  165698 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 12:01:50.970328  165698 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 12:01:51.125815  165698 docker.go:233] disabling docker service ...
	I0617 12:01:51.125893  165698 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 12:01:51.146368  165698 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 12:01:51.161459  165698 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 12:01:51.346032  165698 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 12:01:51.503395  165698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 12:01:51.521021  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 12:01:51.543851  165698 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0617 12:01:51.543905  165698 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:51.556230  165698 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 12:01:51.556309  165698 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:51.573061  165698 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:51.588663  165698 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:51.601086  165698 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 12:01:51.617347  165698 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 12:01:51.634502  165698 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 12:01:51.634635  165698 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 12:01:51.652813  165698 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 12:01:51.665145  165698 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:01:51.826713  165698 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0617 12:01:51.981094  165698 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 12:01:51.981186  165698 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 12:01:51.986026  165698 start.go:562] Will wait 60s for crictl version
	I0617 12:01:51.986091  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:51.990253  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 12:01:52.032543  165698 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 12:01:52.032631  165698 ssh_runner.go:195] Run: crio --version
	I0617 12:01:52.063904  165698 ssh_runner.go:195] Run: crio --version
	I0617 12:01:52.097158  165698 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0617 12:01:50.524130  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Start
	I0617 12:01:50.524321  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Ensuring networks are active...
	I0617 12:01:50.524939  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Ensuring network default is active
	I0617 12:01:50.525300  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Ensuring network mk-default-k8s-diff-port-991309 is active
	I0617 12:01:50.527342  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Getting domain xml...
	I0617 12:01:50.528126  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Creating domain...
	I0617 12:01:51.864887  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting to get IP...
	I0617 12:01:51.865835  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:51.866246  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:51.866328  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:51.866228  166802 retry.go:31] will retry after 200.163407ms: waiting for machine to come up
	I0617 12:01:52.067708  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.068164  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.068193  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:52.068119  166802 retry.go:31] will retry after 364.503903ms: waiting for machine to come up
	I0617 12:01:52.098675  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetIP
	I0617 12:01:52.102187  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:52.102572  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:52.102603  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:52.102823  165698 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0617 12:01:52.107573  165698 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:01:52.121312  165698 kubeadm.go:877] updating cluster {Name:old-k8s-version-003661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-003661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.164 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 12:01:52.121448  165698 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0617 12:01:52.121515  165698 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:01:52.181796  165698 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0617 12:01:52.181891  165698 ssh_runner.go:195] Run: which lz4
	I0617 12:01:52.186827  165698 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0617 12:01:52.191806  165698 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0617 12:01:52.191875  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0617 12:01:50.116573  165060 node_ready.go:53] node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:52.122162  165060 node_ready.go:53] node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:53.117556  165060 node_ready.go:49] node "embed-certs-136195" has status "Ready":"True"
	I0617 12:01:53.117589  165060 node_ready.go:38] duration metric: took 7.004769746s for node "embed-certs-136195" to be "Ready" ...
	I0617 12:01:53.117598  165060 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:01:53.125606  165060 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:53.131618  165060 pod_ready.go:92] pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:53.131643  165060 pod_ready.go:81] duration metric: took 6.000929ms for pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:53.131654  165060 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:52.434791  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.435584  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.435740  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:52.435665  166802 retry.go:31] will retry after 486.514518ms: waiting for machine to come up
	I0617 12:01:52.924190  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.924819  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.924845  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:52.924681  166802 retry.go:31] will retry after 520.971301ms: waiting for machine to come up
	I0617 12:01:53.447437  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:53.447965  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:53.447995  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:53.447919  166802 retry.go:31] will retry after 622.761044ms: waiting for machine to come up
	I0617 12:01:54.072700  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:54.073170  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:54.073202  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:54.073112  166802 retry.go:31] will retry after 671.940079ms: waiting for machine to come up
	I0617 12:01:54.746830  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:54.747342  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:54.747372  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:54.747310  166802 retry.go:31] will retry after 734.856022ms: waiting for machine to come up
	I0617 12:01:55.484571  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:55.485127  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:55.485157  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:55.485066  166802 retry.go:31] will retry after 1.198669701s: waiting for machine to come up
	I0617 12:01:56.685201  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:56.685468  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:56.685493  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:56.685440  166802 retry.go:31] will retry after 1.562509853s: waiting for machine to come up
	I0617 12:01:54.026903  165698 crio.go:462] duration metric: took 1.840117639s to copy over tarball
	I0617 12:01:54.027003  165698 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0617 12:01:57.049870  165698 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.022814584s)
	I0617 12:01:57.049904  165698 crio.go:469] duration metric: took 3.022967677s to extract the tarball
	I0617 12:01:57.049914  165698 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0617 12:01:57.094589  165698 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:01:57.133299  165698 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0617 12:01:57.133331  165698 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0617 12:01:57.133431  165698 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:01:57.133451  165698 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0617 12:01:57.133456  165698 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0617 12:01:57.133477  165698 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0617 12:01:57.133431  165698 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0617 12:01:57.133530  165698 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0617 12:01:57.133431  165698 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 12:01:57.133626  165698 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0617 12:01:57.135979  165698 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 12:01:57.135990  165698 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0617 12:01:57.135994  165698 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0617 12:01:57.135979  165698 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0617 12:01:57.135985  165698 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:01:57.135979  165698 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0617 12:01:57.136041  165698 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0617 12:01:57.136041  165698 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0617 12:01:57.289271  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0617 12:01:57.299061  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 12:01:57.322581  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0617 12:01:57.336462  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0617 12:01:57.337619  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0617 12:01:57.350335  165698 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0617 12:01:57.350395  165698 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0617 12:01:57.350448  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.357972  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0617 12:01:57.391517  165698 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0617 12:01:57.391563  165698 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 12:01:57.391640  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.419438  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0617 12:01:57.442111  165698 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0617 12:01:57.442154  165698 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0617 12:01:57.442200  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.450145  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:01:57.485873  165698 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0617 12:01:57.485922  165698 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0617 12:01:57.485942  165698 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0617 12:01:57.485957  165698 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0617 12:01:57.485996  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.486003  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.486053  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0617 12:01:57.490584  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 12:01:57.490669  165698 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0617 12:01:57.490714  165698 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0617 12:01:57.490755  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.551564  165698 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0617 12:01:57.551597  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0617 12:01:57.551619  165698 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0617 12:01:57.551662  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.660683  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0617 12:01:57.660732  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0617 12:01:57.660799  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0617 12:01:57.660856  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0617 12:01:57.660734  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0617 12:01:57.660903  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0617 12:01:57.660930  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0617 12:01:57.753965  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0617 12:01:57.753981  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0617 12:01:57.754069  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0617 12:01:57.754069  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0617 12:01:57.754146  165698 cache_images.go:92] duration metric: took 620.797178ms to LoadCachedImages
	W0617 12:01:57.754271  165698 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0617 12:01:57.754292  165698 kubeadm.go:928] updating node { 192.168.61.164 8443 v1.20.0 crio true true} ...
	I0617 12:01:57.754415  165698 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-003661 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-003661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 12:01:57.754489  165698 ssh_runner.go:195] Run: crio config
	I0617 12:01:57.807120  165698 cni.go:84] Creating CNI manager for ""
	I0617 12:01:57.807144  165698 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:01:57.807158  165698 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 12:01:57.807182  165698 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.164 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-003661 NodeName:old-k8s-version-003661 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0617 12:01:57.807370  165698 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.164
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-003661"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.164
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.164"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
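	Note on the kubelet config above: the evictionHard thresholds of "0%" (together with imageGCHighThresholdPercent: 100) disable disk-pressure eviction, as the inline comment says. In the raw report these thresholds render as "0%!"(MISSING)" — the same artifact appears on the printf, date and find commands later in the log — because the text is pushed through a Printf-style call and the bare '%' is read as a verb with no operand. A minimal Go reproduction of that artifact (illustrative only, not minikube code):
	
		package main
	
		import "fmt"
	
		func main() {
			// A bare '%' inside the format string starts a verb; with no operand
			// supplied, fmt emits the "%!x(MISSING)" marker seen in the raw log.
			fmt.Printf("nodefs.available: \"0%\"\n")
			// Output: nodefs.available: "0%!"(MISSING)
	
			// Passing the text as an operand leaves it untouched.
			fmt.Printf("%s\n", `nodefs.available: "0%"`)
			// Output: nodefs.available: "0%"
		}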
	
	I0617 12:01:57.807437  165698 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0617 12:01:57.817865  165698 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 12:01:57.817940  165698 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 12:01:57.829796  165698 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0617 12:01:57.847758  165698 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 12:01:57.866182  165698 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0617 12:01:57.884500  165698 ssh_runner.go:195] Run: grep 192.168.61.164	control-plane.minikube.internal$ /etc/hosts
	I0617 12:01:57.888852  165698 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.164	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:01:57.902176  165698 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:01:55.138418  165060 pod_ready.go:102] pod "etcd-embed-certs-136195" in "kube-system" namespace has status "Ready":"False"
	I0617 12:01:55.641014  165060 pod_ready.go:92] pod "etcd-embed-certs-136195" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:55.641047  165060 pod_ready.go:81] duration metric: took 2.509383461s for pod "etcd-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:55.641061  165060 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.151759  165060 pod_ready.go:92] pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:56.151788  165060 pod_ready.go:81] duration metric: took 510.718192ms for pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.152027  165060 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.157234  165060 pod_ready.go:92] pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:56.157260  165060 pod_ready.go:81] duration metric: took 5.220069ms for pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.157273  165060 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-25d5n" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.161767  165060 pod_ready.go:92] pod "kube-proxy-25d5n" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:56.161787  165060 pod_ready.go:81] duration metric: took 4.50732ms for pod "kube-proxy-25d5n" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.161796  165060 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.717763  165060 pod_ready.go:92] pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:56.717865  165060 pod_ready.go:81] duration metric: took 556.058292ms for pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.717892  165060 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:58.249594  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:58.250033  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:58.250069  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:58.250019  166802 retry.go:31] will retry after 2.154567648s: waiting for machine to come up
	I0617 12:02:00.406269  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:00.406668  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:02:00.406702  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:02:00.406615  166802 retry.go:31] will retry after 2.065044206s: waiting for machine to come up
	I0617 12:01:58.049361  165698 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:01:58.067893  165698 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661 for IP: 192.168.61.164
	I0617 12:01:58.067924  165698 certs.go:194] generating shared ca certs ...
	I0617 12:01:58.067945  165698 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:01:58.068162  165698 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 12:01:58.068221  165698 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 12:01:58.068236  165698 certs.go:256] generating profile certs ...
	I0617 12:01:58.068352  165698 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/client.key
	I0617 12:01:58.068438  165698 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/apiserver.key.6c1f259c
	I0617 12:01:58.068493  165698 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/proxy-client.key
	I0617 12:01:58.068647  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 12:01:58.068690  165698 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 12:01:58.068704  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 12:01:58.068743  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 12:01:58.068790  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 12:01:58.068824  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 12:01:58.068877  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:01:58.069548  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 12:01:58.109048  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 12:01:58.134825  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 12:01:58.159910  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 12:01:58.191108  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0617 12:01:58.217407  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 12:01:58.242626  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 12:01:58.267261  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0617 12:01:58.291562  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 12:01:58.321848  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 12:01:58.352361  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 12:01:58.379343  165698 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 12:01:58.399146  165698 ssh_runner.go:195] Run: openssl version
	I0617 12:01:58.405081  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 12:01:58.415471  165698 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:58.420046  165698 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:58.420099  165698 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:58.425886  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 12:01:58.436575  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 12:01:58.447166  165698 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 12:01:58.451523  165698 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 12:01:58.451582  165698 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 12:01:58.457670  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
	I0617 12:01:58.468667  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 12:01:58.479095  165698 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 12:01:58.483744  165698 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 12:01:58.483796  165698 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 12:01:58.489520  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
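	The three blocks above install each CA into /usr/share/ca-certificates and then symlink it as /etc/ssl/certs/<subject-hash>.0 (for example b5213941.0 for minikubeCA), which is the hashed-directory name OpenSSL uses for lookups; the hash comes from `openssl x509 -hash -noout`. A rough sketch of that convention in Go, shelling out to openssl (helper name hypothetical, paths taken from the log):
	
		package main
	
		import (
			"fmt"
			"os"
			"os/exec"
			"path/filepath"
			"strings"
		)
	
		// linkForOpenSSL symlinks certPath into dir under "<subject-hash>.0" so
		// OpenSSL's hashed-directory lookup can find it. Sketch only.
		func linkForOpenSSL(certPath, dir string) error {
			out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
			if err != nil {
				return err
			}
			link := filepath.Join(dir, strings.TrimSpace(string(out))+".0")
			_ = os.Remove(link) // replace any existing link, mirroring `ln -fs`
			return os.Symlink(certPath, link)
		}
	
		func main() {
			if err := linkForOpenSSL("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}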
	I0617 12:01:58.500298  165698 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 12:01:58.504859  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 12:01:58.510619  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 12:01:58.516819  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 12:01:58.522837  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 12:01:58.528736  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 12:01:58.534585  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
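	The -checkend 86400 calls above ask openssl whether each control-plane certificate expires within the next 24 hours (86400 seconds) before the existing certs are reused. A rough standalone equivalent in Go (helper name hypothetical, path taken from the log):
	
		package main
	
		import (
			"crypto/x509"
			"encoding/pem"
			"fmt"
			"os"
			"time"
		)
	
		// expiresWithin reports whether the PEM certificate at path expires within d,
		// mirroring `openssl x509 -checkend`. Sketch only, not minikube code.
		func expiresWithin(path string, d time.Duration) (bool, error) {
			data, err := os.ReadFile(path)
			if err != nil {
				return false, err
			}
			block, _ := pem.Decode(data)
			if block == nil {
				return false, fmt.Errorf("no PEM block in %s", path)
			}
			cert, err := x509.ParseCertificate(block.Bytes)
			if err != nil {
				return false, err
			}
			return time.Now().Add(d).After(cert.NotAfter), nil
		}
	
		func main() {
			soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
			if err != nil {
				fmt.Fprintln(os.Stderr, err)
				os.Exit(1)
			}
			fmt.Println("expires within 24h:", soon)
		}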
	I0617 12:01:58.540464  165698 kubeadm.go:391] StartCluster: {Name:old-k8s-version-003661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-003661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.164 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:01:58.540549  165698 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 12:01:58.540624  165698 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:01:58.583638  165698 cri.go:89] found id: ""
	I0617 12:01:58.583724  165698 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0617 12:01:58.594266  165698 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0617 12:01:58.594290  165698 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0617 12:01:58.594295  165698 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0617 12:01:58.594354  165698 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0617 12:01:58.604415  165698 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0617 12:01:58.605367  165698 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-003661" does not appear in /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 12:01:58.605949  165698 kubeconfig.go:62] /home/jenkins/minikube-integration/19084-112967/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-003661" cluster setting kubeconfig missing "old-k8s-version-003661" context setting]
	I0617 12:01:58.606833  165698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/kubeconfig: {Name:mkf81bd1831c0194f784e5c176b265c5061bea5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:01:58.662621  165698 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0617 12:01:58.673813  165698 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.164
	I0617 12:01:58.673848  165698 kubeadm.go:1154] stopping kube-system containers ...
	I0617 12:01:58.673863  165698 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0617 12:01:58.673907  165698 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:01:58.712607  165698 cri.go:89] found id: ""
	I0617 12:01:58.712703  165698 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0617 12:01:58.731676  165698 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:01:58.741645  165698 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:01:58.741666  165698 kubeadm.go:156] found existing configuration files:
	
	I0617 12:01:58.741709  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:01:58.750871  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:01:58.750931  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:01:58.760545  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:01:58.769701  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:01:58.769776  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:01:58.779348  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:01:58.788507  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:01:58.788566  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:01:58.799220  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:01:58.808403  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:01:58.808468  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 12:01:58.818169  165698 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:01:58.828079  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:58.962164  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:59.679319  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:59.903216  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:00.026243  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:00.126201  165698 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:02:00.126314  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:00.627227  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:01.126836  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:01.626524  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:02.126619  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:02.626434  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:01:58.727229  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:01.226021  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:02.473035  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:02.473477  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:02:02.473505  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:02:02.473458  166802 retry.go:31] will retry after 3.132988331s: waiting for machine to come up
	I0617 12:02:05.607981  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:05.608354  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:02:05.608391  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:02:05.608310  166802 retry.go:31] will retry after 3.312972752s: waiting for machine to come up
	I0617 12:02:03.126687  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:03.626469  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:04.126347  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:04.626548  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:05.127142  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:05.626937  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:06.126479  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:06.626466  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:07.126806  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:07.626814  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
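	The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above are the apiserver wait loop polling roughly every 500ms after the control plane is restarted. A minimal standalone sketch of that kind of wait loop (function name hypothetical, not the actual minikube implementation):
	
		package main
	
		import (
			"context"
			"fmt"
			"os/exec"
			"time"
		)
	
		// waitForProcess polls `pgrep -xnf pattern` until it matches or ctx expires.
		func waitForProcess(ctx context.Context, pattern string, interval time.Duration) error {
			ticker := time.NewTicker(interval)
			defer ticker.Stop()
			for {
				if err := exec.CommandContext(ctx, "pgrep", "-xnf", pattern).Run(); err == nil {
					return nil // a matching process exists
				}
				select {
				case <-ctx.Done():
					return fmt.Errorf("timed out waiting for %q: %w", pattern, ctx.Err())
				case <-ticker.C:
				}
			}
		}
	
		func main() {
			ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
			defer cancel()
			if err := waitForProcess(ctx, "kube-apiserver.*minikube.*", 500*time.Millisecond); err != nil {
				fmt.Println(err)
				return
			}
			fmt.Println("kube-apiserver process is up")
		}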
	I0617 12:02:03.724216  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:06.224335  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:08.224842  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:10.217135  164809 start.go:364] duration metric: took 54.298812889s to acquireMachinesLock for "no-preload-152830"
	I0617 12:02:10.217192  164809 start.go:96] Skipping create...Using existing machine configuration
	I0617 12:02:10.217204  164809 fix.go:54] fixHost starting: 
	I0617 12:02:10.217633  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:10.217673  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:10.238636  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44149
	I0617 12:02:10.239091  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:10.239596  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:02:10.239622  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:10.239997  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:10.240214  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:10.240397  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetState
	I0617 12:02:10.242141  164809 fix.go:112] recreateIfNeeded on no-preload-152830: state=Stopped err=<nil>
	I0617 12:02:10.242162  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	W0617 12:02:10.242324  164809 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 12:02:10.244888  164809 out.go:177] * Restarting existing kvm2 VM for "no-preload-152830" ...
	I0617 12:02:08.922547  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:08.922966  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Found IP for machine: 192.168.50.125
	I0617 12:02:08.922996  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Reserving static IP address...
	I0617 12:02:08.923013  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has current primary IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:08.923437  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-991309", mac: "52:54:00:4e:6e:f5", ip: "192.168.50.125"} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:08.923484  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Reserved static IP address: 192.168.50.125
	I0617 12:02:08.923514  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | skip adding static IP to network mk-default-k8s-diff-port-991309 - found existing host DHCP lease matching {name: "default-k8s-diff-port-991309", mac: "52:54:00:4e:6e:f5", ip: "192.168.50.125"}
	I0617 12:02:08.923533  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Getting to WaitForSSH function...
	I0617 12:02:08.923550  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for SSH to be available...
	I0617 12:02:08.925667  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:08.926017  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:08.926050  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:08.926203  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Using SSH client type: external
	I0617 12:02:08.926228  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa (-rw-------)
	I0617 12:02:08.926269  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 12:02:08.926290  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | About to run SSH command:
	I0617 12:02:08.926316  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | exit 0
	I0617 12:02:09.051973  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | SSH cmd err, output: <nil>: 
	I0617 12:02:09.052329  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetConfigRaw
	I0617 12:02:09.052946  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetIP
	I0617 12:02:09.055156  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.055509  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.055541  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.055748  166103 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/config.json ...
	I0617 12:02:09.055940  166103 machine.go:94] provisionDockerMachine start ...
	I0617 12:02:09.055960  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:09.056162  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.058451  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.058826  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.058860  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.058961  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.059155  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.059289  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.059440  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.059583  166103 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:09.059796  166103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0617 12:02:09.059813  166103 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 12:02:09.163974  166103 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0617 12:02:09.164020  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetMachineName
	I0617 12:02:09.164281  166103 buildroot.go:166] provisioning hostname "default-k8s-diff-port-991309"
	I0617 12:02:09.164312  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetMachineName
	I0617 12:02:09.164499  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.167194  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.167606  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.167632  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.167856  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.168097  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.168285  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.168414  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.168571  166103 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:09.168795  166103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0617 12:02:09.168811  166103 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-991309 && echo "default-k8s-diff-port-991309" | sudo tee /etc/hostname
	I0617 12:02:09.290435  166103 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-991309
	
	I0617 12:02:09.290470  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.293538  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.293879  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.293902  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.294132  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.294361  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.294574  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.294753  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.294943  166103 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:09.295188  166103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0617 12:02:09.295209  166103 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-991309' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-991309/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-991309' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 12:02:09.408702  166103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 12:02:09.408736  166103 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 12:02:09.408777  166103 buildroot.go:174] setting up certificates
	I0617 12:02:09.408789  166103 provision.go:84] configureAuth start
	I0617 12:02:09.408798  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetMachineName
	I0617 12:02:09.409122  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetIP
	I0617 12:02:09.411936  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.412304  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.412335  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.412522  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.414598  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.414914  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.414942  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.415054  166103 provision.go:143] copyHostCerts
	I0617 12:02:09.415121  166103 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 12:02:09.415132  166103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 12:02:09.415182  166103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 12:02:09.415264  166103 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 12:02:09.415271  166103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 12:02:09.415290  166103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 12:02:09.415344  166103 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 12:02:09.415353  166103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 12:02:09.415378  166103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 12:02:09.415439  166103 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-991309 san=[127.0.0.1 192.168.50.125 default-k8s-diff-port-991309 localhost minikube]
	I0617 12:02:09.534010  166103 provision.go:177] copyRemoteCerts
	I0617 12:02:09.534082  166103 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 12:02:09.534121  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.536707  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.537143  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.537176  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.537352  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.537516  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.537687  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.537840  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:09.622292  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0617 12:02:09.652653  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0617 12:02:09.676801  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 12:02:09.700701  166103 provision.go:87] duration metric: took 291.898478ms to configureAuth
	I0617 12:02:09.700734  166103 buildroot.go:189] setting minikube options for container-runtime
	I0617 12:02:09.700931  166103 config.go:182] Loaded profile config "default-k8s-diff-port-991309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:02:09.701023  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.703710  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.704138  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.704171  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.704330  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.704537  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.704730  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.704895  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.705058  166103 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:09.705243  166103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0617 12:02:09.705262  166103 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 12:02:09.974077  166103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 12:02:09.974109  166103 machine.go:97] duration metric: took 918.156221ms to provisionDockerMachine
	I0617 12:02:09.974120  166103 start.go:293] postStartSetup for "default-k8s-diff-port-991309" (driver="kvm2")
	I0617 12:02:09.974131  166103 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 12:02:09.974155  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:09.974502  166103 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 12:02:09.974544  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.977677  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.978073  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.978097  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.978225  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.978407  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.978583  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.978734  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:10.067068  166103 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 12:02:10.071843  166103 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 12:02:10.071870  166103 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 12:02:10.071934  166103 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 12:02:10.072024  166103 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 12:02:10.072128  166103 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 12:02:10.082041  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:02:10.107855  166103 start.go:296] duration metric: took 133.717924ms for postStartSetup
	I0617 12:02:10.107903  166103 fix.go:56] duration metric: took 19.607369349s for fixHost
	I0617 12:02:10.107932  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:10.110742  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.111135  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:10.111169  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.111294  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:10.111527  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:10.111674  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:10.111861  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:10.111980  166103 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:10.112205  166103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0617 12:02:10.112220  166103 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0617 12:02:10.216945  166103 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718625730.186446687
	
	I0617 12:02:10.216973  166103 fix.go:216] guest clock: 1718625730.186446687
	I0617 12:02:10.216983  166103 fix.go:229] Guest: 2024-06-17 12:02:10.186446687 +0000 UTC Remote: 2024-06-17 12:02:10.107909348 +0000 UTC m=+152.716337101 (delta=78.537339ms)
	I0617 12:02:10.217033  166103 fix.go:200] guest clock delta is within tolerance: 78.537339ms
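	The guest-clock check above compares the VM's `date +%s.%N` output against the host-side timestamp and accepts the skew when it is within minikube's tolerance. Redoing the arithmetic on the logged values (the 1s bound below is an assumption for illustration; the real threshold lives in minikube's fix logic):
	
		package main
	
		import (
			"fmt"
			"time"
		)
	
		func main() {
			// Values copied from the log lines above.
			guest := time.Unix(1718625730, 186446687)                        // guest clock from `date +%s.%N`
			remote := time.Date(2024, 6, 17, 12, 2, 10, 107909348, time.UTC) // host-side timestamp
	
			delta := guest.Sub(remote)    // 78.537339ms, matching the logged delta
			const tolerance = time.Second // assumed bound, for illustration only
			fmt.Printf("delta=%v, within tolerance: %v\n", delta, delta > -tolerance && delta < tolerance)
		}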
	I0617 12:02:10.217039  166103 start.go:83] releasing machines lock for "default-k8s-diff-port-991309", held for 19.716554323s
	I0617 12:02:10.217073  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:10.217363  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetIP
	I0617 12:02:10.220429  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.220897  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:10.220927  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.221083  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:10.221655  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:10.221870  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:10.221965  166103 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 12:02:10.222026  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:10.222094  166103 ssh_runner.go:195] Run: cat /version.json
	I0617 12:02:10.222122  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:10.225337  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.225673  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.225710  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:10.225730  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.226015  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:10.226172  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:10.226202  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:10.226242  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.226363  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:10.226447  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:10.226508  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:10.226591  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:10.226687  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:10.226840  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:10.334316  166103 ssh_runner.go:195] Run: systemctl --version
	I0617 12:02:10.340584  166103 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 12:02:10.489359  166103 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 12:02:10.497198  166103 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 12:02:10.497267  166103 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 12:02:10.517001  166103 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 12:02:10.517032  166103 start.go:494] detecting cgroup driver to use...
	I0617 12:02:10.517110  166103 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 12:02:10.536520  166103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 12:02:10.550478  166103 docker.go:217] disabling cri-docker service (if available) ...
	I0617 12:02:10.550542  166103 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 12:02:10.564437  166103 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 12:02:10.578554  166103 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 12:02:10.710346  166103 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 12:02:10.891637  166103 docker.go:233] disabling docker service ...
	I0617 12:02:10.891694  166103 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 12:02:10.908300  166103 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 12:02:10.921663  166103 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 12:02:11.062715  166103 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 12:02:11.201061  166103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 12:02:11.216120  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 12:02:11.237213  166103 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0617 12:02:11.237286  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.248171  166103 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 12:02:11.248238  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.259159  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.270217  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.280841  166103 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 12:02:11.291717  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.302084  166103 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.319559  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
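The sed runs above flip individual keys in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, default sysctls). A rough equivalent of one such edit done with a Go regexp instead of sed over SSH; the file keys and values come from the log, everything else is illustrative:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // setKey rewrites a `key = value` line in a TOML-ish config, mirroring
    // the `sed -i 's|^.*pause_image = .*$|...|'` calls in the log.
    func setKey(conf []byte, key, value string) []byte {
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
    }

    func main() {
    	conf := []byte("pause_image = \"old\"\ncgroup_manager = \"systemd\"\n")
    	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
    	conf = setKey(conf, "cgroup_manager", "cgroupfs")
    	fmt.Print(string(conf))
    }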
	I0617 12:02:11.331992  166103 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 12:02:11.342435  166103 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 12:02:11.342494  166103 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 12:02:11.357436  166103 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 12:02:11.367406  166103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:02:11.493416  166103 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0617 12:02:11.629980  166103 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 12:02:11.630055  166103 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 12:02:11.636456  166103 start.go:562] Will wait 60s for crictl version
	I0617 12:02:11.636540  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:02:11.642817  166103 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 12:02:11.681563  166103 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 12:02:11.681655  166103 ssh_runner.go:195] Run: crio --version
	I0617 12:02:11.712576  166103 ssh_runner.go:195] Run: crio --version
	I0617 12:02:11.753826  166103 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0617 12:02:11.755256  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetIP
	I0617 12:02:11.758628  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:11.759006  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:11.759041  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:11.759252  166103 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0617 12:02:11.763743  166103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
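The two Runs above first grep /etc/hosts for a host.minikube.internal entry and, when it is missing or stale, rewrite the file with a fresh mapping appended. A small sketch of the same idea done directly in Go (the log does it with grep/echo over SSH); the path, IP and hostname are taken from the log, the rest is illustrative and only prints the result instead of writing it back:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry returns the hosts-file content with any old lines for
    // the given hostname dropped and a fresh "ip<TAB>hostname" line appended.
    func ensureHostsEntry(content, ip, hostname string) string {
    	var kept []string
    	for _, line := range strings.Split(content, "\n") {
    		if strings.HasSuffix(line, "\t"+hostname) {
    			continue // drop the old mapping, mirroring `grep -v`
    		}
    		if line != "" {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+hostname)
    	return strings.Join(kept, "\n") + "\n"
    }

    func main() {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Print(ensureHostsEntry(string(data), "192.168.50.1", "host.minikube.internal"))
    }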
	I0617 12:02:11.780286  166103 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-991309 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.1 ClusterName:default-k8s-diff-port-991309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 12:02:11.780455  166103 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 12:02:11.780528  166103 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:02:11.819396  166103 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0617 12:02:11.819481  166103 ssh_runner.go:195] Run: which lz4
	I0617 12:02:11.824047  166103 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0617 12:02:11.828770  166103 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0617 12:02:11.828807  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0617 12:02:08.127233  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:08.626498  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:09.126712  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:09.627284  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:10.126446  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:10.627249  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:11.126428  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:11.626638  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:12.127091  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:12.627361  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:10.226209  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:12.227824  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:10.246388  164809 main.go:141] libmachine: (no-preload-152830) Calling .Start
	I0617 12:02:10.246608  164809 main.go:141] libmachine: (no-preload-152830) Ensuring networks are active...
	I0617 12:02:10.247397  164809 main.go:141] libmachine: (no-preload-152830) Ensuring network default is active
	I0617 12:02:10.247789  164809 main.go:141] libmachine: (no-preload-152830) Ensuring network mk-no-preload-152830 is active
	I0617 12:02:10.248192  164809 main.go:141] libmachine: (no-preload-152830) Getting domain xml...
	I0617 12:02:10.248869  164809 main.go:141] libmachine: (no-preload-152830) Creating domain...
	I0617 12:02:11.500721  164809 main.go:141] libmachine: (no-preload-152830) Waiting to get IP...
	I0617 12:02:11.501614  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:11.502169  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:11.502254  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:11.502131  166976 retry.go:31] will retry after 281.343691ms: waiting for machine to come up
	I0617 12:02:11.785597  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:11.786047  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:11.786082  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:11.785983  166976 retry.go:31] will retry after 303.221815ms: waiting for machine to come up
	I0617 12:02:12.090367  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:12.090919  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:12.090945  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:12.090826  166976 retry.go:31] will retry after 422.250116ms: waiting for machine to come up
	I0617 12:02:12.514456  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:12.515026  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:12.515055  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:12.515001  166976 retry.go:31] will retry after 513.394077ms: waiting for machine to come up
	I0617 12:02:13.029811  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:13.030495  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:13.030522  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:13.030449  166976 retry.go:31] will retry after 596.775921ms: waiting for machine to come up
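The retry.go lines above show the kvm2 driver polling libvirt for the machine's IP and sleeping for a progressively longer, jittered interval between attempts. A rough sketch of that wait-for-IP pattern, with a hypothetical lookupIP stub standing in for the DHCP-lease query:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    var errNoIP = errors.New("no IP address yet")

    // lookupIP is a hypothetical stand-in for querying the libvirt DHCP leases.
    func lookupIP(attempt int) (string, error) {
    	if attempt < 5 {
    		return "", errNoIP
    	}
    	return "192.168.50.200", nil
    }

    func main() {
    	base := 250 * time.Millisecond
    	for attempt := 1; attempt <= 10; attempt++ {
    		ip, err := lookupIP(attempt)
    		if err == nil {
    			fmt.Println("machine is up at", ip)
    			return
    		}
    		// grow the wait a little each round and add jitter, as the log suggests
    		wait := time.Duration(attempt)*base + time.Duration(rand.Intn(200))*time.Millisecond
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    	}
    	fmt.Println("gave up waiting for machine")
    }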
	I0617 12:02:13.387031  166103 crio.go:462] duration metric: took 1.563017054s to copy over tarball
	I0617 12:02:13.387108  166103 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0617 12:02:15.664139  166103 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.276994761s)
	I0617 12:02:15.664177  166103 crio.go:469] duration metric: took 2.277117031s to extract the tarball
	I0617 12:02:15.664188  166103 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0617 12:02:15.703690  166103 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:02:15.757605  166103 crio.go:514] all images are preloaded for cri-o runtime.
	I0617 12:02:15.757634  166103 cache_images.go:84] Images are preloaded, skipping loading
	I0617 12:02:15.757644  166103 kubeadm.go:928] updating node { 192.168.50.125 8444 v1.30.1 crio true true} ...
	I0617 12:02:15.757784  166103 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-991309 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-991309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 12:02:15.757874  166103 ssh_runner.go:195] Run: crio config
	I0617 12:02:15.808350  166103 cni.go:84] Creating CNI manager for ""
	I0617 12:02:15.808380  166103 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:02:15.808397  166103 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 12:02:15.808434  166103 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.125 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-991309 NodeName:default-k8s-diff-port-991309 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0617 12:02:15.808633  166103 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.125
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-991309"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0617 12:02:15.808709  166103 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0617 12:02:15.818891  166103 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 12:02:15.818964  166103 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 12:02:15.828584  166103 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0617 12:02:15.846044  166103 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 12:02:15.862572  166103 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0617 12:02:15.880042  166103 ssh_runner.go:195] Run: grep 192.168.50.125	control-plane.minikube.internal$ /etc/hosts
	I0617 12:02:15.884470  166103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:02:15.897031  166103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:02:16.013826  166103 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:02:16.030366  166103 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309 for IP: 192.168.50.125
	I0617 12:02:16.030391  166103 certs.go:194] generating shared ca certs ...
	I0617 12:02:16.030408  166103 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:02:16.030590  166103 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 12:02:16.030650  166103 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 12:02:16.030668  166103 certs.go:256] generating profile certs ...
	I0617 12:02:16.030793  166103 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/client.key
	I0617 12:02:16.030876  166103 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/apiserver.key.02769a34
	I0617 12:02:16.030919  166103 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/proxy-client.key
	I0617 12:02:16.031024  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 12:02:16.031051  166103 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 12:02:16.031060  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 12:02:16.031080  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 12:02:16.031103  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 12:02:16.031122  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 12:02:16.031179  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:02:16.031991  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 12:02:16.066789  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 12:02:16.094522  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 12:02:16.119693  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 12:02:16.155810  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0617 12:02:16.186788  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 12:02:16.221221  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 12:02:16.248948  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0617 12:02:16.273404  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 12:02:16.296958  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 12:02:16.320047  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 12:02:16.349598  166103 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 12:02:16.367499  166103 ssh_runner.go:195] Run: openssl version
	I0617 12:02:16.373596  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 12:02:16.384778  166103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 12:02:16.389521  166103 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 12:02:16.389574  166103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 12:02:16.395523  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
	I0617 12:02:16.406357  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 12:02:16.417139  166103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 12:02:16.421629  166103 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 12:02:16.421679  166103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 12:02:16.427323  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 12:02:16.438649  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 12:02:16.450042  166103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:16.454587  166103 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:16.454636  166103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:16.460677  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 12:02:16.472886  166103 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 12:02:16.477630  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 12:02:16.483844  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 12:02:16.490123  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 12:02:16.497606  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 12:02:16.504066  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 12:02:16.510597  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
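Each `openssl x509 -checkend 86400` call above asks whether a control-plane certificate expires within the next 24 hours, which decides whether the certs need regenerating. The same check sketched in Go, parsing the PEM directly; the file path is copied from the log and the snippet only makes sense on a host that has that file:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM-encoded certificate expires before now+window.
    func expiresWithin(pemBytes []byte, window time.Duration) (bool, error) {
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	soon, err := expiresWithin(data, 24*time.Hour) // mirrors -checkend 86400
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }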
	I0617 12:02:16.518270  166103 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-991309 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.1 ClusterName:default-k8s-diff-port-991309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:02:16.518371  166103 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 12:02:16.518439  166103 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:02:16.569103  166103 cri.go:89] found id: ""
	I0617 12:02:16.569179  166103 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0617 12:02:16.580328  166103 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0617 12:02:16.580353  166103 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0617 12:02:16.580360  166103 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0617 12:02:16.580409  166103 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0617 12:02:16.591277  166103 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0617 12:02:16.592450  166103 kubeconfig.go:125] found "default-k8s-diff-port-991309" server: "https://192.168.50.125:8444"
	I0617 12:02:16.594770  166103 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0617 12:02:16.605669  166103 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.125
	I0617 12:02:16.605728  166103 kubeadm.go:1154] stopping kube-system containers ...
	I0617 12:02:16.605745  166103 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0617 12:02:16.605810  166103 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:02:16.654529  166103 cri.go:89] found id: ""
	I0617 12:02:16.654620  166103 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0617 12:02:16.672923  166103 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:02:16.683485  166103 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:02:16.683514  166103 kubeadm.go:156] found existing configuration files:
	
	I0617 12:02:16.683576  166103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0617 12:02:16.693533  166103 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:02:16.693614  166103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:02:16.703670  166103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0617 12:02:16.716352  166103 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:02:16.716413  166103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:02:16.729336  166103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0617 12:02:16.739183  166103 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:02:16.739249  166103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:02:16.748978  166103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0617 12:02:16.758195  166103 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:02:16.758262  166103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 12:02:16.767945  166103 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:02:16.777773  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:16.919605  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:13.126836  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:13.626460  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:14.127261  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:14.627161  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:15.126580  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:15.627082  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:16.127163  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:16.626524  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:17.126469  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:17.626488  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:14.728717  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:17.225452  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:13.629097  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:13.629723  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:13.629826  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:13.629705  166976 retry.go:31] will retry after 588.18471ms: waiting for machine to come up
	I0617 12:02:14.219111  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:14.219672  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:14.219704  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:14.219611  166976 retry.go:31] will retry after 889.359727ms: waiting for machine to come up
	I0617 12:02:15.110916  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:15.111528  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:15.111559  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:15.111473  166976 retry.go:31] will retry after 1.139454059s: waiting for machine to come up
	I0617 12:02:16.252051  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:16.252601  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:16.252636  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:16.252534  166976 retry.go:31] will retry after 1.189357648s: waiting for machine to come up
	I0617 12:02:17.443845  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:17.444370  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:17.444403  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:17.444310  166976 retry.go:31] will retry after 1.614769478s: waiting for machine to come up
	I0617 12:02:18.068811  166103 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.149162388s)
	I0617 12:02:18.068870  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:18.301209  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:18.362153  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:18.454577  166103 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:02:18.454674  166103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:18.954929  166103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:19.454795  166103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:19.505453  166103 api_server.go:72] duration metric: took 1.050874914s to wait for apiserver process to appear ...
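Between the kubeadm phases the tool repeatedly runs `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every half second until the process shows up, then records how long that took. A compact local sketch of the same wait loop (the real code runs the command over SSH and with sudo):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForProcess polls pgrep until the pattern matches or the timeout expires.
    func waitForProcess(pattern string, timeout time.Duration) (time.Duration, error) {
    	start := time.Now()
    	for time.Since(start) < timeout {
    		// pgrep exits non-zero when nothing matches, so err == nil means the process exists
    		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
    			return time.Since(start), nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return time.Since(start), fmt.Errorf("process matching %q did not appear within %v", pattern, timeout)
    }

    func main() {
    	took, err := waitForProcess("kube-apiserver.*minikube.*", 60*time.Second)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Printf("duration metric: took %v to wait for apiserver process\n", took)
    }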
	I0617 12:02:19.505490  166103 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:02:19.505518  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:19.506056  166103 api_server.go:269] stopped: https://192.168.50.125:8444/healthz: Get "https://192.168.50.125:8444/healthz": dial tcp 192.168.50.125:8444: connect: connection refused
	I0617 12:02:20.005681  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:22.216162  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:02:22.216214  166103 api_server.go:103] status: https://192.168.50.125:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:02:22.216234  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:22.239561  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:02:22.239635  166103 api_server.go:103] status: https://192.168.50.125:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:02:18.126897  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:18.627145  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:19.126724  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:19.626498  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:20.126389  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:20.627190  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:21.126480  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:21.627210  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:22.127273  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:22.626691  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:19.227344  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:21.725689  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:19.061035  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:19.061555  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:19.061588  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:19.061520  166976 retry.go:31] will retry after 2.385838312s: waiting for machine to come up
	I0617 12:02:21.448745  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:21.449239  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:21.449266  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:21.449208  166976 retry.go:31] will retry after 3.308788046s: waiting for machine to come up
	I0617 12:02:22.505636  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:22.509888  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:02:22.509916  166103 api_server.go:103] status: https://192.168.50.125:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:02:23.006285  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:23.011948  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:02:23.011983  166103 api_server.go:103] status: https://192.168.50.125:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:02:23.505640  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:23.510358  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 200:
	ok
	I0617 12:02:23.516663  166103 api_server.go:141] control plane version: v1.30.1
	I0617 12:02:23.516686  166103 api_server.go:131] duration metric: took 4.011188976s to wait for apiserver health ...
	I0617 12:02:23.516694  166103 cni.go:84] Creating CNI manager for ""
	I0617 12:02:23.516700  166103 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:02:23.518498  166103 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0617 12:02:23.519722  166103 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0617 12:02:23.530145  166103 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0617 12:02:23.552805  166103 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:02:23.564825  166103 system_pods.go:59] 8 kube-system pods found
	I0617 12:02:23.564853  166103 system_pods.go:61] "coredns-7db6d8ff4d-mnw24" [1e6c4ff3-f0dc-43da-abd8-baaed7dca40c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0617 12:02:23.564863  166103 system_pods.go:61] "etcd-default-k8s-diff-port-991309" [820a4f27-cf83-4edb-a2ea-edba6673d851] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0617 12:02:23.564871  166103 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-991309" [26e6c19d-6f70-4924-83f5-563c8508c9e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0617 12:02:23.564877  166103 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-991309" [01e7c468-98a6-48f3-a158-59e97fa8279c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0617 12:02:23.564885  166103 system_pods.go:61] "kube-proxy-jn5kp" [d6935148-7ee8-4655-8327-9f1ee4c933de] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0617 12:02:23.564894  166103 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-991309" [53ecd22c-05cf-48a5-b7e5-925392085f7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0617 12:02:23.564899  166103 system_pods.go:61] "metrics-server-569cc877fc-n2svp" [5b637d97-3183-4324-98cf-dd69a2968578] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:02:23.564908  166103 system_pods.go:61] "storage-provisioner" [92b20aec-29c2-4256-86be-7f58f66585dd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0617 12:02:23.564913  166103 system_pods.go:74] duration metric: took 12.089276ms to wait for pod list to return data ...
	I0617 12:02:23.564919  166103 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:02:23.573455  166103 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:02:23.573480  166103 node_conditions.go:123] node cpu capacity is 2
	I0617 12:02:23.573492  166103 node_conditions.go:105] duration metric: took 8.568721ms to run NodePressure ...
	I0617 12:02:23.573509  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:23.918292  166103 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0617 12:02:23.922992  166103 kubeadm.go:733] kubelet initialised
	I0617 12:02:23.923019  166103 kubeadm.go:734] duration metric: took 4.69627ms waiting for restarted kubelet to initialise ...
	I0617 12:02:23.923027  166103 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:02:23.927615  166103 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:23.932203  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.932225  166103 pod_ready.go:81] duration metric: took 4.590359ms for pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:23.932233  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.932239  166103 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:23.936802  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.936825  166103 pod_ready.go:81] duration metric: took 4.579036ms for pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:23.936835  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.936840  166103 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:23.942877  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.942903  166103 pod_ready.go:81] duration metric: took 6.055748ms for pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:23.942927  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.942935  166103 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:23.955830  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.955851  166103 pod_ready.go:81] duration metric: took 12.903911ms for pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:23.955861  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.955869  166103 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jn5kp" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:24.356654  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "kube-proxy-jn5kp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:24.356682  166103 pod_ready.go:81] duration metric: took 400.805294ms for pod "kube-proxy-jn5kp" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:24.356692  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "kube-proxy-jn5kp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:24.356699  166103 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:24.765108  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:24.765133  166103 pod_ready.go:81] duration metric: took 408.42568ms for pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:24.765145  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:24.765152  166103 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:25.156898  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:25.156927  166103 pod_ready.go:81] duration metric: took 391.769275ms for pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:25.156939  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:25.156946  166103 pod_ready.go:38] duration metric: took 1.233911476s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:02:25.156968  166103 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0617 12:02:25.170925  166103 ops.go:34] apiserver oom_adj: -16
	I0617 12:02:25.170963  166103 kubeadm.go:591] duration metric: took 8.590593327s to restartPrimaryControlPlane
	I0617 12:02:25.170976  166103 kubeadm.go:393] duration metric: took 8.652716269s to StartCluster
	I0617 12:02:25.170998  166103 settings.go:142] acquiring lock: {Name:mkf6da6d5dcdf32cef469c2b75da17d11fa1e39e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:02:25.171111  166103 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 12:02:25.173919  166103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/kubeconfig: {Name:mkf81bd1831c0194f784e5c176b265c5061bea5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:02:25.174286  166103 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.125 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 12:02:25.176186  166103 out.go:177] * Verifying Kubernetes components...
	I0617 12:02:25.174347  166103 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0617 12:02:25.174528  166103 config.go:182] Loaded profile config "default-k8s-diff-port-991309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:02:25.177622  166103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:02:25.177632  166103 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-991309"
	I0617 12:02:25.177670  166103 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-991309"
	W0617 12:02:25.177684  166103 addons.go:243] addon metrics-server should already be in state true
	I0617 12:02:25.177721  166103 host.go:66] Checking if "default-k8s-diff-port-991309" exists ...
	I0617 12:02:25.177622  166103 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-991309"
	I0617 12:02:25.177789  166103 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-991309"
	W0617 12:02:25.177806  166103 addons.go:243] addon storage-provisioner should already be in state true
	I0617 12:02:25.177837  166103 host.go:66] Checking if "default-k8s-diff-port-991309" exists ...
	I0617 12:02:25.177628  166103 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-991309"
	I0617 12:02:25.177875  166103 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-991309"
	I0617 12:02:25.178173  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.178202  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.178251  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.178282  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.178299  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.178318  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.198817  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32781
	I0617 12:02:25.199064  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36763
	I0617 12:02:25.199513  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39825
	I0617 12:02:25.199902  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.199919  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.200633  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.201080  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.201110  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.201270  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.201286  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.201415  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.201427  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.201482  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.201786  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.201845  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.202268  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.202637  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.202663  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetState
	I0617 12:02:25.202989  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.203038  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.206439  166103 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-991309"
	W0617 12:02:25.206462  166103 addons.go:243] addon default-storageclass should already be in state true
	I0617 12:02:25.206492  166103 host.go:66] Checking if "default-k8s-diff-port-991309" exists ...
	I0617 12:02:25.206875  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.206921  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.218501  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37189
	I0617 12:02:25.218532  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34089
	I0617 12:02:25.218912  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.218986  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.219410  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.219429  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.219545  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.219561  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.219917  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.219920  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.220110  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetState
	I0617 12:02:25.220111  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetState
	I0617 12:02:25.221839  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:25.223920  166103 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0617 12:02:25.225213  166103 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0617 12:02:25.225232  166103 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0617 12:02:25.225260  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:25.224029  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:25.228780  166103 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:25.227545  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46073
	I0617 12:02:25.230084  166103 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:02:25.230100  166103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0617 12:02:25.230113  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:25.228465  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.229054  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:25.230179  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:25.229303  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.230215  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.230371  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:25.230542  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:25.230674  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:25.230723  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.230737  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.231150  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.231772  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.231802  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.234036  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.234476  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:25.234494  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.234755  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:25.234919  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:25.235079  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:25.235235  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:25.248352  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46349
	I0617 12:02:25.248851  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.249306  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.249330  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.249681  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.249873  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetState
	I0617 12:02:25.251282  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:25.251512  166103 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0617 12:02:25.251529  166103 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0617 12:02:25.251551  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:25.253963  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.254458  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:25.254484  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.254628  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:25.254941  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:25.255229  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:25.255385  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:25.391207  166103 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:02:25.411906  166103 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-991309" to be "Ready" ...
	I0617 12:02:25.476025  166103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0617 12:02:25.566470  166103 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0617 12:02:25.566500  166103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0617 12:02:25.593744  166103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:02:25.620336  166103 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0617 12:02:25.620371  166103 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0617 12:02:25.700009  166103 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:02:25.700048  166103 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0617 12:02:25.769841  166103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:02:25.782207  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:25.782240  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:25.782576  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Closing plugin on server side
	I0617 12:02:25.782597  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:25.782610  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:25.782623  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:25.782632  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:25.782888  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:25.782916  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:25.789639  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:25.789662  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:25.789921  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:25.789941  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:26.600819  166103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.007014283s)
	I0617 12:02:26.600883  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:26.600898  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:26.600902  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:26.600917  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:26.601253  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:26.601295  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:26.601305  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:26.601325  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:26.601342  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:26.601353  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:26.601366  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:26.601370  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:26.601571  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Closing plugin on server side
	I0617 12:02:26.601590  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:26.601600  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:26.601615  166103 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-991309"
	I0617 12:02:26.601626  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:26.601635  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Closing plugin on server side
	I0617 12:02:26.601638  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:26.604200  166103 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0617 12:02:26.605477  166103 addons.go:510] duration metric: took 1.431148263s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0617 12:02:27.415122  166103 node_ready.go:53] node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.126888  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:23.627274  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:24.127019  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:24.627337  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:25.126642  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:25.627064  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:26.126606  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:26.626803  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:27.126825  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:27.626799  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:24.223344  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:26.225129  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:24.760577  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:24.761063  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:24.761095  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:24.760999  166976 retry.go:31] will retry after 3.793168135s: waiting for machine to come up
	I0617 12:02:28.558153  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.558708  164809 main.go:141] libmachine: (no-preload-152830) Found IP for machine: 192.168.39.173
	I0617 12:02:28.558735  164809 main.go:141] libmachine: (no-preload-152830) Reserving static IP address...
	I0617 12:02:28.558751  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has current primary IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.559214  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "no-preload-152830", mac: "52:54:00:c0:1a:fb", ip: "192.168.39.173"} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.559248  164809 main.go:141] libmachine: (no-preload-152830) DBG | skip adding static IP to network mk-no-preload-152830 - found existing host DHCP lease matching {name: "no-preload-152830", mac: "52:54:00:c0:1a:fb", ip: "192.168.39.173"}
	I0617 12:02:28.559263  164809 main.go:141] libmachine: (no-preload-152830) Reserved static IP address: 192.168.39.173
	I0617 12:02:28.559278  164809 main.go:141] libmachine: (no-preload-152830) Waiting for SSH to be available...
	I0617 12:02:28.559295  164809 main.go:141] libmachine: (no-preload-152830) DBG | Getting to WaitForSSH function...
	I0617 12:02:28.562122  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.562453  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.562482  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.562678  164809 main.go:141] libmachine: (no-preload-152830) DBG | Using SSH client type: external
	I0617 12:02:28.562706  164809 main.go:141] libmachine: (no-preload-152830) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa (-rw-------)
	I0617 12:02:28.562739  164809 main.go:141] libmachine: (no-preload-152830) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.173 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 12:02:28.562753  164809 main.go:141] libmachine: (no-preload-152830) DBG | About to run SSH command:
	I0617 12:02:28.562770  164809 main.go:141] libmachine: (no-preload-152830) DBG | exit 0
	I0617 12:02:28.687683  164809 main.go:141] libmachine: (no-preload-152830) DBG | SSH cmd err, output: <nil>: 
	I0617 12:02:28.688021  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetConfigRaw
	I0617 12:02:28.688649  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetIP
	I0617 12:02:28.691248  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.691585  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.691609  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.691895  164809 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/config.json ...
	I0617 12:02:28.692109  164809 machine.go:94] provisionDockerMachine start ...
	I0617 12:02:28.692132  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:28.692371  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:28.694371  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.694738  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.694766  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.694942  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:28.695130  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.695309  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.695490  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:28.695695  164809 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:28.695858  164809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0617 12:02:28.695869  164809 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 12:02:28.803687  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0617 12:02:28.803726  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetMachineName
	I0617 12:02:28.803996  164809 buildroot.go:166] provisioning hostname "no-preload-152830"
	I0617 12:02:28.804031  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetMachineName
	I0617 12:02:28.804333  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:28.806959  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.807395  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.807424  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.807547  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:28.807725  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.807895  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.808057  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:28.808216  164809 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:28.808420  164809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0617 12:02:28.808436  164809 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-152830 && echo "no-preload-152830" | sudo tee /etc/hostname
	I0617 12:02:28.931222  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-152830
	
	I0617 12:02:28.931259  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:28.934188  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.934536  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.934564  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.934822  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:28.935048  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.935218  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.935353  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:28.935593  164809 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:28.935814  164809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0617 12:02:28.935837  164809 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-152830' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-152830/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-152830' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 12:02:29.054126  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 12:02:29.054156  164809 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 12:02:29.054173  164809 buildroot.go:174] setting up certificates
	I0617 12:02:29.054184  164809 provision.go:84] configureAuth start
	I0617 12:02:29.054195  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetMachineName
	I0617 12:02:29.054490  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetIP
	I0617 12:02:29.057394  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.057797  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.057830  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.057963  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:29.060191  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.060485  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.060514  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.060633  164809 provision.go:143] copyHostCerts
	I0617 12:02:29.060708  164809 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 12:02:29.060722  164809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 12:02:29.060796  164809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 12:02:29.060963  164809 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 12:02:29.060978  164809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 12:02:29.061003  164809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 12:02:29.061065  164809 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 12:02:29.061072  164809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 12:02:29.061090  164809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 12:02:29.061139  164809 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.no-preload-152830 san=[127.0.0.1 192.168.39.173 localhost minikube no-preload-152830]
	I0617 12:02:29.321179  164809 provision.go:177] copyRemoteCerts
	I0617 12:02:29.321232  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 12:02:29.321256  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:29.324217  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.324612  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.324642  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.324836  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:29.325043  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.325227  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:29.325386  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:02:29.410247  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 12:02:29.435763  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0617 12:02:29.462900  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0617 12:02:29.491078  164809 provision.go:87] duration metric: took 436.876068ms to configureAuth
	I0617 12:02:29.491120  164809 buildroot.go:189] setting minikube options for container-runtime
	I0617 12:02:29.491377  164809 config.go:182] Loaded profile config "no-preload-152830": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:02:29.491522  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:29.494581  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.495019  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.495052  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.495245  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:29.495555  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.495766  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.495897  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:29.496068  164809 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:29.496275  164809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0617 12:02:29.496296  164809 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 12:02:29.774692  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 12:02:29.774730  164809 machine.go:97] duration metric: took 1.082604724s to provisionDockerMachine
	I0617 12:02:29.774748  164809 start.go:293] postStartSetup for "no-preload-152830" (driver="kvm2")
	I0617 12:02:29.774765  164809 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 12:02:29.774785  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:29.775181  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 12:02:29.775220  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:29.778574  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.778959  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.778988  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.779154  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:29.779351  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.779575  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:29.779750  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:02:29.866959  164809 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 12:02:29.871319  164809 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 12:02:29.871348  164809 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 12:02:29.871425  164809 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 12:02:29.871535  164809 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 12:02:29.871648  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 12:02:29.881995  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:02:29.907614  164809 start.go:296] duration metric: took 132.84708ms for postStartSetup
	I0617 12:02:29.907669  164809 fix.go:56] duration metric: took 19.690465972s for fixHost
	I0617 12:02:29.907695  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:29.910226  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.910617  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.910644  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.910811  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:29.911162  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.911377  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.911571  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:29.911772  164809 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:29.911961  164809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0617 12:02:29.911972  164809 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0617 12:02:30.021051  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718625749.993041026
	
	I0617 12:02:30.021079  164809 fix.go:216] guest clock: 1718625749.993041026
	I0617 12:02:30.021088  164809 fix.go:229] Guest: 2024-06-17 12:02:29.993041026 +0000 UTC Remote: 2024-06-17 12:02:29.907674102 +0000 UTC m=+356.579226401 (delta=85.366924ms)
	I0617 12:02:30.021113  164809 fix.go:200] guest clock delta is within tolerance: 85.366924ms
	I0617 12:02:30.021120  164809 start.go:83] releasing machines lock for "no-preload-152830", held for 19.803953246s
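
The fix.go lines above compare the guest VM's clock against the host's before the machine lock is released; the restart is only treated as healthy when the delta stays inside a tolerance. A minimal Go sketch of that comparison, assuming a hypothetical 2-second tolerance (the log reports the ~85ms delta but not the threshold itself):

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock is close enough to the host
// clock that no resync is needed, mirroring the delta check logged above.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(85 * time.Millisecond)                  // roughly the 85.366924ms delta reported above
	delta, ok := withinTolerance(guest, host, 2*time.Second)  // 2s tolerance is an assumption, not minikube's value
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
}
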
	I0617 12:02:30.021148  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:30.021403  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetIP
	I0617 12:02:30.024093  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.024600  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:30.024633  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.024830  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:30.025380  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:30.025552  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:30.025623  164809 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 12:02:30.025668  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:30.025767  164809 ssh_runner.go:195] Run: cat /version.json
	I0617 12:02:30.025798  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:30.028656  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.028826  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.029037  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:30.029068  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.029294  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:30.029336  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:30.029366  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.029528  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:30.029536  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:30.029764  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:30.029776  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:30.029957  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:02:30.029984  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:30.030161  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:02:30.135901  164809 ssh_runner.go:195] Run: systemctl --version
	I0617 12:02:30.142668  164809 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 12:02:30.296485  164809 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 12:02:30.302789  164809 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 12:02:30.302856  164809 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 12:02:30.319775  164809 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
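
The cni.go step above globs /etc/cni/net.d for bridge/podman configs and renames them with a .mk_disabled suffix so only the CNI config minikube manages stays active. A minimal local sketch of that rename in Go (run as a user that can write /etc/cni/net.d; the log performs it through sudo over SSH):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Mirror the logged `find /etc/cni/net.d ... -name *bridge* -or -name *podman*
	// ... mv {} {}.mk_disabled`: glob the two patterns and rename each match aside.
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pattern)
		for _, path := range matches {
			if strings.HasSuffix(path, ".mk_disabled") {
				continue // already disabled on a previous run
			}
			if err := os.Rename(path, path+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, "rename failed:", err)
				continue
			}
			fmt.Println("disabled", path)
		}
	}
}
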
	I0617 12:02:30.319793  164809 start.go:494] detecting cgroup driver to use...
	I0617 12:02:30.319894  164809 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 12:02:30.335498  164809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 12:02:30.349389  164809 docker.go:217] disabling cri-docker service (if available) ...
	I0617 12:02:30.349427  164809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 12:02:30.363086  164809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 12:02:30.377383  164809 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 12:02:30.499956  164809 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 12:02:30.644098  164809 docker.go:233] disabling docker service ...
	I0617 12:02:30.644178  164809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 12:02:30.661490  164809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 12:02:30.675856  164809 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 12:02:30.819937  164809 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 12:02:30.932926  164809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 12:02:30.947638  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 12:02:30.966574  164809 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0617 12:02:30.966648  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:30.978339  164809 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 12:02:30.978416  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:30.989950  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:31.000644  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:31.011280  164809 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 12:02:31.022197  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:31.032780  164809 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:31.050053  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
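
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf carrying roughly the following keys (an illustrative reconstruction from the logged commands, not a file captured from the VM; the surrounding TOML section headers are omitted):

pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
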
	I0617 12:02:31.062065  164809 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 12:02:31.073296  164809 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 12:02:31.073368  164809 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 12:02:31.087733  164809 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 12:02:31.098019  164809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:02:31.232495  164809 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0617 12:02:31.371236  164809 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 12:02:31.371312  164809 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 12:02:31.376442  164809 start.go:562] Will wait 60s for crictl version
	I0617 12:02:31.376522  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.380416  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 12:02:31.426664  164809 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 12:02:31.426763  164809 ssh_runner.go:195] Run: crio --version
	I0617 12:02:31.456696  164809 ssh_runner.go:195] Run: crio --version
	I0617 12:02:31.487696  164809 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0617 12:02:29.416369  166103 node_ready.go:53] node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:31.417357  166103 node_ready.go:53] node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:28.126854  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:28.627278  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:29.126577  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:29.626475  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:30.127193  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:30.627229  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:31.126478  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:31.626336  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:32.126398  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:32.627005  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
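
The repeated pgrep lines from process 165698 are a poll loop waiting for a kube-apiserver process to appear inside the guest, retried roughly every 500ms. A standalone Go sketch of the same wait, assuming local execution instead of minikube's SSH runner and a hypothetical 4-minute budget (the log does not state the actual timeout):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute) // assumed budget
	for time.Now().Before(deadline) {
		// The same probe the log repeats: pgrep matched against the full command line.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
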
	I0617 12:02:28.724801  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:30.726589  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:33.225707  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:31.488972  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetIP
	I0617 12:02:31.491812  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:31.492191  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:31.492220  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:31.492411  164809 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0617 12:02:31.497100  164809 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
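
The /bin/bash one-liner above is an idempotent rewrite of /etc/hosts: drop any stale host.minikube.internal entry, append a fresh "IP<tab>hostname" line, and copy the result back over the original. A minimal Go sketch of the same pattern; the target path here is a stand-in so the example does not touch a real /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost removes any existing line for hostname and appends a fresh
// "<ip>\t<hostname>" entry, mirroring the grep -v / echo / cp one-liner above.
func upsertHost(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // stale entry, drop it
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("hosts.example", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
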
	I0617 12:02:31.510949  164809 kubeadm.go:877] updating cluster {Name:no-preload-152830 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-152830 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 12:02:31.511079  164809 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 12:02:31.511114  164809 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:02:31.546350  164809 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0617 12:02:31.546377  164809 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.1 registry.k8s.io/kube-controller-manager:v1.30.1 registry.k8s.io/kube-scheduler:v1.30.1 registry.k8s.io/kube-proxy:v1.30.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0617 12:02:31.546440  164809 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:31.546452  164809 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0617 12:02:31.546478  164809 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0617 12:02:31.546485  164809 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0617 12:02:31.546513  164809 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0617 12:02:31.546513  164809 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0617 12:02:31.546458  164809 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0617 12:02:31.546569  164809 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0617 12:02:31.548101  164809 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0617 12:02:31.548123  164809 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0617 12:02:31.548123  164809 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0617 12:02:31.548137  164809 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:31.548101  164809 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0617 12:02:31.548104  164809 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0617 12:02:31.548103  164809 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0617 12:02:31.548427  164809 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0617 12:02:31.714107  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0617 12:02:31.714819  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0617 12:02:31.715764  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0617 12:02:31.721844  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.1
	I0617 12:02:31.722172  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1
	I0617 12:02:31.739873  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1
	I0617 12:02:31.746705  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1
	I0617 12:02:31.814194  164809 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0617 12:02:31.814235  164809 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0617 12:02:31.814273  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.849549  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:31.950803  164809 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0617 12:02:31.950858  164809 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0617 12:02:31.950907  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.950934  164809 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.1" does not exist at hash "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c" in container runtime
	I0617 12:02:31.950959  164809 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0617 12:02:31.950992  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.951005  164809 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.1" does not exist at hash "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035" in container runtime
	I0617 12:02:31.951030  164809 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.1" does not exist at hash "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a" in container runtime
	I0617 12:02:31.951053  164809 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.1
	I0617 12:02:31.951090  164809 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.1" does not exist at hash "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd" in container runtime
	I0617 12:02:31.951103  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.951113  164809 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.1
	I0617 12:02:31.951146  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.951053  164809 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.1
	I0617 12:02:31.951179  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.951217  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0617 12:02:31.951266  164809 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0617 12:02:31.951289  164809 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:31.951319  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.967596  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.1
	I0617 12:02:31.967802  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0617 12:02:32.018505  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:32.018542  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1
	I0617 12:02:32.018623  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.1
	I0617 12:02:32.018664  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0617 12:02:32.018738  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.1
	I0617 12:02:32.018755  164809 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0617 12:02:32.026154  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1
	I0617 12:02:32.026270  164809 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.1
	I0617 12:02:32.046161  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0617 12:02:32.046288  164809 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0617 12:02:32.126665  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0617 12:02:32.126755  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0617 12:02:32.126765  164809 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0617 12:02:32.126814  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1
	I0617 12:02:32.126829  164809 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0617 12:02:32.126867  164809 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0617 12:02:32.126898  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0617 12:02:32.126911  164809 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0617 12:02:32.126935  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0617 12:02:32.126965  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1
	I0617 12:02:32.127008  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.1 (exists)
	I0617 12:02:32.127058  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0617 12:02:32.127060  164809 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0617 12:02:32.142790  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.1 (exists)
	I0617 12:02:32.142816  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.1 (exists)
	I0617 12:02:32.143132  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0617 12:02:32.915885  166103 node_ready.go:49] node "default-k8s-diff-port-991309" has status "Ready":"True"
	I0617 12:02:32.915912  166103 node_ready.go:38] duration metric: took 7.503979113s for node "default-k8s-diff-port-991309" to be "Ready" ...
	I0617 12:02:32.915924  166103 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:02:32.921198  166103 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:34.927290  166103 pod_ready.go:102] pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:33.126753  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:33.627017  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:34.126558  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:34.626976  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:35.126410  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:35.627309  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:36.126958  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:36.626349  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:37.126815  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:37.627332  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:35.724326  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:37.725145  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:36.125679  164809 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.1: (3.998551072s)
	I0617 12:02:36.125727  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.1 (exists)
	I0617 12:02:36.125773  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.998809852s)
	I0617 12:02:36.125804  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0617 12:02:36.125838  164809 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.1
	I0617 12:02:36.125894  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1
	I0617 12:02:37.885028  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1: (1.759100554s)
	I0617 12:02:37.885054  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 from cache
	I0617 12:02:37.885073  164809 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0617 12:02:37.885122  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0617 12:02:37.429419  166103 pod_ready.go:102] pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:39.933476  166103 pod_ready.go:92] pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:39.933508  166103 pod_ready.go:81] duration metric: took 7.012285571s for pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.933521  166103 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.940139  166103 pod_ready.go:92] pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:39.940162  166103 pod_ready.go:81] duration metric: took 6.633405ms for pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.940175  166103 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.945285  166103 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:39.945305  166103 pod_ready.go:81] duration metric: took 5.12303ms for pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.945317  166103 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.950992  166103 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:39.951021  166103 pod_ready.go:81] duration metric: took 5.6962ms for pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.951034  166103 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jn5kp" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.955874  166103 pod_ready.go:92] pod "kube-proxy-jn5kp" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:39.955894  166103 pod_ready.go:81] duration metric: took 4.852842ms for pod "kube-proxy-jn5kp" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.955905  166103 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:40.327000  166103 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:40.327035  166103 pod_ready.go:81] duration metric: took 371.121545ms for pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
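
Each pod_ready.go entry above is a wait on the pod's Ready condition before the test moves on to the next component. A minimal client-go sketch of that check (helper names here are illustrative, not minikube's; assumes KUBECONFIG points at the cluster):

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady mirrors the Ready-condition check behind the pod_ready.go lines above.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll the way the log does: re-check every couple of seconds up to a timeout.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-default-k8s-diff-port-991309", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling through transient errors
			}
			return isPodReady(pod), nil
		})
	fmt.Println("wait result:", err)
}
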
	I0617 12:02:40.327049  166103 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:42.334620  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:38.126868  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:38.627367  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:39.127148  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:39.626571  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:40.126379  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:40.626747  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:41.126485  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:41.626372  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:42.126904  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:42.627293  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:39.727666  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:42.223700  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:39.992863  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.10770953s)
	I0617 12:02:39.992903  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0617 12:02:39.992934  164809 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0617 12:02:39.992989  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0617 12:02:41.851420  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1: (1.858400961s)
	I0617 12:02:41.851452  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 from cache
	I0617 12:02:41.851508  164809 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0617 12:02:41.851578  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0617 12:02:44.833842  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:46.834443  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:43.127137  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:43.626521  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:44.127017  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:44.626824  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:45.126475  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:45.626535  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:46.127423  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:46.626605  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:47.127029  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:47.627431  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:44.224685  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:46.225071  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:44.211669  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1: (2.360046418s)
	I0617 12:02:44.211702  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 from cache
	I0617 12:02:44.211726  164809 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0617 12:02:44.211795  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0617 12:02:45.162389  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0617 12:02:45.162456  164809 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0617 12:02:45.162542  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0617 12:02:47.414088  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1: (2.251500525s)
	I0617 12:02:47.414130  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 from cache
	I0617 12:02:47.414164  164809 cache_images.go:123] Successfully loaded all cached images
	I0617 12:02:47.414172  164809 cache_images.go:92] duration metric: took 15.867782566s to LoadCachedImages
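
The cache_images.go sequence above boils down to: inspect each required image inside the guest's runtime, and for any that are missing at the expected digest, transfer the tarball from the host cache and `podman load` it (the log also removes stale copies with crictl rmi first). A minimal sketch of that decision for a single image, assuming local podman rather than minikube's SSH runner:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ensureImage loads an image tarball only when the runtime does not already
// hold the image at the expected ID, mirroring the "needs transfer" logic above.
func ensureImage(image, expectedID, tarball string) error {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err == nil && strings.TrimSpace(string(out)) == expectedID {
		return nil // already present at the right hash, nothing to transfer
	}
	// Image missing or at the wrong hash: load it from the cached tarball.
	if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
		return fmt.Errorf("podman load %s: %w", tarball, err)
	}
	return nil
}

func main() {
	// The ID and paths here are copied from the log lines above for illustration.
	err := ensureImage(
		"registry.k8s.io/etcd:3.5.12-0",
		"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
		"/var/lib/minikube/images/etcd_3.5.12-0",
	)
	fmt.Println("ensureImage:", err)
}
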
	I0617 12:02:47.414195  164809 kubeadm.go:928] updating node { 192.168.39.173 8443 v1.30.1 crio true true} ...
	I0617 12:02:47.414359  164809 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-152830 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.173
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:no-preload-152830 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 12:02:47.414451  164809 ssh_runner.go:195] Run: crio config
	I0617 12:02:47.466472  164809 cni.go:84] Creating CNI manager for ""
	I0617 12:02:47.466493  164809 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:02:47.466503  164809 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 12:02:47.466531  164809 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.173 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-152830 NodeName:no-preload-152830 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.173"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.173 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0617 12:02:47.466716  164809 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.173
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-152830"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.173
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.173"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
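
The kubeadm.go:181/187 lines show minikube's option struct being rendered into the kubeadm config above. A heavily trimmed text/template sketch of that rendering step (the struct and template here are illustrative, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// kubeadmOpts carries a few of the fields visible in the options struct logged above.
type kubeadmOpts struct {
	AdvertiseAddress  string
	KubernetesVersion string
	PodSubnet         string
	ServiceCIDR       string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "{{.AdvertiseAddress}}"]
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	opts := kubeadmOpts{
		AdvertiseAddress:  "192.168.39.173",
		KubernetesVersion: "v1.30.1",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
	}
	if err := template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
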
	I0617 12:02:47.466793  164809 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0617 12:02:47.478163  164809 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 12:02:47.478255  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 12:02:47.488014  164809 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0617 12:02:47.505143  164809 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 12:02:47.522481  164809 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0617 12:02:47.545714  164809 ssh_runner.go:195] Run: grep 192.168.39.173	control-plane.minikube.internal$ /etc/hosts
	I0617 12:02:47.551976  164809 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.173	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:02:47.565374  164809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:02:47.694699  164809 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:02:47.714017  164809 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830 for IP: 192.168.39.173
	I0617 12:02:47.714044  164809 certs.go:194] generating shared ca certs ...
	I0617 12:02:47.714064  164809 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:02:47.714260  164809 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 12:02:47.714321  164809 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 12:02:47.714335  164809 certs.go:256] generating profile certs ...
	I0617 12:02:47.714419  164809 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/client.key
	I0617 12:02:47.714504  164809 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/apiserver.key.d2d5b47b
	I0617 12:02:47.714547  164809 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/proxy-client.key
	I0617 12:02:47.714655  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 12:02:47.714684  164809 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 12:02:47.714693  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 12:02:47.714719  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 12:02:47.714745  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 12:02:47.714780  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 12:02:47.714815  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:02:47.715578  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 12:02:47.767301  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 12:02:47.804542  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 12:02:47.842670  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 12:02:47.874533  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0617 12:02:47.909752  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 12:02:47.940097  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 12:02:47.965441  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0617 12:02:47.990862  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 12:02:48.015935  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 12:02:48.041408  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 12:02:48.066557  164809 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 12:02:48.084630  164809 ssh_runner.go:195] Run: openssl version
	I0617 12:02:48.091098  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 12:02:48.102447  164809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 12:02:48.107238  164809 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 12:02:48.107299  164809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 12:02:48.113682  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
	I0617 12:02:48.124472  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 12:02:48.135897  164809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 12:02:48.140859  164809 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 12:02:48.140915  164809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 12:02:48.147113  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 12:02:48.158192  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 12:02:48.169483  164809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:48.174241  164809 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:48.174294  164809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:48.180093  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 12:02:48.191082  164809 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 12:02:48.195770  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 12:02:48.201743  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 12:02:48.207452  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 12:02:48.213492  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 12:02:48.219435  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 12:02:48.226202  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
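
The openssl -checkend 86400 runs above verify that each control-plane certificate is still valid for at least the next 24 hours. An equivalent check in Go with crypto/x509, assuming the certificate PEM is readable by the current user:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file expires
// within the given window — the question `openssl x509 -checkend 86400` answers.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
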
	I0617 12:02:48.232291  164809 kubeadm.go:391] StartCluster: {Name:no-preload-152830 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-152830 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:02:48.232409  164809 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 12:02:48.232448  164809 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:02:48.272909  164809 cri.go:89] found id: ""
	I0617 12:02:48.272972  164809 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0617 12:02:48.284185  164809 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0617 12:02:48.284212  164809 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0617 12:02:48.284221  164809 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0617 12:02:48.284266  164809 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0617 12:02:48.294653  164809 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0617 12:02:48.296091  164809 kubeconfig.go:125] found "no-preload-152830" server: "https://192.168.39.173:8443"
	I0617 12:02:48.298438  164809 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0617 12:02:48.307905  164809 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.173
	I0617 12:02:48.307932  164809 kubeadm.go:1154] stopping kube-system containers ...
	I0617 12:02:48.307945  164809 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0617 12:02:48.307990  164809 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:02:48.356179  164809 cri.go:89] found id: ""
	I0617 12:02:48.356247  164809 ssh_runner.go:195] Run: sudo systemctl stop kubelet
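
The restart path above enumerates kube-system containers with crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system (finding none) and then stops the kubelet. The Go sketch below mirrors that pattern by shelling out to crictl the same way the log does; the function name and the stop loop are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers mirrors the crictl invocation seen in the log:
// it returns the IDs of all containers labelled with the kube-system namespace.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	if len(ids) == 0 {
		fmt.Println(`found id: ""`) // matches the empty result in the log above
		return
	}
	// Stop each container before stopping the kubelet, as the restart path does.
	for _, id := range ids {
		if err := exec.Command("sudo", "crictl", "stop", id).Run(); err != nil {
			fmt.Println("failed to stop", id, ":", err)
		}
	}
}
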
	I0617 12:02:49.333637  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:51.333927  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:48.127215  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:48.627013  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:49.126439  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:49.626831  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:50.126521  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:50.627178  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:51.126830  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:51.627091  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:52.127343  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:52.626635  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:48.724828  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:51.225321  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:48.377824  164809 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:02:48.389213  164809 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:02:48.389236  164809 kubeadm.go:156] found existing configuration files:
	
	I0617 12:02:48.389287  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:02:48.398559  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:02:48.398605  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:02:48.408243  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:02:48.417407  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:02:48.417451  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:02:48.427333  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:02:48.436224  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:02:48.436278  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:02:48.445378  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:02:48.454119  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:02:48.454170  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 12:02:48.463097  164809 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:02:48.472479  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:48.584018  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:49.392310  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:49.599840  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:49.662845  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:49.794357  164809 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:02:49.794459  164809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:50.295507  164809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:50.794968  164809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:50.832967  164809 api_server.go:72] duration metric: took 1.038610813s to wait for apiserver process to appear ...
	I0617 12:02:50.832993  164809 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:02:50.833017  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:50.833494  164809 api_server.go:269] stopped: https://192.168.39.173:8443/healthz: Get "https://192.168.39.173:8443/healthz": dial tcp 192.168.39.173:8443: connect: connection refused
	I0617 12:02:51.333910  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:53.534213  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:02:53.534246  164809 api_server.go:103] status: https://192.168.39.173:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:02:53.534265  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:53.579857  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:02:53.579887  164809 api_server.go:103] status: https://192.168.39.173:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:02:53.833207  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:53.863430  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:02:53.863485  164809 api_server.go:103] status: https://192.168.39.173:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:02:54.333557  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:54.342474  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:02:54.342507  164809 api_server.go:103] status: https://192.168.39.173:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:02:54.834092  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:54.839578  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 200:
	ok
	I0617 12:02:54.854075  164809 api_server.go:141] control plane version: v1.30.1
	I0617 12:02:54.854113  164809 api_server.go:131] duration metric: took 4.021112065s to wait for apiserver health ...
	I0617 12:02:54.854124  164809 cni.go:84] Creating CNI manager for ""
	I0617 12:02:54.854133  164809 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:02:54.856029  164809 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
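
Between 12:02:50 and 12:02:54 this process polls https://192.168.39.173:8443/healthz until it returns 200, treating connection refused, 403 (the anonymous probe is forbidden until RBAC bootstrap finishes) and 500 (post-start hooks still failing) as retryable. Below is a hedged Go sketch of such a poll loop; the endpoint comes from the log, while the roughly half-second interval, the function name and the insecure TLS client are assumptions for illustration rather than minikube's internal code.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers 200 "ok".
// Connection refused, 403 and 500 responses are treated as "not ready yet".
func waitForHealthz(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver serving cert is not trusted by this host, so skip verification
		// for the health probe only (an assumption for this sketch).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver healthz did not return 200 within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.173:8443/healthz", 500*time.Millisecond, 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
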
	I0617 12:02:53.334898  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:55.834490  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:53.126693  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:53.627110  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:54.126653  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:54.626424  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:55.127113  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:55.627373  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:56.126415  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:56.627329  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:57.126797  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:57.627313  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:53.723948  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:56.225000  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:54.857252  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0617 12:02:54.914636  164809 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0617 12:02:54.961745  164809 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:02:54.975140  164809 system_pods.go:59] 8 kube-system pods found
	I0617 12:02:54.975183  164809 system_pods.go:61] "coredns-7db6d8ff4d-7lfns" [83cf7962-1aa7-4de6-9e77-a03dee972ead] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0617 12:02:54.975192  164809 system_pods.go:61] "etcd-no-preload-152830" [27dace2b-9d7d-44e8-8f86-b20ce49c8afa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0617 12:02:54.975202  164809 system_pods.go:61] "kube-apiserver-no-preload-152830" [c102caaf-2289-4171-8b1f-89df4f6edf39] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0617 12:02:54.975213  164809 system_pods.go:61] "kube-controller-manager-no-preload-152830" [534a8f45-7886-4e12-b728-df686c2f8668] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0617 12:02:54.975220  164809 system_pods.go:61] "kube-proxy-bblgc" [70fa474e-cb6a-4e31-b978-78b47e9952a8] Running
	I0617 12:02:54.975228  164809 system_pods.go:61] "kube-scheduler-no-preload-152830" [17d696bd-55b3-4080-a63d-944216adf1d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0617 12:02:54.975240  164809 system_pods.go:61] "metrics-server-569cc877fc-97tqn" [0ce37c88-fd22-4001-96c4-d0f5239c0fd4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:02:54.975253  164809 system_pods.go:61] "storage-provisioner" [61dafb85-965b-4961-b9e1-e3202795caef] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0617 12:02:54.975268  164809 system_pods.go:74] duration metric: took 13.492652ms to wait for pod list to return data ...
	I0617 12:02:54.975279  164809 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:02:54.980820  164809 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:02:54.980842  164809 node_conditions.go:123] node cpu capacity is 2
	I0617 12:02:54.980854  164809 node_conditions.go:105] duration metric: took 5.568037ms to run NodePressure ...
	I0617 12:02:54.980873  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:55.284669  164809 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0617 12:02:55.289433  164809 kubeadm.go:733] kubelet initialised
	I0617 12:02:55.289453  164809 kubeadm.go:734] duration metric: took 4.759785ms waiting for restarted kubelet to initialise ...
	I0617 12:02:55.289461  164809 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:02:55.294149  164809 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7lfns" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:55.298081  164809 pod_ready.go:97] node "no-preload-152830" hosting pod "coredns-7db6d8ff4d-7lfns" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.298100  164809 pod_ready.go:81] duration metric: took 3.929974ms for pod "coredns-7db6d8ff4d-7lfns" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:55.298109  164809 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-152830" hosting pod "coredns-7db6d8ff4d-7lfns" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.298116  164809 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:55.302552  164809 pod_ready.go:97] node "no-preload-152830" hosting pod "etcd-no-preload-152830" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.302572  164809 pod_ready.go:81] duration metric: took 4.444579ms for pod "etcd-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:55.302580  164809 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-152830" hosting pod "etcd-no-preload-152830" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.302585  164809 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:55.306375  164809 pod_ready.go:97] node "no-preload-152830" hosting pod "kube-apiserver-no-preload-152830" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.306394  164809 pod_ready.go:81] duration metric: took 3.804134ms for pod "kube-apiserver-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:55.306402  164809 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-152830" hosting pod "kube-apiserver-no-preload-152830" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.306407  164809 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:57.313002  164809 pod_ready.go:102] pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace has status "Ready":"False"
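
From 12:02:55 onward the same process waits for each system-critical pod to report the Ready condition, skipping pods whose node is itself not Ready, with the 4m0s budget quoted above. The client-go sketch below shows one way to express that per-pod wait; the kubeconfig path, poll interval and the hard-coded pod name are assumptions for illustration, not the test harness's own helper.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod has the Ready condition set to True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig location; the test harness uses its own per-profile config.
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	name := "kube-controller-manager-no-preload-152830"
	for {
		pod, err := clientset.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Printf("pod %q has status \"Ready\":\"True\"\n", name)
			return
		}
		select {
		case <-ctx.Done():
			fmt.Printf("timed out waiting for pod %q to be Ready\n", name)
			return
		case <-time.After(2 * time.Second):
		}
	}
}
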
	I0617 12:02:57.834719  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:00.334129  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:58.126744  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:58.627050  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:59.127300  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:59.626694  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:00.127092  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:00.127182  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:00.166116  165698 cri.go:89] found id: ""
	I0617 12:03:00.166145  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.166153  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:00.166159  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:00.166208  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:00.200990  165698 cri.go:89] found id: ""
	I0617 12:03:00.201020  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.201029  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:00.201034  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:00.201086  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:00.236394  165698 cri.go:89] found id: ""
	I0617 12:03:00.236422  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.236430  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:00.236438  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:00.236496  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:00.274257  165698 cri.go:89] found id: ""
	I0617 12:03:00.274285  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.274293  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:00.274299  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:00.274350  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:00.307425  165698 cri.go:89] found id: ""
	I0617 12:03:00.307452  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.307481  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:00.307490  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:00.307557  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:00.343420  165698 cri.go:89] found id: ""
	I0617 12:03:00.343446  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.343472  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:00.343480  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:00.343541  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:00.378301  165698 cri.go:89] found id: ""
	I0617 12:03:00.378325  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.378333  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:00.378338  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:00.378383  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:00.414985  165698 cri.go:89] found id: ""
	I0617 12:03:00.415011  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.415018  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:00.415033  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:00.415090  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:00.468230  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:00.468262  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:00.481970  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:00.481998  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:00.612881  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:00.612911  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:00.612929  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:00.676110  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:00.676145  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:02:58.725617  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:01.225227  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:59.818063  164809 pod_ready.go:102] pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:02.312898  164809 pod_ready.go:102] pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:03.313300  164809 pod_ready.go:92] pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:03:03.313332  164809 pod_ready.go:81] duration metric: took 8.006915719s for pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:03:03.313347  164809 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bblgc" in "kube-system" namespace to be "Ready" ...
	I0617 12:03:03.319094  164809 pod_ready.go:92] pod "kube-proxy-bblgc" in "kube-system" namespace has status "Ready":"True"
	I0617 12:03:03.319116  164809 pod_ready.go:81] duration metric: took 5.762584ms for pod "kube-proxy-bblgc" in "kube-system" namespace to be "Ready" ...
	I0617 12:03:03.319137  164809 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:03:02.833031  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:04.834158  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:07.334894  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:03.216960  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:03.231208  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:03.231277  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:03.267056  165698 cri.go:89] found id: ""
	I0617 12:03:03.267088  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.267096  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:03.267103  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:03.267152  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:03.302797  165698 cri.go:89] found id: ""
	I0617 12:03:03.302832  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.302844  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:03.302852  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:03.302905  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:03.343401  165698 cri.go:89] found id: ""
	I0617 12:03:03.343435  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.343445  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:03.343465  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:03.343530  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:03.380841  165698 cri.go:89] found id: ""
	I0617 12:03:03.380871  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.380883  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:03.380890  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:03.380951  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:03.420098  165698 cri.go:89] found id: ""
	I0617 12:03:03.420130  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.420142  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:03.420150  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:03.420213  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:03.458476  165698 cri.go:89] found id: ""
	I0617 12:03:03.458506  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.458515  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:03.458521  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:03.458586  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:03.497127  165698 cri.go:89] found id: ""
	I0617 12:03:03.497156  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.497164  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:03.497170  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:03.497217  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:03.538759  165698 cri.go:89] found id: ""
	I0617 12:03:03.538794  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.538806  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:03.538825  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:03.538841  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:03.584701  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:03.584743  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:03.636981  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:03.637030  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:03.670032  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:03.670077  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:03.757012  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:03.757038  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:03.757056  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:06.327680  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:06.341998  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:06.342068  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:06.383353  165698 cri.go:89] found id: ""
	I0617 12:03:06.383385  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.383394  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:06.383400  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:06.383448  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:06.418806  165698 cri.go:89] found id: ""
	I0617 12:03:06.418850  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.418862  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:06.418870  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:06.418945  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:06.458151  165698 cri.go:89] found id: ""
	I0617 12:03:06.458192  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.458204  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:06.458219  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:06.458289  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:06.496607  165698 cri.go:89] found id: ""
	I0617 12:03:06.496637  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.496645  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:06.496651  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:06.496703  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:06.534900  165698 cri.go:89] found id: ""
	I0617 12:03:06.534938  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.534951  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:06.534959  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:06.535017  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:06.572388  165698 cri.go:89] found id: ""
	I0617 12:03:06.572413  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.572422  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:06.572428  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:06.572496  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:06.608072  165698 cri.go:89] found id: ""
	I0617 12:03:06.608104  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.608115  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:06.608121  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:06.608175  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:06.647727  165698 cri.go:89] found id: ""
	I0617 12:03:06.647760  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.647772  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:06.647784  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:06.647800  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:06.720887  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:06.720919  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:06.761128  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:06.761153  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:06.815524  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:06.815557  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:06.830275  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:06.830304  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:06.907861  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
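
Because no kube-apiserver container ever appears for this profile, the loop above keeps repeating its diagnostic sweep: kubelet and CRI-O journals, dmesg, kubectl describe nodes (which fails with connection refused on localhost:8443) and crictl ps -a. A compact Go sketch of such a sweep follows; the command strings are copied from the log, while the function name and the way failures are reported are illustrative assumptions.

package main

import (
	"fmt"
	"os/exec"
)

// gatherDiagnostics runs the same commands the log loops over and prints each
// result, noting commands that fail (e.g. describe nodes while the apiserver is down).
func gatherDiagnostics() {
	cmds := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"describe nodes":   "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range cmds {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("failed %s: %v\n%s\n", name, err, out)
			continue
		}
		fmt.Printf("=== %s ===\n%s\n", name, out)
	}
}

func main() {
	gatherDiagnostics()
}
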
	I0617 12:03:03.725650  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:06.225601  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:05.327062  164809 pod_ready.go:102] pod "kube-scheduler-no-preload-152830" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:07.325033  164809 pod_ready.go:92] pod "kube-scheduler-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:03:07.325061  164809 pod_ready.go:81] duration metric: took 4.005914462s for pod "kube-scheduler-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:03:07.325072  164809 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace to be "Ready" ...
	I0617 12:03:09.835374  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:12.334481  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:09.408117  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:09.420916  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:09.420978  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:09.453830  165698 cri.go:89] found id: ""
	I0617 12:03:09.453860  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.453870  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:09.453878  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:09.453937  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:09.492721  165698 cri.go:89] found id: ""
	I0617 12:03:09.492756  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.492766  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:09.492775  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:09.492849  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:09.530956  165698 cri.go:89] found id: ""
	I0617 12:03:09.530984  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.530995  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:09.531001  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:09.531067  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:09.571534  165698 cri.go:89] found id: ""
	I0617 12:03:09.571564  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.571576  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:09.571584  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:09.571646  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:09.609740  165698 cri.go:89] found id: ""
	I0617 12:03:09.609776  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.609788  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:09.609797  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:09.609864  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:09.649958  165698 cri.go:89] found id: ""
	I0617 12:03:09.649998  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.650010  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:09.650020  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:09.650087  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:09.706495  165698 cri.go:89] found id: ""
	I0617 12:03:09.706532  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.706544  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:09.706553  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:09.706638  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:09.742513  165698 cri.go:89] found id: ""
	I0617 12:03:09.742541  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.742549  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:09.742559  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:09.742571  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:09.756470  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:09.756502  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:09.840878  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:09.840897  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:09.840913  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:09.922329  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:09.922370  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:09.967536  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:09.967573  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:12.521031  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:12.534507  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:12.534595  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:12.569895  165698 cri.go:89] found id: ""
	I0617 12:03:12.569930  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.569942  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:12.569950  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:12.570005  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:12.606857  165698 cri.go:89] found id: ""
	I0617 12:03:12.606888  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.606900  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:12.606922  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:12.606998  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:12.640781  165698 cri.go:89] found id: ""
	I0617 12:03:12.640807  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.640818  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:12.640826  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:12.640910  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:12.674097  165698 cri.go:89] found id: ""
	I0617 12:03:12.674124  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.674134  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:12.674142  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:12.674201  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:12.708662  165698 cri.go:89] found id: ""
	I0617 12:03:12.708689  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.708699  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:12.708707  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:12.708791  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:12.744891  165698 cri.go:89] found id: ""
	I0617 12:03:12.744927  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.744938  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:12.744947  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:12.745010  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:12.778440  165698 cri.go:89] found id: ""
	I0617 12:03:12.778466  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.778474  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:12.778480  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:12.778528  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:12.814733  165698 cri.go:89] found id: ""
	I0617 12:03:12.814762  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.814770  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:12.814780  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:12.814820  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:12.887741  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:12.887762  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:12.887775  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:12.968439  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:12.968476  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:08.725485  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:11.224357  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:09.331004  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:11.331666  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:13.332269  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:14.335086  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:16.836397  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:13.008926  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:13.008955  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:13.060432  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:13.060468  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:15.575450  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:15.589178  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:15.589244  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:15.625554  165698 cri.go:89] found id: ""
	I0617 12:03:15.625589  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.625601  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:15.625608  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:15.625668  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:15.659023  165698 cri.go:89] found id: ""
	I0617 12:03:15.659054  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.659066  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:15.659074  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:15.659138  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:15.693777  165698 cri.go:89] found id: ""
	I0617 12:03:15.693803  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.693811  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:15.693817  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:15.693875  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:15.729098  165698 cri.go:89] found id: ""
	I0617 12:03:15.729133  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.729141  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:15.729147  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:15.729194  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:15.762639  165698 cri.go:89] found id: ""
	I0617 12:03:15.762668  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.762679  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:15.762687  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:15.762744  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:15.797446  165698 cri.go:89] found id: ""
	I0617 12:03:15.797475  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.797484  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:15.797489  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:15.797537  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:15.832464  165698 cri.go:89] found id: ""
	I0617 12:03:15.832503  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.832513  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:15.832521  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:15.832579  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:15.867868  165698 cri.go:89] found id: ""
	I0617 12:03:15.867898  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.867906  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:15.867916  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:15.867928  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:15.882151  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:15.882181  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:15.946642  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:15.946666  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:15.946682  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:16.027062  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:16.027098  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:16.082704  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:16.082735  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:13.725854  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:16.225670  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:15.333470  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:17.832368  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:19.334102  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:21.334529  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:18.651554  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:18.665096  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:18.665166  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:18.703099  165698 cri.go:89] found id: ""
	I0617 12:03:18.703127  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.703138  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:18.703147  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:18.703210  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:18.737945  165698 cri.go:89] found id: ""
	I0617 12:03:18.737985  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.737997  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:18.738005  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:18.738079  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:18.777145  165698 cri.go:89] found id: ""
	I0617 12:03:18.777172  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.777181  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:18.777187  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:18.777255  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:18.813171  165698 cri.go:89] found id: ""
	I0617 12:03:18.813198  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.813207  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:18.813213  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:18.813270  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:18.854459  165698 cri.go:89] found id: ""
	I0617 12:03:18.854490  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.854501  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:18.854510  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:18.854607  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:18.893668  165698 cri.go:89] found id: ""
	I0617 12:03:18.893703  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.893712  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:18.893718  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:18.893796  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:18.928919  165698 cri.go:89] found id: ""
	I0617 12:03:18.928971  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.928983  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:18.928993  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:18.929068  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:18.965770  165698 cri.go:89] found id: ""
	I0617 12:03:18.965800  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.965808  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:18.965817  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:18.965829  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:19.020348  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:19.020392  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:19.034815  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:19.034845  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:19.109617  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:19.109643  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:19.109660  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:19.186843  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:19.186890  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:21.732720  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:21.747032  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:21.747113  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:21.789962  165698 cri.go:89] found id: ""
	I0617 12:03:21.789991  165698 logs.go:276] 0 containers: []
	W0617 12:03:21.789999  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:21.790011  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:21.790066  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:21.833865  165698 cri.go:89] found id: ""
	I0617 12:03:21.833903  165698 logs.go:276] 0 containers: []
	W0617 12:03:21.833913  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:21.833921  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:21.833985  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:21.903891  165698 cri.go:89] found id: ""
	I0617 12:03:21.903929  165698 logs.go:276] 0 containers: []
	W0617 12:03:21.903941  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:21.903950  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:21.904020  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:21.941369  165698 cri.go:89] found id: ""
	I0617 12:03:21.941396  165698 logs.go:276] 0 containers: []
	W0617 12:03:21.941407  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:21.941415  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:21.941473  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:21.977767  165698 cri.go:89] found id: ""
	I0617 12:03:21.977797  165698 logs.go:276] 0 containers: []
	W0617 12:03:21.977808  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:21.977817  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:21.977880  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:22.016422  165698 cri.go:89] found id: ""
	I0617 12:03:22.016450  165698 logs.go:276] 0 containers: []
	W0617 12:03:22.016463  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:22.016471  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:22.016536  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:22.056871  165698 cri.go:89] found id: ""
	I0617 12:03:22.056904  165698 logs.go:276] 0 containers: []
	W0617 12:03:22.056914  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:22.056922  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:22.056982  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:22.093244  165698 cri.go:89] found id: ""
	I0617 12:03:22.093288  165698 logs.go:276] 0 containers: []
	W0617 12:03:22.093300  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:22.093313  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:22.093331  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:22.144722  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:22.144756  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:22.159047  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:22.159084  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:22.232077  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:22.232100  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:22.232112  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:22.308241  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:22.308276  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:18.724648  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:21.224616  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:19.832543  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:21.838952  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:23.834640  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:26.336770  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:24.851740  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:24.866597  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:24.866659  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:24.902847  165698 cri.go:89] found id: ""
	I0617 12:03:24.902879  165698 logs.go:276] 0 containers: []
	W0617 12:03:24.902892  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:24.902900  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:24.902973  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:24.940042  165698 cri.go:89] found id: ""
	I0617 12:03:24.940079  165698 logs.go:276] 0 containers: []
	W0617 12:03:24.940088  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:24.940094  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:24.940150  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:24.975160  165698 cri.go:89] found id: ""
	I0617 12:03:24.975190  165698 logs.go:276] 0 containers: []
	W0617 12:03:24.975202  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:24.975211  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:24.975280  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:25.012618  165698 cri.go:89] found id: ""
	I0617 12:03:25.012649  165698 logs.go:276] 0 containers: []
	W0617 12:03:25.012657  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:25.012663  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:25.012712  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:25.051166  165698 cri.go:89] found id: ""
	I0617 12:03:25.051210  165698 logs.go:276] 0 containers: []
	W0617 12:03:25.051223  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:25.051230  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:25.051309  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:25.090112  165698 cri.go:89] found id: ""
	I0617 12:03:25.090144  165698 logs.go:276] 0 containers: []
	W0617 12:03:25.090156  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:25.090164  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:25.090230  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:25.133258  165698 cri.go:89] found id: ""
	I0617 12:03:25.133285  165698 logs.go:276] 0 containers: []
	W0617 12:03:25.133294  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:25.133301  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:25.133366  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:25.177445  165698 cri.go:89] found id: ""
	I0617 12:03:25.177473  165698 logs.go:276] 0 containers: []
	W0617 12:03:25.177481  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:25.177490  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:25.177505  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:25.250685  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:25.250710  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:25.250727  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:25.335554  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:25.335586  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:25.377058  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:25.377093  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:25.431425  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:25.431471  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:27.945063  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:27.959396  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:27.959469  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:23.725126  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:26.224114  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:28.224895  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:23.840550  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:26.333142  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:28.334577  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:28.337133  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:30.834142  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:27.994554  165698 cri.go:89] found id: ""
	I0617 12:03:27.994582  165698 logs.go:276] 0 containers: []
	W0617 12:03:27.994591  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:27.994598  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:27.994660  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:28.030168  165698 cri.go:89] found id: ""
	I0617 12:03:28.030200  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.030208  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:28.030215  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:28.030263  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:28.066213  165698 cri.go:89] found id: ""
	I0617 12:03:28.066244  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.066255  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:28.066261  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:28.066322  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:28.102855  165698 cri.go:89] found id: ""
	I0617 12:03:28.102880  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.102888  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:28.102894  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:28.102942  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:28.138698  165698 cri.go:89] found id: ""
	I0617 12:03:28.138734  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.138748  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:28.138755  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:28.138815  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:28.173114  165698 cri.go:89] found id: ""
	I0617 12:03:28.173140  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.173148  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:28.173154  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:28.173213  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:28.208901  165698 cri.go:89] found id: ""
	I0617 12:03:28.208936  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.208947  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:28.208955  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:28.209016  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:28.244634  165698 cri.go:89] found id: ""
	I0617 12:03:28.244667  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.244678  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:28.244687  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:28.244699  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:28.300303  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:28.300336  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:28.314227  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:28.314272  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:28.394322  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:28.394350  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:28.394367  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:28.483381  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:28.483413  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:31.026433  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:31.040820  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:31.040888  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:31.086409  165698 cri.go:89] found id: ""
	I0617 12:03:31.086440  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.086453  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:31.086461  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:31.086548  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:31.122810  165698 cri.go:89] found id: ""
	I0617 12:03:31.122836  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.122843  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:31.122849  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:31.122910  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:31.157634  165698 cri.go:89] found id: ""
	I0617 12:03:31.157669  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.157680  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:31.157687  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:31.157750  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:31.191498  165698 cri.go:89] found id: ""
	I0617 12:03:31.191529  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.191541  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:31.191549  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:31.191619  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:31.225575  165698 cri.go:89] found id: ""
	I0617 12:03:31.225599  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.225609  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:31.225616  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:31.225670  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:31.269780  165698 cri.go:89] found id: ""
	I0617 12:03:31.269810  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.269819  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:31.269825  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:31.269874  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:31.307689  165698 cri.go:89] found id: ""
	I0617 12:03:31.307717  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.307726  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:31.307733  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:31.307789  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:31.344160  165698 cri.go:89] found id: ""
	I0617 12:03:31.344190  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.344200  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:31.344210  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:31.344223  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:31.397627  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:31.397667  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:31.411316  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:31.411347  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:31.486258  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:31.486280  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:31.486297  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:31.568067  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:31.568106  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:30.725183  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:33.224294  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:30.834377  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:33.333070  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:33.335067  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:35.335626  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:37.336117  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:34.111424  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:34.127178  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:34.127255  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:34.165900  165698 cri.go:89] found id: ""
	I0617 12:03:34.165936  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.165947  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:34.165955  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:34.166042  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:34.203556  165698 cri.go:89] found id: ""
	I0617 12:03:34.203588  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.203597  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:34.203606  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:34.203659  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:34.243418  165698 cri.go:89] found id: ""
	I0617 12:03:34.243478  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.243490  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:34.243499  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:34.243661  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:34.281542  165698 cri.go:89] found id: ""
	I0617 12:03:34.281569  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.281577  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:34.281582  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:34.281635  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:34.316304  165698 cri.go:89] found id: ""
	I0617 12:03:34.316333  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.316341  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:34.316347  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:34.316403  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:34.357416  165698 cri.go:89] found id: ""
	I0617 12:03:34.357455  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.357467  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:34.357476  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:34.357547  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:34.392069  165698 cri.go:89] found id: ""
	I0617 12:03:34.392101  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.392112  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:34.392120  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:34.392185  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:34.427203  165698 cri.go:89] found id: ""
	I0617 12:03:34.427235  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.427247  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:34.427258  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:34.427317  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:34.441346  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:34.441375  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:34.519306  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:34.519331  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:34.519349  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:34.598802  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:34.598843  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:34.637521  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:34.637554  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:37.191259  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:37.205882  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:37.205947  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:37.242175  165698 cri.go:89] found id: ""
	I0617 12:03:37.242202  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.242209  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:37.242215  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:37.242278  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:37.278004  165698 cri.go:89] found id: ""
	I0617 12:03:37.278029  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.278037  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:37.278043  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:37.278091  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:37.322148  165698 cri.go:89] found id: ""
	I0617 12:03:37.322179  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.322190  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:37.322198  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:37.322259  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:37.358612  165698 cri.go:89] found id: ""
	I0617 12:03:37.358638  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.358649  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:37.358657  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:37.358718  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:37.393070  165698 cri.go:89] found id: ""
	I0617 12:03:37.393104  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.393115  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:37.393123  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:37.393187  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:37.429420  165698 cri.go:89] found id: ""
	I0617 12:03:37.429452  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.429465  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:37.429475  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:37.429541  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:37.464485  165698 cri.go:89] found id: ""
	I0617 12:03:37.464509  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.464518  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:37.464523  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:37.464584  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:37.501283  165698 cri.go:89] found id: ""
	I0617 12:03:37.501308  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.501316  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:37.501326  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:37.501338  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:37.552848  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:37.552889  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:37.566715  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:37.566746  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:37.643560  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:37.643584  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:37.643601  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:37.722895  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:37.722935  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:35.225442  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:37.225962  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:35.836693  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:38.332297  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:39.834655  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:42.333686  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:40.268199  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:40.281832  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:40.281905  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:40.317094  165698 cri.go:89] found id: ""
	I0617 12:03:40.317137  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.317150  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:40.317159  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:40.317229  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:40.355786  165698 cri.go:89] found id: ""
	I0617 12:03:40.355819  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.355829  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:40.355836  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:40.355903  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:40.394282  165698 cri.go:89] found id: ""
	I0617 12:03:40.394312  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.394323  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:40.394332  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:40.394388  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:40.433773  165698 cri.go:89] found id: ""
	I0617 12:03:40.433806  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.433817  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:40.433825  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:40.433875  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:40.469937  165698 cri.go:89] found id: ""
	I0617 12:03:40.469973  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.469985  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:40.469998  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:40.470067  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:40.503565  165698 cri.go:89] found id: ""
	I0617 12:03:40.503590  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.503599  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:40.503605  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:40.503654  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:40.538349  165698 cri.go:89] found id: ""
	I0617 12:03:40.538383  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.538394  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:40.538402  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:40.538461  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:40.576036  165698 cri.go:89] found id: ""
	I0617 12:03:40.576066  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.576075  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:40.576085  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:40.576100  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:40.617804  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:40.617833  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:40.668126  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:40.668162  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:40.682618  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:40.682655  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:40.759597  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:40.759619  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:40.759638  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:39.725534  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:42.223320  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:40.336855  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:42.832597  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:44.334430  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:46.835809  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:43.343404  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:43.357886  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:43.357953  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:43.398262  165698 cri.go:89] found id: ""
	I0617 12:03:43.398290  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.398301  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:43.398310  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:43.398370  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:43.432241  165698 cri.go:89] found id: ""
	I0617 12:03:43.432272  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.432280  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:43.432289  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:43.432348  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:43.466210  165698 cri.go:89] found id: ""
	I0617 12:03:43.466234  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.466241  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:43.466247  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:43.466294  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:43.501677  165698 cri.go:89] found id: ""
	I0617 12:03:43.501711  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.501723  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:43.501731  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:43.501793  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:43.541826  165698 cri.go:89] found id: ""
	I0617 12:03:43.541860  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.541870  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:43.541876  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:43.541941  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:43.576940  165698 cri.go:89] found id: ""
	I0617 12:03:43.576962  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.576970  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:43.576975  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:43.577022  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:43.612592  165698 cri.go:89] found id: ""
	I0617 12:03:43.612627  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.612635  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:43.612643  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:43.612694  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:43.647141  165698 cri.go:89] found id: ""
	I0617 12:03:43.647176  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.647188  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:43.647202  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:43.647220  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:43.698248  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:43.698283  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:43.711686  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:43.711714  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:43.787077  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:43.787101  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:43.787115  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:43.861417  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:43.861455  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:46.402594  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:46.417108  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:46.417185  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:46.453910  165698 cri.go:89] found id: ""
	I0617 12:03:46.453941  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.453952  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:46.453960  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:46.454020  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:46.487239  165698 cri.go:89] found id: ""
	I0617 12:03:46.487268  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.487280  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:46.487288  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:46.487353  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:46.521824  165698 cri.go:89] found id: ""
	I0617 12:03:46.521850  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.521859  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:46.521866  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:46.521929  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:46.557247  165698 cri.go:89] found id: ""
	I0617 12:03:46.557274  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.557282  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:46.557289  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:46.557350  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:46.600354  165698 cri.go:89] found id: ""
	I0617 12:03:46.600383  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.600393  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:46.600402  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:46.600477  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:46.638153  165698 cri.go:89] found id: ""
	I0617 12:03:46.638180  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.638189  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:46.638197  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:46.638255  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:46.672636  165698 cri.go:89] found id: ""
	I0617 12:03:46.672661  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.672669  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:46.672675  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:46.672721  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:46.706431  165698 cri.go:89] found id: ""
	I0617 12:03:46.706468  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.706481  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:46.706493  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:46.706509  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:46.720796  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:46.720842  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:46.801343  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:46.801365  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:46.801379  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:46.883651  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:46.883696  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:46.928594  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:46.928630  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:44.224037  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:46.224076  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:48.224472  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:45.332811  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:47.832461  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:49.334678  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:51.833994  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:49.480413  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:49.495558  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:49.495656  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:49.533281  165698 cri.go:89] found id: ""
	I0617 12:03:49.533313  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.533323  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:49.533330  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:49.533396  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:49.573430  165698 cri.go:89] found id: ""
	I0617 12:03:49.573457  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.573465  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:49.573472  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:49.573532  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:49.608669  165698 cri.go:89] found id: ""
	I0617 12:03:49.608697  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.608705  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:49.608711  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:49.608767  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:49.643411  165698 cri.go:89] found id: ""
	I0617 12:03:49.643449  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.643481  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:49.643490  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:49.643557  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:49.680039  165698 cri.go:89] found id: ""
	I0617 12:03:49.680071  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.680082  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:49.680090  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:49.680148  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:49.717169  165698 cri.go:89] found id: ""
	I0617 12:03:49.717195  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.717203  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:49.717209  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:49.717262  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:49.754585  165698 cri.go:89] found id: ""
	I0617 12:03:49.754615  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.754625  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:49.754633  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:49.754697  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:49.796040  165698 cri.go:89] found id: ""
	I0617 12:03:49.796074  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.796085  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:49.796097  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:49.796112  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:49.873496  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:49.873530  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:49.873547  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:49.961883  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:49.961925  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:50.002975  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:50.003004  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:50.054185  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:50.054224  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:52.568557  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:52.584264  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:52.584337  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:52.622474  165698 cri.go:89] found id: ""
	I0617 12:03:52.622501  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.622509  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:52.622516  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:52.622566  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:52.661012  165698 cri.go:89] found id: ""
	I0617 12:03:52.661045  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.661057  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:52.661066  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:52.661133  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:52.700950  165698 cri.go:89] found id: ""
	I0617 12:03:52.700986  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.700998  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:52.701006  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:52.701075  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:52.735663  165698 cri.go:89] found id: ""
	I0617 12:03:52.735689  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.735696  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:52.735702  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:52.735768  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:52.776540  165698 cri.go:89] found id: ""
	I0617 12:03:52.776568  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.776580  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:52.776589  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:52.776642  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:52.812439  165698 cri.go:89] found id: ""
	I0617 12:03:52.812474  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.812493  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:52.812503  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:52.812567  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:52.849233  165698 cri.go:89] found id: ""
	I0617 12:03:52.849263  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.849273  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:52.849281  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:52.849343  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:52.885365  165698 cri.go:89] found id: ""
	I0617 12:03:52.885395  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.885406  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:52.885419  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:52.885434  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:52.941521  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:52.941553  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:52.955958  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:52.955997  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:03:50.224702  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:52.724247  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:50.332871  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:52.832386  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:53.834382  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:55.834813  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	W0617 12:03:53.029254  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:53.029278  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:53.029291  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:53.104391  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:53.104425  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:55.648578  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:55.662143  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:55.662205  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:55.697623  165698 cri.go:89] found id: ""
	I0617 12:03:55.697662  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.697674  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:55.697682  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:55.697751  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:55.734132  165698 cri.go:89] found id: ""
	I0617 12:03:55.734171  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.734184  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:55.734192  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:55.734265  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:55.774178  165698 cri.go:89] found id: ""
	I0617 12:03:55.774212  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.774222  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:55.774231  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:55.774296  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:55.816427  165698 cri.go:89] found id: ""
	I0617 12:03:55.816460  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.816471  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:55.816480  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:55.816546  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:55.860413  165698 cri.go:89] found id: ""
	I0617 12:03:55.860446  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.860457  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:55.860465  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:55.860532  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:55.897577  165698 cri.go:89] found id: ""
	I0617 12:03:55.897612  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.897622  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:55.897629  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:55.897682  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:55.934163  165698 cri.go:89] found id: ""
	I0617 12:03:55.934200  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.934212  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:55.934220  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:55.934291  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:55.972781  165698 cri.go:89] found id: ""
	I0617 12:03:55.972827  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.972840  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:55.972852  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:55.972867  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:56.027292  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:56.027332  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:56.042304  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:56.042336  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:56.115129  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:56.115159  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:56.115176  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:56.194161  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:56.194200  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:54.728169  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:57.225361  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:54.837170  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:57.333566  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:58.335846  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:00.833987  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:58.734681  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:58.748467  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:58.748534  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:58.786191  165698 cri.go:89] found id: ""
	I0617 12:03:58.786221  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.786232  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:58.786239  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:58.786302  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:58.822076  165698 cri.go:89] found id: ""
	I0617 12:03:58.822103  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.822125  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:58.822134  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:58.822199  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:58.858830  165698 cri.go:89] found id: ""
	I0617 12:03:58.858859  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.858867  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:58.858873  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:58.858927  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:58.898802  165698 cri.go:89] found id: ""
	I0617 12:03:58.898830  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.898838  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:58.898844  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:58.898891  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:58.933234  165698 cri.go:89] found id: ""
	I0617 12:03:58.933269  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.933281  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:58.933289  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:58.933355  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:58.973719  165698 cri.go:89] found id: ""
	I0617 12:03:58.973753  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.973766  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:58.973773  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:58.973847  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:59.010671  165698 cri.go:89] found id: ""
	I0617 12:03:59.010722  165698 logs.go:276] 0 containers: []
	W0617 12:03:59.010734  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:59.010741  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:59.010805  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:59.047318  165698 cri.go:89] found id: ""
	I0617 12:03:59.047347  165698 logs.go:276] 0 containers: []
	W0617 12:03:59.047359  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:59.047372  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:59.047389  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:59.097778  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:59.097815  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:59.111615  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:59.111646  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:59.193172  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:59.193195  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:59.193207  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:59.268147  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:59.268182  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:01.807585  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:01.821634  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:01.821694  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:01.857610  165698 cri.go:89] found id: ""
	I0617 12:04:01.857637  165698 logs.go:276] 0 containers: []
	W0617 12:04:01.857647  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:01.857654  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:01.857710  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:01.893229  165698 cri.go:89] found id: ""
	I0617 12:04:01.893253  165698 logs.go:276] 0 containers: []
	W0617 12:04:01.893261  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:01.893267  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:01.893324  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:01.926916  165698 cri.go:89] found id: ""
	I0617 12:04:01.926940  165698 logs.go:276] 0 containers: []
	W0617 12:04:01.926950  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:01.926958  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:01.927017  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:01.961913  165698 cri.go:89] found id: ""
	I0617 12:04:01.961946  165698 logs.go:276] 0 containers: []
	W0617 12:04:01.961957  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:01.961967  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:01.962045  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:01.997084  165698 cri.go:89] found id: ""
	I0617 12:04:01.997111  165698 logs.go:276] 0 containers: []
	W0617 12:04:01.997119  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:01.997125  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:01.997173  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:02.034640  165698 cri.go:89] found id: ""
	I0617 12:04:02.034666  165698 logs.go:276] 0 containers: []
	W0617 12:04:02.034674  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:02.034680  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:02.034744  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:02.085868  165698 cri.go:89] found id: ""
	I0617 12:04:02.085910  165698 logs.go:276] 0 containers: []
	W0617 12:04:02.085920  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:02.085928  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:02.085983  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:02.152460  165698 cri.go:89] found id: ""
	I0617 12:04:02.152487  165698 logs.go:276] 0 containers: []
	W0617 12:04:02.152499  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:02.152513  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:02.152528  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:02.205297  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:02.205344  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:02.222312  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:02.222348  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:02.299934  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:02.299959  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:02.299977  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:02.384008  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:02.384056  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:59.724730  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:02.227215  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:59.833621  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:01.833799  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:02.834076  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:04.836418  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:07.335024  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:04.926889  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:04.940643  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:04.940722  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:04.976246  165698 cri.go:89] found id: ""
	I0617 12:04:04.976275  165698 logs.go:276] 0 containers: []
	W0617 12:04:04.976283  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:04.976289  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:04.976338  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:05.015864  165698 cri.go:89] found id: ""
	I0617 12:04:05.015900  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.015913  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:05.015921  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:05.015985  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:05.054051  165698 cri.go:89] found id: ""
	I0617 12:04:05.054086  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.054099  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:05.054112  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:05.054177  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:05.090320  165698 cri.go:89] found id: ""
	I0617 12:04:05.090358  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.090371  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:05.090380  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:05.090438  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:05.126963  165698 cri.go:89] found id: ""
	I0617 12:04:05.126998  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.127008  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:05.127015  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:05.127087  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:05.162565  165698 cri.go:89] found id: ""
	I0617 12:04:05.162600  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.162611  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:05.162620  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:05.162674  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:05.195706  165698 cri.go:89] found id: ""
	I0617 12:04:05.195743  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.195752  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:05.195758  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:05.195826  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:05.236961  165698 cri.go:89] found id: ""
	I0617 12:04:05.236995  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.237006  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:05.237016  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:05.237034  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:05.252754  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:05.252783  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:05.327832  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:05.327870  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:05.327886  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:05.410220  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:05.410271  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:05.451291  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:05.451324  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:04.725172  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:07.223627  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:04.332177  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:06.831712  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:09.834563  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:12.334095  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:08.003058  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:08.016611  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:08.016670  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:08.052947  165698 cri.go:89] found id: ""
	I0617 12:04:08.052984  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.052996  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:08.053004  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:08.053057  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:08.086668  165698 cri.go:89] found id: ""
	I0617 12:04:08.086695  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.086704  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:08.086711  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:08.086773  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:08.127708  165698 cri.go:89] found id: ""
	I0617 12:04:08.127738  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.127746  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:08.127752  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:08.127814  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:08.162930  165698 cri.go:89] found id: ""
	I0617 12:04:08.162959  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.162966  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:08.162973  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:08.163026  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:08.196757  165698 cri.go:89] found id: ""
	I0617 12:04:08.196782  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.196791  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:08.196797  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:08.196851  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:08.229976  165698 cri.go:89] found id: ""
	I0617 12:04:08.230006  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.230016  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:08.230022  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:08.230083  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:08.265969  165698 cri.go:89] found id: ""
	I0617 12:04:08.266000  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.266007  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:08.266013  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:08.266071  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:08.299690  165698 cri.go:89] found id: ""
	I0617 12:04:08.299717  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.299728  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:08.299741  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:08.299761  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:08.353399  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:08.353429  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:08.366713  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:08.366739  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:08.442727  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:08.442768  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:08.442786  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:08.527832  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:08.527875  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:11.073616  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:11.087085  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:11.087172  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:11.121706  165698 cri.go:89] found id: ""
	I0617 12:04:11.121745  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.121756  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:11.121765  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:11.121839  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:11.157601  165698 cri.go:89] found id: ""
	I0617 12:04:11.157637  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.157648  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:11.157657  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:11.157719  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:11.191929  165698 cri.go:89] found id: ""
	I0617 12:04:11.191963  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.191975  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:11.191983  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:11.192045  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:11.228391  165698 cri.go:89] found id: ""
	I0617 12:04:11.228416  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.228429  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:11.228437  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:11.228497  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:11.261880  165698 cri.go:89] found id: ""
	I0617 12:04:11.261911  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.261924  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:11.261932  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:11.261998  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:11.294615  165698 cri.go:89] found id: ""
	I0617 12:04:11.294663  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.294676  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:11.294684  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:11.294745  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:11.332813  165698 cri.go:89] found id: ""
	I0617 12:04:11.332840  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.332847  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:11.332854  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:11.332911  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:11.369032  165698 cri.go:89] found id: ""
	I0617 12:04:11.369060  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.369068  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:11.369078  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:11.369090  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:11.422522  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:11.422555  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:11.436961  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:11.436990  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:11.508679  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:11.508700  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:11.508713  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:11.586574  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:11.586610  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:09.224727  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:11.225763  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:09.330868  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:11.332256  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:14.335171  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:16.836514  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:14.127034  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:14.143228  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:14.143306  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:14.178368  165698 cri.go:89] found id: ""
	I0617 12:04:14.178396  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.178405  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:14.178410  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:14.178459  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:14.209971  165698 cri.go:89] found id: ""
	I0617 12:04:14.210001  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.210010  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:14.210015  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:14.210065  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:14.244888  165698 cri.go:89] found id: ""
	I0617 12:04:14.244922  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.244933  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:14.244940  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:14.244999  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:14.277875  165698 cri.go:89] found id: ""
	I0617 12:04:14.277904  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.277914  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:14.277922  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:14.277983  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:14.312698  165698 cri.go:89] found id: ""
	I0617 12:04:14.312724  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.312733  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:14.312739  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:14.312789  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:14.350952  165698 cri.go:89] found id: ""
	I0617 12:04:14.350977  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.350987  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:14.350993  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:14.351056  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:14.389211  165698 cri.go:89] found id: ""
	I0617 12:04:14.389235  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.389243  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:14.389250  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:14.389297  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:14.426171  165698 cri.go:89] found id: ""
	I0617 12:04:14.426200  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.426211  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:14.426224  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:14.426240  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:14.500403  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:14.500430  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:14.500446  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:14.588041  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:14.588078  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:14.631948  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:14.631987  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:14.681859  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:14.681895  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:17.198754  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:17.212612  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:17.212679  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:17.251011  165698 cri.go:89] found id: ""
	I0617 12:04:17.251041  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.251056  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:17.251065  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:17.251128  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:17.282964  165698 cri.go:89] found id: ""
	I0617 12:04:17.282989  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.282998  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:17.283003  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:17.283060  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:17.315570  165698 cri.go:89] found id: ""
	I0617 12:04:17.315601  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.315622  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:17.315630  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:17.315691  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:17.351186  165698 cri.go:89] found id: ""
	I0617 12:04:17.351212  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.351221  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:17.351228  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:17.351287  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:17.385609  165698 cri.go:89] found id: ""
	I0617 12:04:17.385653  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.385665  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:17.385673  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:17.385741  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:17.423890  165698 cri.go:89] found id: ""
	I0617 12:04:17.423923  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.423935  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:17.423944  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:17.424000  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:17.459543  165698 cri.go:89] found id: ""
	I0617 12:04:17.459575  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.459584  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:17.459592  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:17.459660  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:17.495554  165698 cri.go:89] found id: ""
	I0617 12:04:17.495584  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.495594  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:17.495606  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:17.495632  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:17.547835  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:17.547881  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:17.562391  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:17.562422  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:17.635335  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:17.635368  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:17.635387  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:17.708946  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:17.708988  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:13.724618  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:16.224689  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:13.832533  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:15.833210  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:17.841693  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:19.336775  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:21.835598  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:20.249833  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:20.266234  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:20.266301  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:20.307380  165698 cri.go:89] found id: ""
	I0617 12:04:20.307415  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.307424  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:20.307431  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:20.307508  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:20.347193  165698 cri.go:89] found id: ""
	I0617 12:04:20.347225  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.347235  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:20.347243  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:20.347311  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:20.382673  165698 cri.go:89] found id: ""
	I0617 12:04:20.382711  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.382724  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:20.382732  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:20.382800  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:20.419542  165698 cri.go:89] found id: ""
	I0617 12:04:20.419573  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.419582  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:20.419588  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:20.419652  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:20.454586  165698 cri.go:89] found id: ""
	I0617 12:04:20.454618  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.454629  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:20.454636  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:20.454708  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:20.501094  165698 cri.go:89] found id: ""
	I0617 12:04:20.501123  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.501131  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:20.501137  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:20.501190  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:20.537472  165698 cri.go:89] found id: ""
	I0617 12:04:20.537512  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.537524  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:20.537532  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:20.537597  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:20.571477  165698 cri.go:89] found id: ""
	I0617 12:04:20.571509  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.571519  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:20.571532  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:20.571550  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:20.611503  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:20.611540  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:20.663868  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:20.663905  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:20.677679  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:20.677704  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:20.753645  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:20.753663  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:20.753689  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:18.725428  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:21.224314  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:20.333214  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:22.333294  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:24.333835  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:26.335344  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:23.335535  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:23.349700  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:23.349766  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:23.384327  165698 cri.go:89] found id: ""
	I0617 12:04:23.384351  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.384358  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:23.384364  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:23.384417  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:23.427145  165698 cri.go:89] found id: ""
	I0617 12:04:23.427179  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.427190  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:23.427197  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:23.427254  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:23.461484  165698 cri.go:89] found id: ""
	I0617 12:04:23.461511  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.461522  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:23.461532  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:23.461600  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:23.501292  165698 cri.go:89] found id: ""
	I0617 12:04:23.501324  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.501334  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:23.501342  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:23.501407  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:23.537605  165698 cri.go:89] found id: ""
	I0617 12:04:23.537639  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.537649  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:23.537654  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:23.537727  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:23.576580  165698 cri.go:89] found id: ""
	I0617 12:04:23.576608  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.576616  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:23.576623  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:23.576685  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:23.613124  165698 cri.go:89] found id: ""
	I0617 12:04:23.613153  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.613161  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:23.613167  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:23.613216  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:23.648662  165698 cri.go:89] found id: ""
	I0617 12:04:23.648688  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.648695  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:23.648705  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:23.648717  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:23.661737  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:23.661762  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:23.732512  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:23.732531  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:23.732547  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:23.810165  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:23.810207  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:23.855099  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:23.855136  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:26.406038  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:26.422243  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:26.422323  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:26.460959  165698 cri.go:89] found id: ""
	I0617 12:04:26.460984  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.460994  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:26.461002  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:26.461078  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:26.498324  165698 cri.go:89] found id: ""
	I0617 12:04:26.498350  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.498362  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:26.498370  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:26.498435  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:26.535299  165698 cri.go:89] found id: ""
	I0617 12:04:26.535335  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.535346  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:26.535354  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:26.535417  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:26.574623  165698 cri.go:89] found id: ""
	I0617 12:04:26.574657  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.574668  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:26.574677  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:26.574738  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:26.611576  165698 cri.go:89] found id: ""
	I0617 12:04:26.611607  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.611615  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:26.611621  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:26.611672  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:26.645664  165698 cri.go:89] found id: ""
	I0617 12:04:26.645692  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.645700  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:26.645706  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:26.645755  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:26.679442  165698 cri.go:89] found id: ""
	I0617 12:04:26.679477  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.679488  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:26.679495  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:26.679544  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:26.713512  165698 cri.go:89] found id: ""
	I0617 12:04:26.713543  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.713551  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:26.713563  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:26.713584  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:26.770823  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:26.770853  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:26.784829  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:26.784858  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:26.868457  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:26.868480  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:26.868498  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:26.948522  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:26.948561  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:23.725626  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:26.224874  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:24.830639  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:26.836648  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:28.835682  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:31.335891  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:29.490891  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:29.504202  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:29.504273  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:29.544091  165698 cri.go:89] found id: ""
	I0617 12:04:29.544125  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.544137  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:29.544145  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:29.544203  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:29.581645  165698 cri.go:89] found id: ""
	I0617 12:04:29.581670  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.581679  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:29.581685  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:29.581736  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:29.621410  165698 cri.go:89] found id: ""
	I0617 12:04:29.621437  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.621447  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:29.621455  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:29.621522  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:29.659619  165698 cri.go:89] found id: ""
	I0617 12:04:29.659645  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.659654  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:29.659659  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:29.659718  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:29.698822  165698 cri.go:89] found id: ""
	I0617 12:04:29.698851  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.698859  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:29.698865  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:29.698957  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:29.741648  165698 cri.go:89] found id: ""
	I0617 12:04:29.741673  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.741680  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:29.741686  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:29.741752  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:29.777908  165698 cri.go:89] found id: ""
	I0617 12:04:29.777933  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.777941  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:29.777947  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:29.778013  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:29.812290  165698 cri.go:89] found id: ""
	I0617 12:04:29.812318  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.812328  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:29.812340  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:29.812357  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:29.857527  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:29.857552  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:29.916734  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:29.916776  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:29.930988  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:29.931013  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:30.006055  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:30.006080  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:30.006098  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:32.586549  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:32.600139  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:32.600262  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:32.641527  165698 cri.go:89] found id: ""
	I0617 12:04:32.641554  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.641570  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:32.641579  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:32.641635  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:32.687945  165698 cri.go:89] found id: ""
	I0617 12:04:32.687972  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.687981  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:32.687996  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:32.688068  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:32.725586  165698 cri.go:89] found id: ""
	I0617 12:04:32.725618  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.725629  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:32.725639  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:32.725696  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:32.764042  165698 cri.go:89] found id: ""
	I0617 12:04:32.764090  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.764107  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:32.764115  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:32.764183  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:32.800132  165698 cri.go:89] found id: ""
	I0617 12:04:32.800167  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.800180  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:32.800189  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:32.800256  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:32.840313  165698 cri.go:89] found id: ""
	I0617 12:04:32.840348  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.840359  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:32.840367  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:32.840434  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:32.878041  165698 cri.go:89] found id: ""
	I0617 12:04:32.878067  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.878076  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:32.878082  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:32.878134  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:32.913904  165698 cri.go:89] found id: ""
	I0617 12:04:32.913939  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.913950  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:32.913961  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:32.913974  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:04:28.725534  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:31.224885  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:29.330706  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:31.331989  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:33.337062  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:35.834807  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	W0617 12:04:32.987900  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:32.987929  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:32.987947  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:33.060919  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:33.060961  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:33.102602  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:33.102629  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:33.154112  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:33.154161  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:35.669336  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:35.682819  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:35.682907  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:35.717542  165698 cri.go:89] found id: ""
	I0617 12:04:35.717571  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.717579  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:35.717586  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:35.717646  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:35.754454  165698 cri.go:89] found id: ""
	I0617 12:04:35.754483  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.754495  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:35.754503  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:35.754566  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:35.791198  165698 cri.go:89] found id: ""
	I0617 12:04:35.791227  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.791237  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:35.791246  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:35.791309  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:35.826858  165698 cri.go:89] found id: ""
	I0617 12:04:35.826892  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.826903  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:35.826911  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:35.826974  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:35.866817  165698 cri.go:89] found id: ""
	I0617 12:04:35.866845  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.866853  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:35.866861  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:35.866909  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:35.918340  165698 cri.go:89] found id: ""
	I0617 12:04:35.918377  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.918388  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:35.918397  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:35.918466  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:35.960734  165698 cri.go:89] found id: ""
	I0617 12:04:35.960764  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.960774  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:35.960779  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:35.960841  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:36.002392  165698 cri.go:89] found id: ""
	I0617 12:04:36.002426  165698 logs.go:276] 0 containers: []
	W0617 12:04:36.002437  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:36.002449  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:36.002465  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:36.055130  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:36.055163  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:36.069181  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:36.069209  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:36.146078  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:36.146105  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:36.146120  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:36.223763  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:36.223797  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:33.723759  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:35.725954  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:38.225200  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:33.833990  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:36.332152  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:38.332570  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:37.836765  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:40.334594  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:42.336958  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:38.767375  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:38.781301  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:38.781357  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:38.821364  165698 cri.go:89] found id: ""
	I0617 12:04:38.821390  165698 logs.go:276] 0 containers: []
	W0617 12:04:38.821400  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:38.821409  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:38.821472  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:38.860727  165698 cri.go:89] found id: ""
	I0617 12:04:38.860784  165698 logs.go:276] 0 containers: []
	W0617 12:04:38.860796  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:38.860803  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:38.860868  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:38.902932  165698 cri.go:89] found id: ""
	I0617 12:04:38.902968  165698 logs.go:276] 0 containers: []
	W0617 12:04:38.902992  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:38.902999  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:38.903088  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:38.940531  165698 cri.go:89] found id: ""
	I0617 12:04:38.940564  165698 logs.go:276] 0 containers: []
	W0617 12:04:38.940576  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:38.940584  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:38.940649  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:38.975751  165698 cri.go:89] found id: ""
	I0617 12:04:38.975792  165698 logs.go:276] 0 containers: []
	W0617 12:04:38.975827  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:38.975835  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:38.975908  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:39.011156  165698 cri.go:89] found id: ""
	I0617 12:04:39.011196  165698 logs.go:276] 0 containers: []
	W0617 12:04:39.011206  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:39.011213  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:39.011269  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:39.049266  165698 cri.go:89] found id: ""
	I0617 12:04:39.049301  165698 logs.go:276] 0 containers: []
	W0617 12:04:39.049312  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:39.049320  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:39.049373  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:39.089392  165698 cri.go:89] found id: ""
	I0617 12:04:39.089425  165698 logs.go:276] 0 containers: []
	W0617 12:04:39.089434  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:39.089444  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:39.089459  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:39.166585  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:39.166607  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:39.166619  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:39.241910  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:39.241950  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:39.287751  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:39.287782  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:39.342226  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:39.342259  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:41.857327  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:41.871379  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:41.871446  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:41.907435  165698 cri.go:89] found id: ""
	I0617 12:04:41.907472  165698 logs.go:276] 0 containers: []
	W0617 12:04:41.907483  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:41.907492  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:41.907542  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:41.941684  165698 cri.go:89] found id: ""
	I0617 12:04:41.941725  165698 logs.go:276] 0 containers: []
	W0617 12:04:41.941737  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:41.941745  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:41.941819  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:41.977359  165698 cri.go:89] found id: ""
	I0617 12:04:41.977395  165698 logs.go:276] 0 containers: []
	W0617 12:04:41.977407  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:41.977415  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:41.977478  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:42.015689  165698 cri.go:89] found id: ""
	I0617 12:04:42.015723  165698 logs.go:276] 0 containers: []
	W0617 12:04:42.015734  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:42.015742  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:42.015803  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:42.050600  165698 cri.go:89] found id: ""
	I0617 12:04:42.050626  165698 logs.go:276] 0 containers: []
	W0617 12:04:42.050637  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:42.050645  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:42.050707  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:42.088174  165698 cri.go:89] found id: ""
	I0617 12:04:42.088201  165698 logs.go:276] 0 containers: []
	W0617 12:04:42.088212  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:42.088221  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:42.088290  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:42.127335  165698 cri.go:89] found id: ""
	I0617 12:04:42.127364  165698 logs.go:276] 0 containers: []
	W0617 12:04:42.127375  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:42.127384  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:42.127443  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:42.163435  165698 cri.go:89] found id: ""
	I0617 12:04:42.163481  165698 logs.go:276] 0 containers: []
	W0617 12:04:42.163492  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:42.163505  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:42.163527  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:42.233233  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:42.233262  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:42.233280  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:42.311695  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:42.311741  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:42.378134  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:42.378163  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:42.439614  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:42.439647  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:40.726373  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:43.225144  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:40.336291  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:42.831220  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:44.835811  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:47.335772  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:44.953738  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:44.967822  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:44.967884  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:45.004583  165698 cri.go:89] found id: ""
	I0617 12:04:45.004687  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.004732  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:45.004741  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:45.004797  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:45.038912  165698 cri.go:89] found id: ""
	I0617 12:04:45.038939  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.038949  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:45.038957  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:45.039026  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:45.073594  165698 cri.go:89] found id: ""
	I0617 12:04:45.073620  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.073628  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:45.073634  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:45.073684  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:45.108225  165698 cri.go:89] found id: ""
	I0617 12:04:45.108253  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.108261  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:45.108267  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:45.108317  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:45.139522  165698 cri.go:89] found id: ""
	I0617 12:04:45.139545  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.139553  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:45.139559  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:45.139609  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:45.173705  165698 cri.go:89] found id: ""
	I0617 12:04:45.173735  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.173745  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:45.173752  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:45.173813  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:45.206448  165698 cri.go:89] found id: ""
	I0617 12:04:45.206477  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.206486  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:45.206493  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:45.206551  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:45.242925  165698 cri.go:89] found id: ""
	I0617 12:04:45.242952  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.242962  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:45.242981  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:45.242998  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:45.294669  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:45.294700  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:45.307642  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:45.307670  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:45.381764  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:45.381788  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:45.381805  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:45.469022  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:45.469056  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:45.724236  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:48.225656  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:45.332888  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:47.832326  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:49.337260  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:51.338718  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:48.014169  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:48.029895  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:48.029984  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:48.086421  165698 cri.go:89] found id: ""
	I0617 12:04:48.086456  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.086468  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:48.086477  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:48.086554  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:48.135673  165698 cri.go:89] found id: ""
	I0617 12:04:48.135705  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.135713  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:48.135733  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:48.135808  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:48.184330  165698 cri.go:89] found id: ""
	I0617 12:04:48.184353  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.184362  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:48.184368  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:48.184418  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:48.221064  165698 cri.go:89] found id: ""
	I0617 12:04:48.221095  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.221103  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:48.221112  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:48.221175  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:48.264464  165698 cri.go:89] found id: ""
	I0617 12:04:48.264495  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.264502  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:48.264508  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:48.264561  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:48.302144  165698 cri.go:89] found id: ""
	I0617 12:04:48.302180  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.302191  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:48.302199  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:48.302263  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:48.345431  165698 cri.go:89] found id: ""
	I0617 12:04:48.345458  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.345465  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:48.345472  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:48.345539  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:48.383390  165698 cri.go:89] found id: ""
	I0617 12:04:48.383423  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.383434  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:48.383447  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:48.383478  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:48.422328  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:48.422356  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:48.473698  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:48.473735  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:48.488399  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:48.488429  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:48.566851  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:48.566871  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:48.566884  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:51.149626  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:51.162855  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:51.162926  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:51.199056  165698 cri.go:89] found id: ""
	I0617 12:04:51.199091  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.199102  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:51.199109  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:51.199172  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:51.238773  165698 cri.go:89] found id: ""
	I0617 12:04:51.238810  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.238821  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:51.238827  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:51.238883  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:51.279049  165698 cri.go:89] found id: ""
	I0617 12:04:51.279079  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.279092  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:51.279100  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:51.279166  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:51.324923  165698 cri.go:89] found id: ""
	I0617 12:04:51.324957  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.324969  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:51.324976  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:51.325028  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:51.363019  165698 cri.go:89] found id: ""
	I0617 12:04:51.363055  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.363068  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:51.363077  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:51.363142  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:51.399620  165698 cri.go:89] found id: ""
	I0617 12:04:51.399652  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.399661  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:51.399675  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:51.399758  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:51.434789  165698 cri.go:89] found id: ""
	I0617 12:04:51.434824  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.434836  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:51.434844  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:51.434910  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:51.470113  165698 cri.go:89] found id: ""
	I0617 12:04:51.470140  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.470149  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:51.470160  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:51.470176  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:51.526138  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:51.526173  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:51.539451  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:51.539491  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:51.613418  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:51.613437  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:51.613450  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:51.691971  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:51.692010  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:50.724405  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:52.725426  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:50.332363  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:52.332932  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:53.834955  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:56.334584  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:54.234514  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:54.249636  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:54.249724  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:54.283252  165698 cri.go:89] found id: ""
	I0617 12:04:54.283287  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.283300  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:54.283307  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:54.283367  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:54.319153  165698 cri.go:89] found id: ""
	I0617 12:04:54.319207  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.319218  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:54.319226  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:54.319290  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:54.361450  165698 cri.go:89] found id: ""
	I0617 12:04:54.361480  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.361491  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:54.361498  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:54.361562  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:54.397806  165698 cri.go:89] found id: ""
	I0617 12:04:54.397834  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.397843  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:54.397849  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:54.397899  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:54.447119  165698 cri.go:89] found id: ""
	I0617 12:04:54.447147  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.447155  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:54.447161  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:54.447211  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:54.489717  165698 cri.go:89] found id: ""
	I0617 12:04:54.489751  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.489760  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:54.489766  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:54.489830  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:54.532840  165698 cri.go:89] found id: ""
	I0617 12:04:54.532943  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.532975  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:54.532989  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:54.533100  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:54.568227  165698 cri.go:89] found id: ""
	I0617 12:04:54.568369  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.568391  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:54.568403  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:54.568420  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:54.583140  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:54.583174  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:54.661258  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:54.661281  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:54.661296  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:54.750472  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:54.750511  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:54.797438  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:54.797467  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:57.349800  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:57.364820  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:57.364879  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:57.405065  165698 cri.go:89] found id: ""
	I0617 12:04:57.405093  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.405101  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:57.405106  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:57.405153  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:57.445707  165698 cri.go:89] found id: ""
	I0617 12:04:57.445741  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.445752  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:57.445760  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:57.445829  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:57.486911  165698 cri.go:89] found id: ""
	I0617 12:04:57.486940  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.486948  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:57.486955  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:57.487014  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:57.521218  165698 cri.go:89] found id: ""
	I0617 12:04:57.521254  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.521266  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:57.521274  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:57.521342  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:57.555762  165698 cri.go:89] found id: ""
	I0617 12:04:57.555794  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.555803  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:57.555808  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:57.555863  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:57.591914  165698 cri.go:89] found id: ""
	I0617 12:04:57.591945  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.591956  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:57.591971  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:57.592037  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:57.626435  165698 cri.go:89] found id: ""
	I0617 12:04:57.626463  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.626471  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:57.626477  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:57.626527  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:57.665088  165698 cri.go:89] found id: ""
	I0617 12:04:57.665118  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.665126  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:57.665137  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:57.665152  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:57.716284  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:57.716316  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:57.730179  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:57.730204  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:57.808904  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:57.808933  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:57.808954  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:57.894499  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:57.894530  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:55.224507  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:57.224583  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:54.831112  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:56.832477  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:58.334640  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:00.335137  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:00.435957  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:00.450812  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:00.450890  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:00.491404  165698 cri.go:89] found id: ""
	I0617 12:05:00.491432  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.491440  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:00.491446  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:00.491523  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:00.526711  165698 cri.go:89] found id: ""
	I0617 12:05:00.526739  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.526747  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:00.526753  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:00.526817  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:00.562202  165698 cri.go:89] found id: ""
	I0617 12:05:00.562236  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.562246  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:00.562255  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:00.562323  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:00.602754  165698 cri.go:89] found id: ""
	I0617 12:05:00.602790  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.602802  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:00.602811  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:00.602877  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:00.645666  165698 cri.go:89] found id: ""
	I0617 12:05:00.645703  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.645715  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:00.645723  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:00.645788  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:00.684649  165698 cri.go:89] found id: ""
	I0617 12:05:00.684685  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.684694  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:00.684701  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:00.684784  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:00.727139  165698 cri.go:89] found id: ""
	I0617 12:05:00.727160  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.727167  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:00.727173  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:00.727238  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:00.764401  165698 cri.go:89] found id: ""
	I0617 12:05:00.764433  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.764444  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:00.764455  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:00.764474  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:00.777301  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:00.777322  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:00.849752  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:00.849778  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:00.849795  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:00.930220  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:00.930266  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:00.970076  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:00.970116  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:59.226429  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:01.725079  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:59.337081  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:01.834932  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:02.834132  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:05.334066  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:07.335366  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:03.526070  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:03.541150  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:03.541229  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:03.584416  165698 cri.go:89] found id: ""
	I0617 12:05:03.584451  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.584463  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:03.584472  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:03.584535  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:03.623509  165698 cri.go:89] found id: ""
	I0617 12:05:03.623543  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.623552  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:03.623558  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:03.623611  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:03.661729  165698 cri.go:89] found id: ""
	I0617 12:05:03.661765  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.661778  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:03.661787  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:03.661852  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:03.702952  165698 cri.go:89] found id: ""
	I0617 12:05:03.702985  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.703008  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:03.703033  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:03.703100  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:03.746534  165698 cri.go:89] found id: ""
	I0617 12:05:03.746570  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.746578  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:03.746584  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:03.746648  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:03.784472  165698 cri.go:89] found id: ""
	I0617 12:05:03.784506  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.784515  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:03.784522  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:03.784580  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:03.821033  165698 cri.go:89] found id: ""
	I0617 12:05:03.821066  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.821077  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:03.821085  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:03.821146  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:03.859438  165698 cri.go:89] found id: ""
	I0617 12:05:03.859474  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.859487  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:03.859497  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:03.859513  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:03.940723  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:03.940770  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:03.986267  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:03.986303  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:04.037999  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:04.038039  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:04.051382  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:04.051415  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:04.121593  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:06.622475  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:06.636761  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:06.636842  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:06.673954  165698 cri.go:89] found id: ""
	I0617 12:05:06.673995  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.674007  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:06.674015  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:06.674084  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:06.708006  165698 cri.go:89] found id: ""
	I0617 12:05:06.708037  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.708047  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:06.708055  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:06.708124  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:06.743819  165698 cri.go:89] found id: ""
	I0617 12:05:06.743852  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.743864  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:06.743872  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:06.743934  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:06.781429  165698 cri.go:89] found id: ""
	I0617 12:05:06.781457  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.781465  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:06.781473  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:06.781540  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:06.818404  165698 cri.go:89] found id: ""
	I0617 12:05:06.818435  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.818447  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:06.818456  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:06.818516  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:06.857880  165698 cri.go:89] found id: ""
	I0617 12:05:06.857913  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.857924  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:06.857933  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:06.857993  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:06.893010  165698 cri.go:89] found id: ""
	I0617 12:05:06.893050  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.893059  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:06.893065  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:06.893118  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:06.926302  165698 cri.go:89] found id: ""
	I0617 12:05:06.926336  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.926347  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:06.926360  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:06.926378  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:06.997173  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:06.997197  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:06.997215  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:07.082843  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:07.082885  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:07.122542  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:07.122572  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:07.177033  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:07.177070  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:03.725338  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:06.225466  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:04.331639  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:06.331988  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:08.332139  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:09.835119  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:12.333346  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:09.693217  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:09.707043  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:09.707110  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:09.742892  165698 cri.go:89] found id: ""
	I0617 12:05:09.742918  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.742927  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:09.742933  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:09.742982  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:09.776938  165698 cri.go:89] found id: ""
	I0617 12:05:09.776969  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.776976  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:09.776982  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:09.777030  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:09.813613  165698 cri.go:89] found id: ""
	I0617 12:05:09.813643  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.813651  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:09.813658  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:09.813705  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:09.855483  165698 cri.go:89] found id: ""
	I0617 12:05:09.855516  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.855525  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:09.855532  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:09.855596  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:09.890808  165698 cri.go:89] found id: ""
	I0617 12:05:09.890844  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.890854  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:09.890862  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:09.890930  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:09.927656  165698 cri.go:89] found id: ""
	I0617 12:05:09.927684  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.927693  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:09.927703  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:09.927758  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:09.968130  165698 cri.go:89] found id: ""
	I0617 12:05:09.968163  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.968174  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:09.968183  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:09.968239  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:10.010197  165698 cri.go:89] found id: ""
	I0617 12:05:10.010220  165698 logs.go:276] 0 containers: []
	W0617 12:05:10.010228  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:10.010239  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:10.010252  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:10.063999  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:10.064040  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:10.078837  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:10.078873  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:10.155932  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:10.155954  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:10.155967  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:10.232859  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:10.232901  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:12.772943  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:12.787936  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:12.788024  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:12.828457  165698 cri.go:89] found id: ""
	I0617 12:05:12.828483  165698 logs.go:276] 0 containers: []
	W0617 12:05:12.828491  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:12.828498  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:12.828562  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:12.862265  165698 cri.go:89] found id: ""
	I0617 12:05:12.862296  165698 logs.go:276] 0 containers: []
	W0617 12:05:12.862306  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:12.862313  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:12.862372  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:12.899673  165698 cri.go:89] found id: ""
	I0617 12:05:12.899698  165698 logs.go:276] 0 containers: []
	W0617 12:05:12.899706  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:12.899712  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:12.899759  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:12.943132  165698 cri.go:89] found id: ""
	I0617 12:05:12.943161  165698 logs.go:276] 0 containers: []
	W0617 12:05:12.943169  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:12.943175  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:12.943227  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:08.724369  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:10.725166  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:13.224799  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:10.333769  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:12.832493  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:14.336437  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:16.835155  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:12.985651  165698 cri.go:89] found id: ""
	I0617 12:05:12.985677  165698 logs.go:276] 0 containers: []
	W0617 12:05:12.985685  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:12.985691  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:12.985747  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:13.021484  165698 cri.go:89] found id: ""
	I0617 12:05:13.021508  165698 logs.go:276] 0 containers: []
	W0617 12:05:13.021516  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:13.021522  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:13.021569  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:13.060658  165698 cri.go:89] found id: ""
	I0617 12:05:13.060689  165698 logs.go:276] 0 containers: []
	W0617 12:05:13.060705  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:13.060713  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:13.060782  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:13.106008  165698 cri.go:89] found id: ""
	I0617 12:05:13.106041  165698 logs.go:276] 0 containers: []
	W0617 12:05:13.106052  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:13.106066  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:13.106083  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:13.160199  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:13.160231  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:13.173767  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:13.173804  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:13.245358  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:13.245383  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:13.245399  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:13.323046  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:13.323085  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:15.872024  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:15.885550  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:15.885624  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:15.920303  165698 cri.go:89] found id: ""
	I0617 12:05:15.920332  165698 logs.go:276] 0 containers: []
	W0617 12:05:15.920344  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:15.920358  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:15.920423  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:15.955132  165698 cri.go:89] found id: ""
	I0617 12:05:15.955158  165698 logs.go:276] 0 containers: []
	W0617 12:05:15.955166  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:15.955172  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:15.955220  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:15.992995  165698 cri.go:89] found id: ""
	I0617 12:05:15.993034  165698 logs.go:276] 0 containers: []
	W0617 12:05:15.993053  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:15.993060  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:15.993127  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:16.032603  165698 cri.go:89] found id: ""
	I0617 12:05:16.032638  165698 logs.go:276] 0 containers: []
	W0617 12:05:16.032650  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:16.032658  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:16.032716  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:16.071770  165698 cri.go:89] found id: ""
	I0617 12:05:16.071804  165698 logs.go:276] 0 containers: []
	W0617 12:05:16.071816  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:16.071824  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:16.071899  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:16.106172  165698 cri.go:89] found id: ""
	I0617 12:05:16.106206  165698 logs.go:276] 0 containers: []
	W0617 12:05:16.106218  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:16.106226  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:16.106292  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:16.139406  165698 cri.go:89] found id: ""
	I0617 12:05:16.139436  165698 logs.go:276] 0 containers: []
	W0617 12:05:16.139443  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:16.139449  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:16.139517  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:16.174513  165698 cri.go:89] found id: ""
	I0617 12:05:16.174554  165698 logs.go:276] 0 containers: []
	W0617 12:05:16.174565  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:16.174580  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:16.174597  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:16.240912  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:16.240940  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:16.240958  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:16.323853  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:16.323891  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:16.372632  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:16.372659  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:16.428367  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:16.428406  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:15.224918  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:17.725226  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:15.332512  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:17.833710  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:19.334324  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:21.334654  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:18.943551  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:18.957394  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:18.957490  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:18.991967  165698 cri.go:89] found id: ""
	I0617 12:05:18.992006  165698 logs.go:276] 0 containers: []
	W0617 12:05:18.992017  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:18.992027  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:18.992092  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:19.025732  165698 cri.go:89] found id: ""
	I0617 12:05:19.025763  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.025775  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:19.025783  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:19.025856  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:19.061786  165698 cri.go:89] found id: ""
	I0617 12:05:19.061820  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.061830  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:19.061838  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:19.061906  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:19.098819  165698 cri.go:89] found id: ""
	I0617 12:05:19.098856  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.098868  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:19.098876  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:19.098947  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:19.139840  165698 cri.go:89] found id: ""
	I0617 12:05:19.139877  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.139886  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:19.139894  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:19.139965  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:19.176546  165698 cri.go:89] found id: ""
	I0617 12:05:19.176578  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.176590  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:19.176598  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:19.176671  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:19.209948  165698 cri.go:89] found id: ""
	I0617 12:05:19.209985  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.209997  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:19.210005  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:19.210087  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:19.246751  165698 cri.go:89] found id: ""
	I0617 12:05:19.246788  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.246799  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:19.246812  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:19.246830  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:19.322272  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:19.322316  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:19.370147  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:19.370187  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:19.422699  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:19.422749  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:19.437255  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:19.437284  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:19.510077  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:22.010840  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:22.024791  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:22.024879  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:22.060618  165698 cri.go:89] found id: ""
	I0617 12:05:22.060658  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.060667  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:22.060674  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:22.060742  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:22.100228  165698 cri.go:89] found id: ""
	I0617 12:05:22.100259  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.100268  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:22.100274  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:22.100343  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:22.135629  165698 cri.go:89] found id: ""
	I0617 12:05:22.135657  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.135665  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:22.135671  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:22.135730  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:22.186027  165698 cri.go:89] found id: ""
	I0617 12:05:22.186064  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.186076  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:22.186085  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:22.186148  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:22.220991  165698 cri.go:89] found id: ""
	I0617 12:05:22.221019  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.221029  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:22.221037  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:22.221104  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:22.266306  165698 cri.go:89] found id: ""
	I0617 12:05:22.266337  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.266348  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:22.266357  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:22.266414  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:22.303070  165698 cri.go:89] found id: ""
	I0617 12:05:22.303104  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.303116  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:22.303124  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:22.303190  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:22.339792  165698 cri.go:89] found id: ""
	I0617 12:05:22.339819  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.339829  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:22.339840  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:22.339856  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:22.422360  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:22.422397  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:22.465744  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:22.465777  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:22.516199  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:22.516232  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:22.529961  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:22.529983  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:22.601519  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:20.225369  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:22.226699  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:19.834562  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:21.837426  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:23.336540  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:25.835706  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:25.102655  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:25.116893  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:25.116959  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:25.156370  165698 cri.go:89] found id: ""
	I0617 12:05:25.156396  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.156404  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:25.156410  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:25.156468  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:25.193123  165698 cri.go:89] found id: ""
	I0617 12:05:25.193199  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.193221  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:25.193234  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:25.193301  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:25.232182  165698 cri.go:89] found id: ""
	I0617 12:05:25.232209  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.232219  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:25.232227  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:25.232285  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:25.266599  165698 cri.go:89] found id: ""
	I0617 12:05:25.266630  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.266639  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:25.266645  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:25.266701  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:25.308732  165698 cri.go:89] found id: ""
	I0617 12:05:25.308762  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.308770  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:25.308776  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:25.308836  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:25.348817  165698 cri.go:89] found id: ""
	I0617 12:05:25.348858  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.348871  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:25.348879  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:25.348946  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:25.389343  165698 cri.go:89] found id: ""
	I0617 12:05:25.389375  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.389387  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:25.389393  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:25.389452  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:25.427014  165698 cri.go:89] found id: ""
	I0617 12:05:25.427043  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.427055  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:25.427067  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:25.427083  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:25.441361  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:25.441390  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:25.518967  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:25.518993  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:25.519006  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:25.601411  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:25.601450  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:25.651636  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:25.651674  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:24.725515  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:27.223821  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:24.333548  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:26.832428  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:27.836661  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:30.334313  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:32.336489  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:28.202148  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:28.215710  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:28.215792  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:28.254961  165698 cri.go:89] found id: ""
	I0617 12:05:28.254986  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.255000  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:28.255007  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:28.255061  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:28.292574  165698 cri.go:89] found id: ""
	I0617 12:05:28.292606  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.292614  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:28.292620  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:28.292683  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:28.329036  165698 cri.go:89] found id: ""
	I0617 12:05:28.329067  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.329077  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:28.329085  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:28.329152  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:28.366171  165698 cri.go:89] found id: ""
	I0617 12:05:28.366197  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.366206  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:28.366212  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:28.366273  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:28.401380  165698 cri.go:89] found id: ""
	I0617 12:05:28.401407  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.401417  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:28.401424  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:28.401486  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:28.438767  165698 cri.go:89] found id: ""
	I0617 12:05:28.438798  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.438810  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:28.438817  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:28.438876  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:28.472706  165698 cri.go:89] found id: ""
	I0617 12:05:28.472761  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.472772  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:28.472779  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:28.472829  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:28.509525  165698 cri.go:89] found id: ""
	I0617 12:05:28.509548  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.509556  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:28.509565  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:28.509577  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:28.606008  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:28.606059  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:28.665846  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:28.665874  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:28.721599  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:28.721627  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:28.735040  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:28.735062  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:28.811954  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:31.312554  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:31.326825  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:31.326905  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:31.364862  165698 cri.go:89] found id: ""
	I0617 12:05:31.364891  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.364902  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:31.364910  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:31.364976  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:31.396979  165698 cri.go:89] found id: ""
	I0617 12:05:31.397013  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.397027  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:31.397035  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:31.397098  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:31.430617  165698 cri.go:89] found id: ""
	I0617 12:05:31.430647  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.430657  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:31.430665  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:31.430728  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:31.462308  165698 cri.go:89] found id: ""
	I0617 12:05:31.462338  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.462345  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:31.462350  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:31.462399  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:31.495406  165698 cri.go:89] found id: ""
	I0617 12:05:31.495435  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.495444  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:31.495452  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:31.495553  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:31.538702  165698 cri.go:89] found id: ""
	I0617 12:05:31.538729  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.538739  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:31.538750  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:31.538813  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:31.572637  165698 cri.go:89] found id: ""
	I0617 12:05:31.572666  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.572677  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:31.572685  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:31.572745  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:31.609307  165698 cri.go:89] found id: ""
	I0617 12:05:31.609341  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.609352  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:31.609364  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:31.609380  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:31.622445  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:31.622471  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:31.699170  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:31.699191  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:31.699209  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:31.775115  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:31.775156  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:31.815836  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:31.815866  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:29.225028  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:31.727009  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:29.333400  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:31.834599  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:34.836093  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:37.335140  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:34.372097  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:34.393542  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:34.393607  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:34.437265  165698 cri.go:89] found id: ""
	I0617 12:05:34.437294  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.437305  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:34.437314  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:34.437382  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:34.474566  165698 cri.go:89] found id: ""
	I0617 12:05:34.474596  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.474609  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:34.474617  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:34.474680  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:34.510943  165698 cri.go:89] found id: ""
	I0617 12:05:34.510975  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.510986  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:34.511000  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:34.511072  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:34.548124  165698 cri.go:89] found id: ""
	I0617 12:05:34.548160  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.548172  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:34.548179  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:34.548241  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:34.582428  165698 cri.go:89] found id: ""
	I0617 12:05:34.582453  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.582460  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:34.582467  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:34.582514  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:34.616895  165698 cri.go:89] found id: ""
	I0617 12:05:34.616937  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.616950  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:34.616957  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:34.617019  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:34.656116  165698 cri.go:89] found id: ""
	I0617 12:05:34.656144  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.656155  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:34.656162  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:34.656226  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:34.695649  165698 cri.go:89] found id: ""
	I0617 12:05:34.695680  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.695692  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:34.695705  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:34.695722  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:34.747910  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:34.747956  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:34.762177  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:34.762206  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:34.840395  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:34.840423  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:34.840440  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:34.922962  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:34.923002  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:37.464659  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:37.480351  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:37.480416  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:37.521249  165698 cri.go:89] found id: ""
	I0617 12:05:37.521279  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.521286  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:37.521293  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:37.521340  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:37.561053  165698 cri.go:89] found id: ""
	I0617 12:05:37.561079  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.561087  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:37.561094  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:37.561151  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:37.599019  165698 cri.go:89] found id: ""
	I0617 12:05:37.599057  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.599066  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:37.599074  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:37.599134  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:37.638276  165698 cri.go:89] found id: ""
	I0617 12:05:37.638304  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.638315  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:37.638323  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:37.638389  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:37.677819  165698 cri.go:89] found id: ""
	I0617 12:05:37.677845  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.677853  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:37.677859  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:37.677910  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:37.715850  165698 cri.go:89] found id: ""
	I0617 12:05:37.715877  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.715888  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:37.715897  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:37.715962  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:37.755533  165698 cri.go:89] found id: ""
	I0617 12:05:37.755563  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.755570  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:37.755576  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:37.755636  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:37.791826  165698 cri.go:89] found id: ""
	I0617 12:05:37.791850  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.791859  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:37.791872  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:37.791888  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:37.844824  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:37.844853  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:37.860933  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:37.860963  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:37.926497  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:37.926519  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:37.926535  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:34.224078  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:36.224464  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:38.224753  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:34.333888  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:36.832374  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:39.336299  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:41.834494  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:38.003814  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:38.003853  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:40.546386  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:40.560818  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:40.560896  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:40.596737  165698 cri.go:89] found id: ""
	I0617 12:05:40.596777  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.596784  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:40.596791  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:40.596844  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:40.631518  165698 cri.go:89] found id: ""
	I0617 12:05:40.631556  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.631570  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:40.631611  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:40.631683  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:40.674962  165698 cri.go:89] found id: ""
	I0617 12:05:40.674997  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.675006  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:40.675012  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:40.675064  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:40.716181  165698 cri.go:89] found id: ""
	I0617 12:05:40.716210  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.716218  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:40.716226  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:40.716286  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:40.756312  165698 cri.go:89] found id: ""
	I0617 12:05:40.756339  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.756348  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:40.756353  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:40.756406  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:40.791678  165698 cri.go:89] found id: ""
	I0617 12:05:40.791733  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.791750  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:40.791759  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:40.791830  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:40.830717  165698 cri.go:89] found id: ""
	I0617 12:05:40.830754  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.830766  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:40.830774  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:40.830854  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:40.868139  165698 cri.go:89] found id: ""
	I0617 12:05:40.868169  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.868178  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:40.868198  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:40.868224  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:40.920319  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:40.920353  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:40.934948  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:40.934974  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:41.005349  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:41.005371  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:41.005388  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:41.086783  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:41.086842  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:40.724767  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:43.223836  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:38.834031  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:41.331190  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:43.332593  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:44.334114  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:46.334595  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:43.625515  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:43.638942  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:43.639019  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:43.673703  165698 cri.go:89] found id: ""
	I0617 12:05:43.673735  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.673747  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:43.673756  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:43.673822  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:43.709417  165698 cri.go:89] found id: ""
	I0617 12:05:43.709449  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.709460  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:43.709468  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:43.709529  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:43.742335  165698 cri.go:89] found id: ""
	I0617 12:05:43.742368  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.742379  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:43.742389  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:43.742449  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:43.779112  165698 cri.go:89] found id: ""
	I0617 12:05:43.779141  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.779150  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:43.779155  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:43.779219  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:43.813362  165698 cri.go:89] found id: ""
	I0617 12:05:43.813397  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.813406  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:43.813414  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:43.813464  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:43.850456  165698 cri.go:89] found id: ""
	I0617 12:05:43.850484  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.850493  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:43.850499  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:43.850547  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:43.884527  165698 cri.go:89] found id: ""
	I0617 12:05:43.884555  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.884564  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:43.884571  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:43.884632  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:43.921440  165698 cri.go:89] found id: ""
	I0617 12:05:43.921476  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.921488  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:43.921501  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:43.921517  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:43.973687  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:43.973727  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:43.988114  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:43.988143  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:44.055084  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:44.055119  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:44.055138  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:44.134628  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:44.134665  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:46.677852  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:46.690688  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:46.690747  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:46.724055  165698 cri.go:89] found id: ""
	I0617 12:05:46.724090  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.724101  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:46.724110  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:46.724171  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:46.759119  165698 cri.go:89] found id: ""
	I0617 12:05:46.759150  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.759161  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:46.759169  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:46.759227  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:46.796392  165698 cri.go:89] found id: ""
	I0617 12:05:46.796424  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.796435  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:46.796442  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:46.796504  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:46.831727  165698 cri.go:89] found id: ""
	I0617 12:05:46.831761  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.831770  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:46.831777  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:46.831845  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:46.866662  165698 cri.go:89] found id: ""
	I0617 12:05:46.866693  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.866702  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:46.866708  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:46.866757  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:46.905045  165698 cri.go:89] found id: ""
	I0617 12:05:46.905070  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.905078  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:46.905084  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:46.905130  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:46.940879  165698 cri.go:89] found id: ""
	I0617 12:05:46.940907  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.940915  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:46.940926  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:46.940974  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:46.977247  165698 cri.go:89] found id: ""
	I0617 12:05:46.977290  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.977301  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:46.977314  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:46.977331  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:47.046094  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:47.046116  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:47.046133  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:47.122994  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:47.123038  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:47.166273  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:47.166313  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:47.221392  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:47.221429  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
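The block above is the restart loop for the old-k8s-version node (pid 165698): with the control plane down, every crictl query for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet and kubernetes-dashboard returns an empty ID list, so only kubelet, dmesg, CRI-O and container-status logs can be gathered. A minimal Go sketch of that per-component probe, assuming crictl and sudo are available on the node (this is an illustration of the pattern in the log, not minikube's cri.go implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the "sudo crictl ps -a --quiet --name=<component>" probe
// seen in the log: it returns the IDs of all containers (running or exited) whose
// name matches the component, or an empty slice when none exist.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", c, len(ids), ids)
	}
}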
	I0617 12:05:45.228807  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:47.723584  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:45.834805  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:48.333121  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:48.335758  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:50.833989  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:49.739113  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:49.752880  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:49.753004  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:49.791177  165698 cri.go:89] found id: ""
	I0617 12:05:49.791218  165698 logs.go:276] 0 containers: []
	W0617 12:05:49.791242  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:49.791251  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:49.791322  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:49.831602  165698 cri.go:89] found id: ""
	I0617 12:05:49.831633  165698 logs.go:276] 0 containers: []
	W0617 12:05:49.831644  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:49.831652  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:49.831719  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:49.870962  165698 cri.go:89] found id: ""
	I0617 12:05:49.870998  165698 logs.go:276] 0 containers: []
	W0617 12:05:49.871011  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:49.871019  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:49.871092  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:49.917197  165698 cri.go:89] found id: ""
	I0617 12:05:49.917232  165698 logs.go:276] 0 containers: []
	W0617 12:05:49.917243  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:49.917252  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:49.917320  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:49.952997  165698 cri.go:89] found id: ""
	I0617 12:05:49.953034  165698 logs.go:276] 0 containers: []
	W0617 12:05:49.953047  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:49.953056  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:49.953114  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:50.001925  165698 cri.go:89] found id: ""
	I0617 12:05:50.001965  165698 logs.go:276] 0 containers: []
	W0617 12:05:50.001977  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:50.001986  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:50.002059  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:50.043374  165698 cri.go:89] found id: ""
	I0617 12:05:50.043403  165698 logs.go:276] 0 containers: []
	W0617 12:05:50.043412  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:50.043419  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:50.043496  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:50.082974  165698 cri.go:89] found id: ""
	I0617 12:05:50.083009  165698 logs.go:276] 0 containers: []
	W0617 12:05:50.083020  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:50.083029  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:50.083043  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:50.134116  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:50.134159  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:50.148478  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:50.148511  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:50.227254  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:50.227276  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:50.227288  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:50.305920  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:50.305960  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:52.848811  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:52.862612  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:52.862669  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:52.896379  165698 cri.go:89] found id: ""
	I0617 12:05:52.896410  165698 logs.go:276] 0 containers: []
	W0617 12:05:52.896421  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:52.896429  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:52.896488  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:52.933387  165698 cri.go:89] found id: ""
	I0617 12:05:52.933422  165698 logs.go:276] 0 containers: []
	W0617 12:05:52.933432  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:52.933439  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:52.933501  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:52.971055  165698 cri.go:89] found id: ""
	I0617 12:05:52.971091  165698 logs.go:276] 0 containers: []
	W0617 12:05:52.971102  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:52.971110  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:52.971168  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:49.724816  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:52.224660  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:50.334092  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:52.831686  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:52.835473  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:55.334017  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:57.334957  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:53.003815  165698 cri.go:89] found id: ""
	I0617 12:05:53.003846  165698 logs.go:276] 0 containers: []
	W0617 12:05:53.003857  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:53.003864  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:53.003927  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:53.039133  165698 cri.go:89] found id: ""
	I0617 12:05:53.039161  165698 logs.go:276] 0 containers: []
	W0617 12:05:53.039169  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:53.039175  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:53.039229  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:53.077703  165698 cri.go:89] found id: ""
	I0617 12:05:53.077756  165698 logs.go:276] 0 containers: []
	W0617 12:05:53.077773  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:53.077780  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:53.077831  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:53.119187  165698 cri.go:89] found id: ""
	I0617 12:05:53.119216  165698 logs.go:276] 0 containers: []
	W0617 12:05:53.119223  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:53.119230  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:53.119287  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:53.154423  165698 cri.go:89] found id: ""
	I0617 12:05:53.154457  165698 logs.go:276] 0 containers: []
	W0617 12:05:53.154467  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:53.154480  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:53.154496  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:53.202745  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:53.202778  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:53.216510  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:53.216537  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:53.295687  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:53.295712  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:53.295732  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:53.375064  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:53.375095  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:55.915113  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:55.929155  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:55.929239  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:55.964589  165698 cri.go:89] found id: ""
	I0617 12:05:55.964625  165698 logs.go:276] 0 containers: []
	W0617 12:05:55.964634  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:55.964640  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:55.964702  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:56.003659  165698 cri.go:89] found id: ""
	I0617 12:05:56.003691  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.003701  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:56.003709  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:56.003778  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:56.039674  165698 cri.go:89] found id: ""
	I0617 12:05:56.039707  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.039717  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:56.039724  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:56.039786  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:56.077695  165698 cri.go:89] found id: ""
	I0617 12:05:56.077736  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.077748  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:56.077756  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:56.077826  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:56.116397  165698 cri.go:89] found id: ""
	I0617 12:05:56.116430  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.116442  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:56.116451  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:56.116512  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:56.152395  165698 cri.go:89] found id: ""
	I0617 12:05:56.152433  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.152445  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:56.152454  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:56.152513  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:56.189740  165698 cri.go:89] found id: ""
	I0617 12:05:56.189776  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.189788  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:56.189796  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:56.189866  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:56.228017  165698 cri.go:89] found id: ""
	I0617 12:05:56.228047  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.228055  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:56.228063  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:56.228076  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:56.279032  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:56.279079  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:56.294369  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:56.294394  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:56.369507  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:56.369535  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:56.369551  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:56.454797  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:56.454833  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:54.725303  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:56.726247  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:56.726280  165060 pod_ready.go:81] duration metric: took 4m0.008373114s for pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace to be "Ready" ...
	E0617 12:05:56.726291  165060 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0617 12:05:56.726298  165060 pod_ready.go:38] duration metric: took 4m3.608691328s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:05:56.726315  165060 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:05:56.726352  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:56.726411  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:56.784765  165060 cri.go:89] found id: "5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:05:56.784792  165060 cri.go:89] found id: ""
	I0617 12:05:56.784803  165060 logs.go:276] 1 containers: [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3]
	I0617 12:05:56.784865  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:56.791125  165060 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:56.791189  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:56.830691  165060 cri.go:89] found id: "fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:05:56.830715  165060 cri.go:89] found id: ""
	I0617 12:05:56.830725  165060 logs.go:276] 1 containers: [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9]
	I0617 12:05:56.830785  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:56.836214  165060 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:56.836282  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:56.875812  165060 cri.go:89] found id: "c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:05:56.875830  165060 cri.go:89] found id: ""
	I0617 12:05:56.875837  165060 logs.go:276] 1 containers: [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7]
	I0617 12:05:56.875891  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:56.880190  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:56.880247  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:56.925155  165060 cri.go:89] found id: "157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:05:56.925178  165060 cri.go:89] found id: ""
	I0617 12:05:56.925186  165060 logs.go:276] 1 containers: [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d]
	I0617 12:05:56.925231  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:56.930317  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:56.930384  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:56.972479  165060 cri.go:89] found id: "c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:05:56.972503  165060 cri.go:89] found id: ""
	I0617 12:05:56.972512  165060 logs.go:276] 1 containers: [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d]
	I0617 12:05:56.972559  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:56.977635  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:56.977696  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:57.012791  165060 cri.go:89] found id: "2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
	I0617 12:05:57.012816  165060 cri.go:89] found id: ""
	I0617 12:05:57.012826  165060 logs.go:276] 1 containers: [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079]
	I0617 12:05:57.012882  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:57.016856  165060 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:57.016909  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:57.052111  165060 cri.go:89] found id: ""
	I0617 12:05:57.052146  165060 logs.go:276] 0 containers: []
	W0617 12:05:57.052156  165060 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:57.052163  165060 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:05:57.052211  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:05:57.094600  165060 cri.go:89] found id: "02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:05:57.094619  165060 cri.go:89] found id: "7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
	I0617 12:05:57.094622  165060 cri.go:89] found id: ""
	I0617 12:05:57.094630  165060 logs.go:276] 2 containers: [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36]
	I0617 12:05:57.094700  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:57.099250  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:57.104252  165060 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:57.104281  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:57.162000  165060 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:57.162027  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:05:57.285448  165060 logs.go:123] Gathering logs for etcd [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9] ...
	I0617 12:05:57.285490  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:05:57.340781  165060 logs.go:123] Gathering logs for coredns [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7] ...
	I0617 12:05:57.340820  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:05:57.383507  165060 logs.go:123] Gathering logs for kube-scheduler [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d] ...
	I0617 12:05:57.383540  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:05:57.428747  165060 logs.go:123] Gathering logs for kube-proxy [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d] ...
	I0617 12:05:57.428792  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:05:57.468739  165060 logs.go:123] Gathering logs for kube-controller-manager [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079] ...
	I0617 12:05:57.468770  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
	I0617 12:05:57.531317  165060 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:57.531355  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:58.063787  165060 logs.go:123] Gathering logs for container status ...
	I0617 12:05:58.063838  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:58.129384  165060 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:58.129416  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:58.144078  165060 logs.go:123] Gathering logs for kube-apiserver [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3] ...
	I0617 12:05:58.144152  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:05:58.189028  165060 logs.go:123] Gathering logs for storage-provisioner [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92] ...
	I0617 12:05:58.189068  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:05:58.227144  165060 logs.go:123] Gathering logs for storage-provisioner [7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36] ...
	I0617 12:05:58.227178  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
	I0617 12:05:54.838580  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:57.333884  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:59.836198  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:01.837155  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:58.995221  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:59.008481  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:59.008555  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:59.043854  165698 cri.go:89] found id: ""
	I0617 12:05:59.043887  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.043914  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:59.043935  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:59.044003  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:59.081488  165698 cri.go:89] found id: ""
	I0617 12:05:59.081522  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.081530  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:59.081537  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:59.081596  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:59.118193  165698 cri.go:89] found id: ""
	I0617 12:05:59.118222  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.118232  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:59.118240  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:59.118306  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:59.150286  165698 cri.go:89] found id: ""
	I0617 12:05:59.150315  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.150327  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:59.150335  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:59.150381  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:59.191426  165698 cri.go:89] found id: ""
	I0617 12:05:59.191450  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.191485  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:59.191493  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:59.191547  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:59.224933  165698 cri.go:89] found id: ""
	I0617 12:05:59.224965  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.224974  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:59.224998  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:59.225061  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:59.255929  165698 cri.go:89] found id: ""
	I0617 12:05:59.255956  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.255965  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:59.255971  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:59.256025  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:59.293072  165698 cri.go:89] found id: ""
	I0617 12:05:59.293097  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.293104  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:59.293114  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:59.293126  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:59.354240  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:59.354267  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:59.367715  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:59.367744  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:59.446352  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:59.446381  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:59.446396  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:59.528701  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:59.528738  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:02.071616  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:06:02.088050  165698 kubeadm.go:591] duration metric: took 4m3.493743262s to restartPrimaryControlPlane
	W0617 12:06:02.088159  165698 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0617 12:06:02.088194  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0617 12:06:02.552133  165698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:06:02.570136  165698 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:06:02.582299  165698 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:06:02.594775  165698 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:06:02.594809  165698 kubeadm.go:156] found existing configuration files:
	
	I0617 12:06:02.594867  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:06:02.605875  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:06:02.605954  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:06:02.617780  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:06:02.628284  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:06:02.628359  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:06:02.639128  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:06:02.650079  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:06:02.650144  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:06:02.660879  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:06:02.671170  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:06:02.671249  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
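The lines above show the stale-config check before kubeadm init: each of the four kubeconfig files under /etc/kubernetes is grepped for the expected control-plane endpoint and removed if it does not reference it (here all four are simply missing, so every grep exits with status 2). A Go sketch of that cleanup pattern run locally, assuming direct file access (minikube performs the same steps over SSH on the node; the endpoint string is the one from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			// Missing file: nothing to clean up (the "No such file or directory" case in the log).
			fmt.Printf("%s: %v\n", f, err)
			continue
		}
		if !strings.Contains(string(data), endpoint) {
			// Stale config pointing at a different endpoint: remove it so kubeadm regenerates it.
			fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
			_ = os.Remove(f)
		}
	}
}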
	I0617 12:06:02.682071  165698 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0617 12:06:02.753750  165698 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0617 12:06:02.753913  165698 kubeadm.go:309] [preflight] Running pre-flight checks
	I0617 12:06:02.897384  165698 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0617 12:06:02.897530  165698 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0617 12:06:02.897685  165698 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0617 12:06:03.079116  165698 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0617 12:06:00.764533  165060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:06:00.781564  165060 api_server.go:72] duration metric: took 4m14.875617542s to wait for apiserver process to appear ...
	I0617 12:06:00.781593  165060 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:06:00.781642  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:06:00.781706  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:06:00.817980  165060 cri.go:89] found id: "5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:06:00.818013  165060 cri.go:89] found id: ""
	I0617 12:06:00.818024  165060 logs.go:276] 1 containers: [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3]
	I0617 12:06:00.818080  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:00.822664  165060 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:06:00.822759  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:06:00.861518  165060 cri.go:89] found id: "fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:06:00.861545  165060 cri.go:89] found id: ""
	I0617 12:06:00.861556  165060 logs.go:276] 1 containers: [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9]
	I0617 12:06:00.861614  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:00.865885  165060 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:06:00.865973  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:06:00.900844  165060 cri.go:89] found id: "c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:06:00.900864  165060 cri.go:89] found id: ""
	I0617 12:06:00.900875  165060 logs.go:276] 1 containers: [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7]
	I0617 12:06:00.900930  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:00.905253  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:06:00.905317  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:06:00.938998  165060 cri.go:89] found id: "157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:06:00.939036  165060 cri.go:89] found id: ""
	I0617 12:06:00.939046  165060 logs.go:276] 1 containers: [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d]
	I0617 12:06:00.939114  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:00.943170  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:06:00.943234  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:06:00.982923  165060 cri.go:89] found id: "c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:06:00.982953  165060 cri.go:89] found id: ""
	I0617 12:06:00.982964  165060 logs.go:276] 1 containers: [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d]
	I0617 12:06:00.983034  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:00.987696  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:06:00.987769  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:06:01.033789  165060 cri.go:89] found id: "2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
	I0617 12:06:01.033825  165060 cri.go:89] found id: ""
	I0617 12:06:01.033837  165060 logs.go:276] 1 containers: [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079]
	I0617 12:06:01.033901  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:01.038800  165060 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:06:01.038861  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:06:01.077797  165060 cri.go:89] found id: ""
	I0617 12:06:01.077834  165060 logs.go:276] 0 containers: []
	W0617 12:06:01.077846  165060 logs.go:278] No container was found matching "kindnet"
	I0617 12:06:01.077855  165060 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:06:01.077916  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:06:01.116275  165060 cri.go:89] found id: "02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:06:01.116296  165060 cri.go:89] found id: "7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
	I0617 12:06:01.116303  165060 cri.go:89] found id: ""
	I0617 12:06:01.116311  165060 logs.go:276] 2 containers: [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36]
	I0617 12:06:01.116365  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:01.121088  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:01.125393  165060 logs.go:123] Gathering logs for container status ...
	I0617 12:06:01.125417  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:01.170817  165060 logs.go:123] Gathering logs for kubelet ...
	I0617 12:06:01.170844  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:06:01.223072  165060 logs.go:123] Gathering logs for kube-apiserver [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3] ...
	I0617 12:06:01.223114  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:06:01.269212  165060 logs.go:123] Gathering logs for kube-scheduler [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d] ...
	I0617 12:06:01.269245  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:06:01.313518  165060 logs.go:123] Gathering logs for kube-proxy [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d] ...
	I0617 12:06:01.313557  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:06:01.357935  165060 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:06:01.357965  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:06:01.784493  165060 logs.go:123] Gathering logs for storage-provisioner [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92] ...
	I0617 12:06:01.784542  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:06:01.825824  165060 logs.go:123] Gathering logs for storage-provisioner [7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36] ...
	I0617 12:06:01.825851  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
	I0617 12:06:01.866216  165060 logs.go:123] Gathering logs for dmesg ...
	I0617 12:06:01.866252  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:06:01.881292  165060 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:06:01.881316  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:06:02.000026  165060 logs.go:123] Gathering logs for etcd [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9] ...
	I0617 12:06:02.000063  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:06:02.043491  165060 logs.go:123] Gathering logs for coredns [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7] ...
	I0617 12:06:02.043524  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:06:02.081957  165060 logs.go:123] Gathering logs for kube-controller-manager [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079] ...
	I0617 12:06:02.081984  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
	I0617 12:05:59.835769  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:02.332739  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:03.080903  165698 out.go:204]   - Generating certificates and keys ...
	I0617 12:06:03.081006  165698 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0617 12:06:03.081080  165698 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0617 12:06:03.081168  165698 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0617 12:06:03.081250  165698 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0617 12:06:03.081377  165698 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0617 12:06:03.081457  165698 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0617 12:06:03.082418  165698 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0617 12:06:03.083003  165698 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0617 12:06:03.083917  165698 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0617 12:06:03.084820  165698 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0617 12:06:03.085224  165698 kubeadm.go:309] [certs] Using the existing "sa" key
	I0617 12:06:03.085307  165698 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0617 12:06:03.203342  165698 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0617 12:06:03.430428  165698 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0617 12:06:03.570422  165698 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0617 12:06:03.772092  165698 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0617 12:06:03.793105  165698 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0617 12:06:03.793206  165698 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0617 12:06:03.793261  165698 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0617 12:06:03.919738  165698 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0617 12:06:04.333408  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:06.333963  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:03.921593  165698 out.go:204]   - Booting up control plane ...
	I0617 12:06:03.921708  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0617 12:06:03.928168  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0617 12:06:03.928279  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0617 12:06:03.937197  165698 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0617 12:06:03.939967  165698 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0617 12:06:04.644102  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:06:04.648733  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 200:
	ok
	I0617 12:06:04.649862  165060 api_server.go:141] control plane version: v1.30.1
	I0617 12:06:04.649894  165060 api_server.go:131] duration metric: took 3.86829173s to wait for apiserver health ...
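At 12:06:04 the embed-certs apiserver at 192.168.72.199:8443 finally answers /healthz with 200 "ok", which ends the ~3.87s health wait. A minimal Go sketch of such a readiness poll, with certificate verification skipped purely for brevity in this illustration (minikube's own check authenticates with the cluster's credentials):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Poll the apiserver /healthz endpoint (address taken from the log) until it
	// answers "ok" or the retry budget is exhausted.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 30; i++ {
		resp, err := client.Get("https://192.168.72.199:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body))
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver never became healthy")
}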
	I0617 12:06:04.649905  165060 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:06:04.649936  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:06:04.649997  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:06:04.688904  165060 cri.go:89] found id: "5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:06:04.688923  165060 cri.go:89] found id: ""
	I0617 12:06:04.688931  165060 logs.go:276] 1 containers: [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3]
	I0617 12:06:04.688975  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.695049  165060 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:06:04.695110  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:06:04.730292  165060 cri.go:89] found id: "fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:06:04.730314  165060 cri.go:89] found id: ""
	I0617 12:06:04.730322  165060 logs.go:276] 1 containers: [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9]
	I0617 12:06:04.730373  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.734432  165060 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:06:04.734486  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:06:04.771401  165060 cri.go:89] found id: "c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:06:04.771418  165060 cri.go:89] found id: ""
	I0617 12:06:04.771426  165060 logs.go:276] 1 containers: [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7]
	I0617 12:06:04.771496  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.775822  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:06:04.775876  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:06:04.816111  165060 cri.go:89] found id: "157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:06:04.816131  165060 cri.go:89] found id: ""
	I0617 12:06:04.816139  165060 logs.go:276] 1 containers: [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d]
	I0617 12:06:04.816185  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.820614  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:06:04.820672  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:06:04.865387  165060 cri.go:89] found id: "c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:06:04.865411  165060 cri.go:89] found id: ""
	I0617 12:06:04.865421  165060 logs.go:276] 1 containers: [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d]
	I0617 12:06:04.865479  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.870192  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:06:04.870263  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:06:04.912698  165060 cri.go:89] found id: "2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
	I0617 12:06:04.912723  165060 cri.go:89] found id: ""
	I0617 12:06:04.912734  165060 logs.go:276] 1 containers: [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079]
	I0617 12:06:04.912796  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.917484  165060 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:06:04.917563  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:06:04.954076  165060 cri.go:89] found id: ""
	I0617 12:06:04.954109  165060 logs.go:276] 0 containers: []
	W0617 12:06:04.954120  165060 logs.go:278] No container was found matching "kindnet"
	I0617 12:06:04.954129  165060 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:06:04.954196  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:06:04.995832  165060 cri.go:89] found id: "02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:06:04.995858  165060 cri.go:89] found id: "7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
	I0617 12:06:04.995862  165060 cri.go:89] found id: ""
	I0617 12:06:04.995869  165060 logs.go:276] 2 containers: [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36]
	I0617 12:06:04.995928  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:05.000741  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:05.004995  165060 logs.go:123] Gathering logs for storage-provisioner [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92] ...
	I0617 12:06:05.005026  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:06:05.040651  165060 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:06:05.040692  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:06:05.461644  165060 logs.go:123] Gathering logs for container status ...
	I0617 12:06:05.461685  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:05.508706  165060 logs.go:123] Gathering logs for kubelet ...
	I0617 12:06:05.508733  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:06:05.562418  165060 logs.go:123] Gathering logs for kube-apiserver [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3] ...
	I0617 12:06:05.562461  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:06:05.606489  165060 logs.go:123] Gathering logs for etcd [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9] ...
	I0617 12:06:05.606527  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:06:05.651719  165060 logs.go:123] Gathering logs for coredns [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7] ...
	I0617 12:06:05.651753  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:06:05.688736  165060 logs.go:123] Gathering logs for kube-proxy [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d] ...
	I0617 12:06:05.688772  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:06:05.730649  165060 logs.go:123] Gathering logs for dmesg ...
	I0617 12:06:05.730679  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:06:05.745482  165060 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:06:05.745511  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:06:05.849002  165060 logs.go:123] Gathering logs for kube-scheduler [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d] ...
	I0617 12:06:05.849025  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:06:05.890802  165060 logs.go:123] Gathering logs for kube-controller-manager [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079] ...
	I0617 12:06:05.890836  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
	I0617 12:06:05.946444  165060 logs.go:123] Gathering logs for storage-provisioner [7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36] ...
	I0617 12:06:05.946474  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
	I0617 12:06:04.332977  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:06.834683  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:08.489561  165060 system_pods.go:59] 8 kube-system pods found
	I0617 12:06:08.489593  165060 system_pods.go:61] "coredns-7db6d8ff4d-9bbjg" [1ba0eee5-436e-4c83-b5ce-3c907d66b641] Running
	I0617 12:06:08.489597  165060 system_pods.go:61] "etcd-embed-certs-136195" [6dc81a80-c56b-4517-af82-c450cf9578f5] Running
	I0617 12:06:08.489601  165060 system_pods.go:61] "kube-apiserver-embed-certs-136195" [bd61a715-2471-4dca-aa48-a157531ebd6b] Running
	I0617 12:06:08.489605  165060 system_pods.go:61] "kube-controller-manager-embed-certs-136195" [194db4b0-75c2-4905-8e4d-813185497b51] Running
	I0617 12:06:08.489607  165060 system_pods.go:61] "kube-proxy-25d5n" [52b6d09a-899f-40c4-b1f3-7842ae755165] Running
	I0617 12:06:08.489610  165060 system_pods.go:61] "kube-scheduler-embed-certs-136195" [b04d3798-f465-4f82-9ec7-777ea62d5b94] Running
	I0617 12:06:08.489616  165060 system_pods.go:61] "metrics-server-569cc877fc-dmhfs" [31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:06:08.489620  165060 system_pods.go:61] "storage-provisioner" [4b04a38a-5006-4496-a24d-0940029193de] Running
	I0617 12:06:08.489626  165060 system_pods.go:74] duration metric: took 3.839715717s to wait for pod list to return data ...
	I0617 12:06:08.489633  165060 default_sa.go:34] waiting for default service account to be created ...
	I0617 12:06:08.491984  165060 default_sa.go:45] found service account: "default"
	I0617 12:06:08.492007  165060 default_sa.go:55] duration metric: took 2.365306ms for default service account to be created ...
	I0617 12:06:08.492014  165060 system_pods.go:116] waiting for k8s-apps to be running ...
	I0617 12:06:08.497834  165060 system_pods.go:86] 8 kube-system pods found
	I0617 12:06:08.497865  165060 system_pods.go:89] "coredns-7db6d8ff4d-9bbjg" [1ba0eee5-436e-4c83-b5ce-3c907d66b641] Running
	I0617 12:06:08.497873  165060 system_pods.go:89] "etcd-embed-certs-136195" [6dc81a80-c56b-4517-af82-c450cf9578f5] Running
	I0617 12:06:08.497880  165060 system_pods.go:89] "kube-apiserver-embed-certs-136195" [bd61a715-2471-4dca-aa48-a157531ebd6b] Running
	I0617 12:06:08.497887  165060 system_pods.go:89] "kube-controller-manager-embed-certs-136195" [194db4b0-75c2-4905-8e4d-813185497b51] Running
	I0617 12:06:08.497891  165060 system_pods.go:89] "kube-proxy-25d5n" [52b6d09a-899f-40c4-b1f3-7842ae755165] Running
	I0617 12:06:08.497899  165060 system_pods.go:89] "kube-scheduler-embed-certs-136195" [b04d3798-f465-4f82-9ec7-777ea62d5b94] Running
	I0617 12:06:08.497905  165060 system_pods.go:89] "metrics-server-569cc877fc-dmhfs" [31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:06:08.497914  165060 system_pods.go:89] "storage-provisioner" [4b04a38a-5006-4496-a24d-0940029193de] Running
	I0617 12:06:08.497921  165060 system_pods.go:126] duration metric: took 5.901391ms to wait for k8s-apps to be running ...
	I0617 12:06:08.497927  165060 system_svc.go:44] waiting for kubelet service to be running ....
	I0617 12:06:08.497970  165060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:06:08.520136  165060 system_svc.go:56] duration metric: took 22.203601ms WaitForService to wait for kubelet
	I0617 12:06:08.520159  165060 kubeadm.go:576] duration metric: took 4m22.614222011s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 12:06:08.520178  165060 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:06:08.522704  165060 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:06:08.522741  165060 node_conditions.go:123] node cpu capacity is 2
	I0617 12:06:08.522758  165060 node_conditions.go:105] duration metric: took 2.57391ms to run NodePressure ...
	I0617 12:06:08.522773  165060 start.go:240] waiting for startup goroutines ...
	I0617 12:06:08.522787  165060 start.go:245] waiting for cluster config update ...
	I0617 12:06:08.522803  165060 start.go:254] writing updated cluster config ...
	I0617 12:06:08.523139  165060 ssh_runner.go:195] Run: rm -f paused
	I0617 12:06:08.577942  165060 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0617 12:06:08.579946  165060 out.go:177] * Done! kubectl is now configured to use "embed-certs-136195" cluster and "default" namespace by default
	I0617 12:06:08.334463  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:10.335642  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:09.331628  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:11.332586  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:13.332703  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:12.834827  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:15.334721  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:15.333004  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:17.834357  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:17.833756  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:19.835364  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:22.333742  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:20.332127  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:22.832111  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:24.333945  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:26.335021  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:25.332366  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:27.835364  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:28.833758  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:31.334155  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:29.835500  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:32.332236  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:33.833599  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:35.834190  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:34.831122  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:36.833202  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:38.334352  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:40.335399  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:40.335423  166103 pod_ready.go:81] duration metric: took 4m0.008367222s for pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace to be "Ready" ...
	E0617 12:06:40.335433  166103 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0617 12:06:40.335441  166103 pod_ready.go:38] duration metric: took 4m7.419505963s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:06:40.335475  166103 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:06:40.335505  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:06:40.335556  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:06:40.400354  166103 cri.go:89] found id: "5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:40.400384  166103 cri.go:89] found id: ""
	I0617 12:06:40.400394  166103 logs.go:276] 1 containers: [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b]
	I0617 12:06:40.400453  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.405124  166103 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:06:40.405186  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:06:40.440583  166103 cri.go:89] found id: "8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:40.440610  166103 cri.go:89] found id: ""
	I0617 12:06:40.440619  166103 logs.go:276] 1 containers: [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862]
	I0617 12:06:40.440665  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.445086  166103 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:06:40.445141  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:06:40.489676  166103 cri.go:89] found id: "26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:40.489698  166103 cri.go:89] found id: ""
	I0617 12:06:40.489706  166103 logs.go:276] 1 containers: [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323]
	I0617 12:06:40.489752  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.494402  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:06:40.494514  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:06:40.535486  166103 cri.go:89] found id: "2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
	I0617 12:06:40.535517  166103 cri.go:89] found id: ""
	I0617 12:06:40.535527  166103 logs.go:276] 1 containers: [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b]
	I0617 12:06:40.535589  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.543265  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:06:40.543330  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:06:40.579564  166103 cri.go:89] found id: "63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:40.579588  166103 cri.go:89] found id: ""
	I0617 12:06:40.579598  166103 logs.go:276] 1 containers: [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da]
	I0617 12:06:40.579658  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.583865  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:06:40.583928  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:06:40.642408  166103 cri.go:89] found id: "36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:40.642435  166103 cri.go:89] found id: ""
	I0617 12:06:40.642445  166103 logs.go:276] 1 containers: [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685]
	I0617 12:06:40.642509  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.647892  166103 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:06:40.647959  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:06:40.698654  166103 cri.go:89] found id: ""
	I0617 12:06:40.698686  166103 logs.go:276] 0 containers: []
	W0617 12:06:40.698696  166103 logs.go:278] No container was found matching "kindnet"
	I0617 12:06:40.698704  166103 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:06:40.698768  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:06:40.749641  166103 cri.go:89] found id: "adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:40.749663  166103 cri.go:89] found id: "e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:40.749668  166103 cri.go:89] found id: ""
	I0617 12:06:40.749678  166103 logs.go:276] 2 containers: [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc]
	I0617 12:06:40.749742  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.754926  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.760126  166103 logs.go:123] Gathering logs for container status ...
	I0617 12:06:40.760152  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:40.804119  166103 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:06:40.804159  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:06:40.942459  166103 logs.go:123] Gathering logs for etcd [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862] ...
	I0617 12:06:40.942495  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:40.994721  166103 logs.go:123] Gathering logs for coredns [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323] ...
	I0617 12:06:40.994761  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:41.037005  166103 logs.go:123] Gathering logs for kube-scheduler [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b] ...
	I0617 12:06:41.037040  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
	I0617 12:06:41.080715  166103 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:06:41.080751  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:06:41.606478  166103 logs.go:123] Gathering logs for storage-provisioner [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195] ...
	I0617 12:06:41.606516  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:41.643963  166103 logs.go:123] Gathering logs for storage-provisioner [e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc] ...
	I0617 12:06:41.644003  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:41.683405  166103 logs.go:123] Gathering logs for kubelet ...
	I0617 12:06:41.683443  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:06:41.737365  166103 logs.go:123] Gathering logs for dmesg ...
	I0617 12:06:41.737400  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:06:41.752552  166103 logs.go:123] Gathering logs for kube-apiserver [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b] ...
	I0617 12:06:41.752582  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:41.804447  166103 logs.go:123] Gathering logs for kube-proxy [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da] ...
	I0617 12:06:41.804480  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:41.847266  166103 logs.go:123] Gathering logs for kube-controller-manager [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685] ...
	I0617 12:06:41.847302  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:39.333111  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:41.836327  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:44.408776  166103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:06:44.427500  166103 api_server.go:72] duration metric: took 4m19.25316479s to wait for apiserver process to appear ...
	I0617 12:06:44.427531  166103 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:06:44.427577  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:06:44.427634  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:06:44.466379  166103 cri.go:89] found id: "5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:44.466408  166103 cri.go:89] found id: ""
	I0617 12:06:44.466418  166103 logs.go:276] 1 containers: [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b]
	I0617 12:06:44.466481  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.470832  166103 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:06:44.470901  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:06:44.511689  166103 cri.go:89] found id: "8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:44.511713  166103 cri.go:89] found id: ""
	I0617 12:06:44.511722  166103 logs.go:276] 1 containers: [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862]
	I0617 12:06:44.511769  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.516221  166103 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:06:44.516303  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:06:44.560612  166103 cri.go:89] found id: "26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:44.560634  166103 cri.go:89] found id: ""
	I0617 12:06:44.560642  166103 logs.go:276] 1 containers: [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323]
	I0617 12:06:44.560695  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.564998  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:06:44.565068  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:06:44.600133  166103 cri.go:89] found id: "2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
	I0617 12:06:44.600155  166103 cri.go:89] found id: ""
	I0617 12:06:44.600164  166103 logs.go:276] 1 containers: [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b]
	I0617 12:06:44.600220  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.605431  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:06:44.605494  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:06:44.648647  166103 cri.go:89] found id: "63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:44.648678  166103 cri.go:89] found id: ""
	I0617 12:06:44.648688  166103 logs.go:276] 1 containers: [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da]
	I0617 12:06:44.648758  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.653226  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:06:44.653307  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:06:44.701484  166103 cri.go:89] found id: "36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:44.701508  166103 cri.go:89] found id: ""
	I0617 12:06:44.701516  166103 logs.go:276] 1 containers: [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685]
	I0617 12:06:44.701572  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.707827  166103 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:06:44.707890  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:06:44.752362  166103 cri.go:89] found id: ""
	I0617 12:06:44.752391  166103 logs.go:276] 0 containers: []
	W0617 12:06:44.752402  166103 logs.go:278] No container was found matching "kindnet"
	I0617 12:06:44.752410  166103 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:06:44.752473  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:06:44.798926  166103 cri.go:89] found id: "adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:44.798955  166103 cri.go:89] found id: "e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:44.798961  166103 cri.go:89] found id: ""
	I0617 12:06:44.798970  166103 logs.go:276] 2 containers: [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc]
	I0617 12:06:44.799038  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.804702  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.810673  166103 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:06:44.810702  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:06:44.939596  166103 logs.go:123] Gathering logs for etcd [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862] ...
	I0617 12:06:44.939627  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:44.987902  166103 logs.go:123] Gathering logs for coredns [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323] ...
	I0617 12:06:44.987936  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:45.023931  166103 logs.go:123] Gathering logs for kube-proxy [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da] ...
	I0617 12:06:45.023962  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:45.060432  166103 logs.go:123] Gathering logs for storage-provisioner [e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc] ...
	I0617 12:06:45.060468  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:45.095643  166103 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:06:45.095679  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:06:45.553973  166103 logs.go:123] Gathering logs for kubelet ...
	I0617 12:06:45.554018  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:06:45.611997  166103 logs.go:123] Gathering logs for dmesg ...
	I0617 12:06:45.612036  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:06:45.626973  166103 logs.go:123] Gathering logs for container status ...
	I0617 12:06:45.627002  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:45.671119  166103 logs.go:123] Gathering logs for kube-controller-manager [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685] ...
	I0617 12:06:45.671151  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:45.728097  166103 logs.go:123] Gathering logs for storage-provisioner [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195] ...
	I0617 12:06:45.728133  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:45.765586  166103 logs.go:123] Gathering logs for kube-apiserver [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b] ...
	I0617 12:06:45.765615  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:45.818347  166103 logs.go:123] Gathering logs for kube-scheduler [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b] ...
	I0617 12:06:45.818387  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
	I0617 12:06:43.941225  165698 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0617 12:06:43.941341  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:06:43.941612  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:06:44.331481  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:46.831820  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:48.362826  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:06:48.366936  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 200:
	ok
	I0617 12:06:48.367973  166103 api_server.go:141] control plane version: v1.30.1
	I0617 12:06:48.367992  166103 api_server.go:131] duration metric: took 3.940452539s to wait for apiserver health ...
	I0617 12:06:48.367999  166103 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:06:48.368021  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:06:48.368066  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:06:48.404797  166103 cri.go:89] found id: "5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:48.404819  166103 cri.go:89] found id: ""
	I0617 12:06:48.404828  166103 logs.go:276] 1 containers: [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b]
	I0617 12:06:48.404887  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.409105  166103 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:06:48.409162  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:06:48.456233  166103 cri.go:89] found id: "8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:48.456266  166103 cri.go:89] found id: ""
	I0617 12:06:48.456277  166103 logs.go:276] 1 containers: [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862]
	I0617 12:06:48.456336  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.460550  166103 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:06:48.460625  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:06:48.498447  166103 cri.go:89] found id: "26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:48.498472  166103 cri.go:89] found id: ""
	I0617 12:06:48.498481  166103 logs.go:276] 1 containers: [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323]
	I0617 12:06:48.498564  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.503826  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:06:48.503906  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:06:48.554405  166103 cri.go:89] found id: "2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
	I0617 12:06:48.554435  166103 cri.go:89] found id: ""
	I0617 12:06:48.554446  166103 logs.go:276] 1 containers: [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b]
	I0617 12:06:48.554504  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.559175  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:06:48.559240  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:06:48.596764  166103 cri.go:89] found id: "63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:48.596791  166103 cri.go:89] found id: ""
	I0617 12:06:48.596801  166103 logs.go:276] 1 containers: [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da]
	I0617 12:06:48.596863  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.601197  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:06:48.601260  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:06:48.654027  166103 cri.go:89] found id: "36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:48.654053  166103 cri.go:89] found id: ""
	I0617 12:06:48.654061  166103 logs.go:276] 1 containers: [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685]
	I0617 12:06:48.654113  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.659492  166103 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:06:48.659557  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:06:48.706749  166103 cri.go:89] found id: ""
	I0617 12:06:48.706777  166103 logs.go:276] 0 containers: []
	W0617 12:06:48.706786  166103 logs.go:278] No container was found matching "kindnet"
	I0617 12:06:48.706794  166103 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:06:48.706859  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:06:48.750556  166103 cri.go:89] found id: "adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:48.750588  166103 cri.go:89] found id: "e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:48.750594  166103 cri.go:89] found id: ""
	I0617 12:06:48.750607  166103 logs.go:276] 2 containers: [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc]
	I0617 12:06:48.750671  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.755368  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.760128  166103 logs.go:123] Gathering logs for kube-apiserver [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b] ...
	I0617 12:06:48.760154  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:48.802187  166103 logs.go:123] Gathering logs for etcd [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862] ...
	I0617 12:06:48.802224  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:48.861041  166103 logs.go:123] Gathering logs for kube-controller-manager [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685] ...
	I0617 12:06:48.861076  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:48.917864  166103 logs.go:123] Gathering logs for storage-provisioner [e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc] ...
	I0617 12:06:48.917902  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:48.963069  166103 logs.go:123] Gathering logs for container status ...
	I0617 12:06:48.963099  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:49.012109  166103 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:06:49.012149  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:06:49.119880  166103 logs.go:123] Gathering logs for dmesg ...
	I0617 12:06:49.119915  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:06:49.136461  166103 logs.go:123] Gathering logs for coredns [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323] ...
	I0617 12:06:49.136497  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:49.177339  166103 logs.go:123] Gathering logs for kube-scheduler [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b] ...
	I0617 12:06:49.177377  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
	I0617 12:06:49.219101  166103 logs.go:123] Gathering logs for kube-proxy [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da] ...
	I0617 12:06:49.219135  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:49.256646  166103 logs.go:123] Gathering logs for storage-provisioner [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195] ...
	I0617 12:06:49.256687  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:49.302208  166103 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:06:49.302243  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:06:49.653713  166103 logs.go:123] Gathering logs for kubelet ...
	I0617 12:06:49.653758  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:06:52.217069  166103 system_pods.go:59] 8 kube-system pods found
	I0617 12:06:52.217102  166103 system_pods.go:61] "coredns-7db6d8ff4d-mnw24" [1e6c4ff3-f0dc-43da-abd8-baaed7dca40c] Running
	I0617 12:06:52.217107  166103 system_pods.go:61] "etcd-default-k8s-diff-port-991309" [820a4f27-cf83-4edb-a2ea-edba6673d851] Running
	I0617 12:06:52.217111  166103 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-991309" [26e6c19d-6f70-4924-83f5-563c8508c9e3] Running
	I0617 12:06:52.217115  166103 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-991309" [01e7c468-98a6-48f3-a158-59e97fa8279c] Running
	I0617 12:06:52.217119  166103 system_pods.go:61] "kube-proxy-jn5kp" [d6935148-7ee8-4655-8327-9f1ee4c933de] Running
	I0617 12:06:52.217122  166103 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-991309" [53ecd22c-05cf-48a5-b7e5-925392085f7a] Running
	I0617 12:06:52.217128  166103 system_pods.go:61] "metrics-server-569cc877fc-n2svp" [5b637d97-3183-4324-98cf-dd69a2968578] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:06:52.217134  166103 system_pods.go:61] "storage-provisioner" [92b20aec-29c2-4256-86be-7f58f66585dd] Running
	I0617 12:06:52.217145  166103 system_pods.go:74] duration metric: took 3.849140024s to wait for pod list to return data ...
	I0617 12:06:52.217152  166103 default_sa.go:34] waiting for default service account to be created ...
	I0617 12:06:52.219308  166103 default_sa.go:45] found service account: "default"
	I0617 12:06:52.219330  166103 default_sa.go:55] duration metric: took 2.172323ms for default service account to be created ...
	I0617 12:06:52.219339  166103 system_pods.go:116] waiting for k8s-apps to be running ...
	I0617 12:06:52.224239  166103 system_pods.go:86] 8 kube-system pods found
	I0617 12:06:52.224265  166103 system_pods.go:89] "coredns-7db6d8ff4d-mnw24" [1e6c4ff3-f0dc-43da-abd8-baaed7dca40c] Running
	I0617 12:06:52.224270  166103 system_pods.go:89] "etcd-default-k8s-diff-port-991309" [820a4f27-cf83-4edb-a2ea-edba6673d851] Running
	I0617 12:06:52.224276  166103 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-991309" [26e6c19d-6f70-4924-83f5-563c8508c9e3] Running
	I0617 12:06:52.224280  166103 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-991309" [01e7c468-98a6-48f3-a158-59e97fa8279c] Running
	I0617 12:06:52.224284  166103 system_pods.go:89] "kube-proxy-jn5kp" [d6935148-7ee8-4655-8327-9f1ee4c933de] Running
	I0617 12:06:52.224288  166103 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-991309" [53ecd22c-05cf-48a5-b7e5-925392085f7a] Running
	I0617 12:06:52.224299  166103 system_pods.go:89] "metrics-server-569cc877fc-n2svp" [5b637d97-3183-4324-98cf-dd69a2968578] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:06:52.224305  166103 system_pods.go:89] "storage-provisioner" [92b20aec-29c2-4256-86be-7f58f66585dd] Running
	I0617 12:06:52.224319  166103 system_pods.go:126] duration metric: took 4.973603ms to wait for k8s-apps to be running ...
	I0617 12:06:52.224332  166103 system_svc.go:44] waiting for kubelet service to be running ....
	I0617 12:06:52.224380  166103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:06:52.241121  166103 system_svc.go:56] duration metric: took 16.776061ms WaitForService to wait for kubelet
	I0617 12:06:52.241156  166103 kubeadm.go:576] duration metric: took 4m27.066827271s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 12:06:52.241181  166103 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:06:52.245359  166103 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:06:52.245407  166103 node_conditions.go:123] node cpu capacity is 2
	I0617 12:06:52.245423  166103 node_conditions.go:105] duration metric: took 4.235898ms to run NodePressure ...
	I0617 12:06:52.245440  166103 start.go:240] waiting for startup goroutines ...
	I0617 12:06:52.245449  166103 start.go:245] waiting for cluster config update ...
	I0617 12:06:52.245462  166103 start.go:254] writing updated cluster config ...
	I0617 12:06:52.245969  166103 ssh_runner.go:195] Run: rm -f paused
	I0617 12:06:52.299326  166103 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0617 12:06:52.301413  166103 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-991309" cluster and "default" namespace by default
	I0617 12:06:48.942159  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:06:48.942434  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:06:48.835113  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:51.331395  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:53.331551  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:55.332455  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:57.835143  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:58.942977  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:06:58.943290  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:07:00.331823  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:07:02.332214  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:07:04.831284  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:07:06.832082  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:07:07.325414  164809 pod_ready.go:81] duration metric: took 4m0.000322555s for pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace to be "Ready" ...
	E0617 12:07:07.325446  164809 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0617 12:07:07.325464  164809 pod_ready.go:38] duration metric: took 4m12.035995337s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:07:07.325494  164809 kubeadm.go:591] duration metric: took 4m19.041266463s to restartPrimaryControlPlane
	W0617 12:07:07.325556  164809 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0617 12:07:07.325587  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0617 12:07:18.944149  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:07:18.944368  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:07:38.980378  164809 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.654762508s)
	I0617 12:07:38.980451  164809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:07:38.997845  164809 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:07:39.009456  164809 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:07:39.020407  164809 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:07:39.020430  164809 kubeadm.go:156] found existing configuration files:
	
	I0617 12:07:39.020472  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:07:39.030323  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:07:39.030376  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:07:39.040298  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:07:39.049715  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:07:39.049757  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:07:39.060493  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:07:39.069921  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:07:39.069973  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:07:39.080049  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:07:39.089524  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:07:39.089569  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 12:07:39.099082  164809 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0617 12:07:39.154963  164809 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0617 12:07:39.155083  164809 kubeadm.go:309] [preflight] Running pre-flight checks
	I0617 12:07:39.286616  164809 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0617 12:07:39.286809  164809 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0617 12:07:39.286977  164809 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0617 12:07:39.487542  164809 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0617 12:07:39.489554  164809 out.go:204]   - Generating certificates and keys ...
	I0617 12:07:39.489665  164809 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0617 12:07:39.489732  164809 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0617 12:07:39.489855  164809 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0617 12:07:39.489969  164809 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0617 12:07:39.490088  164809 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0617 12:07:39.490187  164809 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0617 12:07:39.490274  164809 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0617 12:07:39.490386  164809 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0617 12:07:39.490508  164809 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0617 12:07:39.490643  164809 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0617 12:07:39.490750  164809 kubeadm.go:309] [certs] Using the existing "sa" key
	I0617 12:07:39.490849  164809 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0617 12:07:39.565788  164809 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0617 12:07:39.643443  164809 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0617 12:07:39.765615  164809 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0617 12:07:39.851182  164809 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0617 12:07:40.041938  164809 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0617 12:07:40.042576  164809 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0617 12:07:40.045112  164809 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0617 12:07:40.047144  164809 out.go:204]   - Booting up control plane ...
	I0617 12:07:40.047265  164809 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0617 12:07:40.047374  164809 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0617 12:07:40.047995  164809 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0617 12:07:40.070163  164809 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0617 12:07:40.071308  164809 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0617 12:07:40.071415  164809 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0617 12:07:40.204578  164809 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0617 12:07:40.204698  164809 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0617 12:07:41.210782  164809 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.0065421s
	I0617 12:07:41.210902  164809 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0617 12:07:45.713194  164809 kubeadm.go:309] [api-check] The API server is healthy after 4.501871798s
	I0617 12:07:45.735311  164809 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0617 12:07:45.760405  164809 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0617 12:07:45.795429  164809 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0617 12:07:45.795770  164809 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-152830 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0617 12:07:45.816446  164809 kubeadm.go:309] [bootstrap-token] Using token: ryfqxd.olkegn8a1unpvnbq
	I0617 12:07:45.817715  164809 out.go:204]   - Configuring RBAC rules ...
	I0617 12:07:45.817890  164809 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0617 12:07:45.826422  164809 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0617 12:07:45.852291  164809 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0617 12:07:45.867538  164809 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0617 12:07:45.880697  164809 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0617 12:07:45.887707  164809 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0617 12:07:46.120211  164809 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0617 12:07:46.593168  164809 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0617 12:07:47.119377  164809 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0617 12:07:47.120840  164809 kubeadm.go:309] 
	I0617 12:07:47.120933  164809 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0617 12:07:47.120947  164809 kubeadm.go:309] 
	I0617 12:07:47.121057  164809 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0617 12:07:47.121069  164809 kubeadm.go:309] 
	I0617 12:07:47.121123  164809 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0617 12:07:47.124361  164809 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0617 12:07:47.124443  164809 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0617 12:07:47.124464  164809 kubeadm.go:309] 
	I0617 12:07:47.124538  164809 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0617 12:07:47.124550  164809 kubeadm.go:309] 
	I0617 12:07:47.124607  164809 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0617 12:07:47.124617  164809 kubeadm.go:309] 
	I0617 12:07:47.124724  164809 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0617 12:07:47.124838  164809 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0617 12:07:47.124938  164809 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0617 12:07:47.124949  164809 kubeadm.go:309] 
	I0617 12:07:47.125085  164809 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0617 12:07:47.125191  164809 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0617 12:07:47.125203  164809 kubeadm.go:309] 
	I0617 12:07:47.125343  164809 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ryfqxd.olkegn8a1unpvnbq \
	I0617 12:07:47.125479  164809 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a750c130b3df91ed6d57229f5a5d5a2ee0acd56a757f499599f368bc07dbf207 \
	I0617 12:07:47.125510  164809 kubeadm.go:309] 	--control-plane 
	I0617 12:07:47.125518  164809 kubeadm.go:309] 
	I0617 12:07:47.125616  164809 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0617 12:07:47.125627  164809 kubeadm.go:309] 
	I0617 12:07:47.125724  164809 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ryfqxd.olkegn8a1unpvnbq \
	I0617 12:07:47.125852  164809 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a750c130b3df91ed6d57229f5a5d5a2ee0acd56a757f499599f368bc07dbf207 
	I0617 12:07:47.126915  164809 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0617 12:07:47.126966  164809 cni.go:84] Creating CNI manager for ""
	I0617 12:07:47.126983  164809 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:07:47.128899  164809 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0617 12:07:47.130229  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0617 12:07:47.142301  164809 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
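	(For reference: the step above drops a bridge CNI conflist into /etc/cni/net.d. As a rough sketch of what such a conflist looks like — the subnet, plugin options, and exact layout here are illustrative assumptions, not the literal 496-byte file minikube copies — a similar file could be written by hand like this:)

# Illustrative bridge + portmap conflist; field values are assumptions, not minikube's exact file.
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF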
	I0617 12:07:47.163380  164809 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0617 12:07:47.163500  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:47.163503  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-152830 minikube.k8s.io/updated_at=2024_06_17T12_07_47_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6 minikube.k8s.io/name=no-preload-152830 minikube.k8s.io/primary=true
	I0617 12:07:47.375089  164809 ops.go:34] apiserver oom_adj: -16
	I0617 12:07:47.375266  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:47.875477  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:48.375626  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:48.876185  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:49.375621  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:49.875597  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:50.376188  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:50.875983  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:51.375537  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:51.876321  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:52.375920  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:52.876348  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:53.375623  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:53.875369  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:54.375747  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:54.875581  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:55.376244  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:55.875866  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:56.376285  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:56.876228  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:57.375990  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:57.875392  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:58.946943  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:07:58.947220  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:07:58.947233  165698 kubeadm.go:309] 
	I0617 12:07:58.947316  165698 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0617 12:07:58.947393  165698 kubeadm.go:309] 		timed out waiting for the condition
	I0617 12:07:58.947406  165698 kubeadm.go:309] 
	I0617 12:07:58.947449  165698 kubeadm.go:309] 	This error is likely caused by:
	I0617 12:07:58.947528  165698 kubeadm.go:309] 		- The kubelet is not running
	I0617 12:07:58.947690  165698 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0617 12:07:58.947699  165698 kubeadm.go:309] 
	I0617 12:07:58.947860  165698 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0617 12:07:58.947924  165698 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0617 12:07:58.947976  165698 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0617 12:07:58.947991  165698 kubeadm.go:309] 
	I0617 12:07:58.948132  165698 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0617 12:07:58.948247  165698 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0617 12:07:58.948260  165698 kubeadm.go:309] 
	I0617 12:07:58.948406  165698 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0617 12:07:58.948539  165698 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0617 12:07:58.948639  165698 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0617 12:07:58.948740  165698 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0617 12:07:58.948750  165698 kubeadm.go:309] 
	I0617 12:07:58.949270  165698 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0617 12:07:58.949403  165698 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0617 12:07:58.949508  165698 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0617 12:07:58.949630  165698 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
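	(For reference: the troubleshooting steps suggested in the kubeadm output above can be run on the node as a single sequence. The commands and the cri-o socket path are taken verbatim from the log; only the tail filter is added here for brevity.)

sudo systemctl status kubelet
sudo journalctl -xeu kubelet | tail -n 100
sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
# Substitute CONTAINERID with an ID from the listing above:
sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID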
	
	I0617 12:07:58.949694  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0617 12:07:59.418622  165698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:07:59.435367  165698 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:07:59.449365  165698 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:07:59.449384  165698 kubeadm.go:156] found existing configuration files:
	
	I0617 12:07:59.449430  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:07:59.461411  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:07:59.461478  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:07:59.471262  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:07:59.480591  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:07:59.480640  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:07:59.490152  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:07:59.499248  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:07:59.499300  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:07:59.508891  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:07:59.518114  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:07:59.518152  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 12:07:59.528190  165698 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0617 12:07:59.592831  165698 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0617 12:07:59.592949  165698 kubeadm.go:309] [preflight] Running pre-flight checks
	I0617 12:07:59.752802  165698 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0617 12:07:59.752947  165698 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0617 12:07:59.753079  165698 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0617 12:07:59.984221  165698 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0617 12:07:58.375522  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:58.876221  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:59.375941  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:59.875924  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:08:00.063788  164809 kubeadm.go:1107] duration metric: took 12.900376954s to wait for elevateKubeSystemPrivileges
	W0617 12:08:00.063860  164809 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0617 12:08:00.063871  164809 kubeadm.go:393] duration metric: took 5m11.831587226s to StartCluster
	I0617 12:08:00.063895  164809 settings.go:142] acquiring lock: {Name:mkf6da6d5dcdf32cef469c2b75da17d11fa1e39e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:08:00.063996  164809 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 12:08:00.066593  164809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/kubeconfig: {Name:mkf81bd1831c0194f784e5c176b265c5061bea5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:08:00.066922  164809 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 12:08:00.068556  164809 out.go:177] * Verifying Kubernetes components...
	I0617 12:08:00.067029  164809 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0617 12:08:00.067131  164809 config.go:182] Loaded profile config "no-preload-152830": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:08:00.069969  164809 addons.go:69] Setting storage-provisioner=true in profile "no-preload-152830"
	I0617 12:08:00.069983  164809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:08:00.069992  164809 addons.go:69] Setting metrics-server=true in profile "no-preload-152830"
	I0617 12:08:00.070015  164809 addons.go:234] Setting addon metrics-server=true in "no-preload-152830"
	I0617 12:08:00.070014  164809 addons.go:234] Setting addon storage-provisioner=true in "no-preload-152830"
	W0617 12:08:00.070021  164809 addons.go:243] addon metrics-server should already be in state true
	W0617 12:08:00.070024  164809 addons.go:243] addon storage-provisioner should already be in state true
	I0617 12:08:00.070055  164809 host.go:66] Checking if "no-preload-152830" exists ...
	I0617 12:08:00.070057  164809 host.go:66] Checking if "no-preload-152830" exists ...
	I0617 12:08:00.069984  164809 addons.go:69] Setting default-storageclass=true in profile "no-preload-152830"
	I0617 12:08:00.070116  164809 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-152830"
	I0617 12:08:00.070426  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.070428  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.070443  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.070451  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.070475  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.070494  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.088451  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36453
	I0617 12:08:00.089105  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.089673  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.089700  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.090074  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.090673  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.090723  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.091118  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33445
	I0617 12:08:00.091150  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44157
	I0617 12:08:00.091756  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.091880  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.092306  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.092327  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.092470  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.092487  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.093006  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.093081  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.093169  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetState
	I0617 12:08:00.093683  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.093722  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.096819  164809 addons.go:234] Setting addon default-storageclass=true in "no-preload-152830"
	W0617 12:08:00.096839  164809 addons.go:243] addon default-storageclass should already be in state true
	I0617 12:08:00.096868  164809 host.go:66] Checking if "no-preload-152830" exists ...
	I0617 12:08:00.097223  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.097252  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.110063  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33623
	I0617 12:08:00.110843  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.111489  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.111509  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.112419  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.112633  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetState
	I0617 12:08:00.112859  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39555
	I0617 12:08:00.113245  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.113927  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.113946  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.114470  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.114758  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:08:00.116377  164809 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0617 12:08:00.115146  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.117266  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37965
	I0617 12:08:00.117647  164809 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0617 12:08:00.117663  164809 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0617 12:08:00.117674  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.117681  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:08:00.118504  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.119076  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.119091  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.119440  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.119755  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetState
	I0617 12:08:00.121396  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.121620  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:08:00.123146  164809 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:07:59.986165  165698 out.go:204]   - Generating certificates and keys ...
	I0617 12:07:59.986270  165698 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0617 12:07:59.986391  165698 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0617 12:07:59.986522  165698 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0617 12:07:59.986606  165698 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0617 12:07:59.986717  165698 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0617 12:07:59.986795  165698 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0617 12:07:59.986887  165698 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0617 12:07:59.986972  165698 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0617 12:07:59.987081  165698 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0617 12:07:59.987191  165698 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0617 12:07:59.987250  165698 kubeadm.go:309] [certs] Using the existing "sa" key
	I0617 12:07:59.987331  165698 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0617 12:08:00.155668  165698 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0617 12:08:00.303780  165698 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0617 12:08:00.369907  165698 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0617 12:08:00.506550  165698 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0617 12:08:00.529943  165698 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0617 12:08:00.531684  165698 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0617 12:08:00.531756  165698 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0617 12:08:00.667972  165698 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0617 12:08:00.122003  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:08:00.122146  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:08:00.124748  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.124895  164809 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:08:00.124914  164809 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0617 12:08:00.124934  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:08:00.124957  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:08:00.125142  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:08:00.125446  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:08:00.128559  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.128991  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:08:00.129011  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.129239  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:08:00.129434  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:08:00.129537  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:08:00.129640  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:08:00.142435  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39073
	I0617 12:08:00.142915  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.143550  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.143583  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.143946  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.144168  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetState
	I0617 12:08:00.145972  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:08:00.146165  164809 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0617 12:08:00.146178  164809 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0617 12:08:00.146196  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:08:00.149316  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.149720  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:08:00.149743  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.149926  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:08:00.150106  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:08:00.150273  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:08:00.150434  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:08:00.294731  164809 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:08:00.317727  164809 node_ready.go:35] waiting up to 6m0s for node "no-preload-152830" to be "Ready" ...
	I0617 12:08:00.346507  164809 node_ready.go:49] node "no-preload-152830" has status "Ready":"True"
	I0617 12:08:00.346533  164809 node_ready.go:38] duration metric: took 28.776898ms for node "no-preload-152830" to be "Ready" ...
	I0617 12:08:00.346544  164809 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:08:00.404097  164809 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gjt84" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:00.412303  164809 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0617 12:08:00.412325  164809 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0617 12:08:00.415269  164809 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:08:00.438024  164809 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0617 12:08:00.514528  164809 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0617 12:08:00.514561  164809 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0617 12:08:00.629109  164809 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:08:00.629141  164809 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0617 12:08:00.677084  164809 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:08:01.113979  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.114007  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.114432  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.114445  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.114507  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.114526  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.114536  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.114846  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.114866  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.117124  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.117141  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.117437  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.117457  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.117478  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.117496  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.117508  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.117821  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.117858  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.117882  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.125648  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.125668  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.125998  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.126020  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.126030  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.325217  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.325242  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.325579  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.325633  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.325669  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.325669  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.325682  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.325960  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.325977  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.326007  164809 addons.go:475] Verifying addon metrics-server=true in "no-preload-152830"
	I0617 12:08:01.326037  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.327744  164809 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0617 12:08:00.671036  165698 out.go:204]   - Booting up control plane ...
	I0617 12:08:00.671171  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0617 12:08:00.677241  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0617 12:08:00.678999  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0617 12:08:00.681119  165698 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0617 12:08:00.684535  165698 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0617 12:08:01.329155  164809 addons.go:510] duration metric: took 1.262127108s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0617 12:08:02.425731  164809 pod_ready.go:102] pod "coredns-7db6d8ff4d-gjt84" in "kube-system" namespace has status "Ready":"False"
	I0617 12:08:03.910467  164809 pod_ready.go:92] pod "coredns-7db6d8ff4d-gjt84" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:03.910494  164809 pod_ready.go:81] duration metric: took 3.506370946s for pod "coredns-7db6d8ff4d-gjt84" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.910508  164809 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vz7dg" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.916309  164809 pod_ready.go:92] pod "coredns-7db6d8ff4d-vz7dg" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:03.916331  164809 pod_ready.go:81] duration metric: took 5.814812ms for pod "coredns-7db6d8ff4d-vz7dg" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.916340  164809 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.920834  164809 pod_ready.go:92] pod "etcd-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:03.920862  164809 pod_ready.go:81] duration metric: took 4.51438ms for pod "etcd-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.920874  164809 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.924955  164809 pod_ready.go:92] pod "kube-apiserver-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:03.924973  164809 pod_ready.go:81] duration metric: took 4.09301ms for pod "kube-apiserver-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.924982  164809 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.929301  164809 pod_ready.go:92] pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:03.929318  164809 pod_ready.go:81] duration metric: took 4.33061ms for pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.929326  164809 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:04.308546  164809 pod_ready.go:92] pod "kube-scheduler-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:04.308570  164809 pod_ready.go:81] duration metric: took 379.237147ms for pod "kube-scheduler-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:04.308578  164809 pod_ready.go:38] duration metric: took 3.962022714s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:08:04.308594  164809 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:08:04.308644  164809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:08:04.327383  164809 api_server.go:72] duration metric: took 4.260420928s to wait for apiserver process to appear ...
	I0617 12:08:04.327408  164809 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:08:04.327426  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:08:04.332321  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 200:
	ok
	I0617 12:08:04.333390  164809 api_server.go:141] control plane version: v1.30.1
	I0617 12:08:04.333412  164809 api_server.go:131] duration metric: took 5.998312ms to wait for apiserver health ...
	I0617 12:08:04.333420  164809 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:08:04.512267  164809 system_pods.go:59] 9 kube-system pods found
	I0617 12:08:04.512298  164809 system_pods.go:61] "coredns-7db6d8ff4d-gjt84" [979c7339-3a4c-4bc8-8586-4d9da42339ae] Running
	I0617 12:08:04.512302  164809 system_pods.go:61] "coredns-7db6d8ff4d-vz7dg" [53c5188e-bc44-4aed-a989-ef3e2379c27b] Running
	I0617 12:08:04.512306  164809 system_pods.go:61] "etcd-no-preload-152830" [2b82d709-0776-470a-a538-f132b84be2e0] Running
	I0617 12:08:04.512310  164809 system_pods.go:61] "kube-apiserver-no-preload-152830" [e40c7c7b-b029-4f65-ac36-f4ff95eabc23] Running
	I0617 12:08:04.512313  164809 system_pods.go:61] "kube-controller-manager-no-preload-152830" [c2adec58-05a4-4993-b9a3-28f9ef519a63] Running
	I0617 12:08:04.512317  164809 system_pods.go:61] "kube-proxy-6c4hm" [a9830236-af96-437f-ad07-494b25f1a90e] Running
	I0617 12:08:04.512319  164809 system_pods.go:61] "kube-scheduler-no-preload-152830" [876671da-097b-43c1-9055-95c2ed7620aa] Running
	I0617 12:08:04.512325  164809 system_pods.go:61] "metrics-server-569cc877fc-zllzk" [e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:08:04.512329  164809 system_pods.go:61] "storage-provisioner" [b6cc7cdc-43f4-40c4-a202-5674fcdcedd0] Running
	I0617 12:08:04.512340  164809 system_pods.go:74] duration metric: took 178.914377ms to wait for pod list to return data ...
	I0617 12:08:04.512347  164809 default_sa.go:34] waiting for default service account to be created ...
	I0617 12:08:04.707834  164809 default_sa.go:45] found service account: "default"
	I0617 12:08:04.707874  164809 default_sa.go:55] duration metric: took 195.518331ms for default service account to be created ...
	I0617 12:08:04.707886  164809 system_pods.go:116] waiting for k8s-apps to be running ...
	I0617 12:08:04.916143  164809 system_pods.go:86] 9 kube-system pods found
	I0617 12:08:04.916173  164809 system_pods.go:89] "coredns-7db6d8ff4d-gjt84" [979c7339-3a4c-4bc8-8586-4d9da42339ae] Running
	I0617 12:08:04.916178  164809 system_pods.go:89] "coredns-7db6d8ff4d-vz7dg" [53c5188e-bc44-4aed-a989-ef3e2379c27b] Running
	I0617 12:08:04.916183  164809 system_pods.go:89] "etcd-no-preload-152830" [2b82d709-0776-470a-a538-f132b84be2e0] Running
	I0617 12:08:04.916187  164809 system_pods.go:89] "kube-apiserver-no-preload-152830" [e40c7c7b-b029-4f65-ac36-f4ff95eabc23] Running
	I0617 12:08:04.916191  164809 system_pods.go:89] "kube-controller-manager-no-preload-152830" [c2adec58-05a4-4993-b9a3-28f9ef519a63] Running
	I0617 12:08:04.916195  164809 system_pods.go:89] "kube-proxy-6c4hm" [a9830236-af96-437f-ad07-494b25f1a90e] Running
	I0617 12:08:04.916199  164809 system_pods.go:89] "kube-scheduler-no-preload-152830" [876671da-097b-43c1-9055-95c2ed7620aa] Running
	I0617 12:08:04.916211  164809 system_pods.go:89] "metrics-server-569cc877fc-zllzk" [e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:08:04.916219  164809 system_pods.go:89] "storage-provisioner" [b6cc7cdc-43f4-40c4-a202-5674fcdcedd0] Running
	I0617 12:08:04.916231  164809 system_pods.go:126] duration metric: took 208.336851ms to wait for k8s-apps to be running ...
	I0617 12:08:04.916245  164809 system_svc.go:44] waiting for kubelet service to be running ....
	I0617 12:08:04.916306  164809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:08:04.933106  164809 system_svc.go:56] duration metric: took 16.850122ms WaitForService to wait for kubelet
	I0617 12:08:04.933135  164809 kubeadm.go:576] duration metric: took 4.866178671s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 12:08:04.933159  164809 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:08:05.108094  164809 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:08:05.108120  164809 node_conditions.go:123] node cpu capacity is 2
	I0617 12:08:05.108133  164809 node_conditions.go:105] duration metric: took 174.968414ms to run NodePressure ...
	I0617 12:08:05.108148  164809 start.go:240] waiting for startup goroutines ...
	I0617 12:08:05.108160  164809 start.go:245] waiting for cluster config update ...
	I0617 12:08:05.108173  164809 start.go:254] writing updated cluster config ...
	I0617 12:08:05.108496  164809 ssh_runner.go:195] Run: rm -f paused
	I0617 12:08:05.160610  164809 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0617 12:08:05.162777  164809 out.go:177] * Done! kubectl is now configured to use "no-preload-152830" cluster and "default" namespace by default
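	(For reference: once minikube reports the profile as configured, the cluster can be inspected with ordinary kubectl commands against that context. This is a verification sketch only; the context name comes from the log line above, the rest is standard kubectl.)

kubectl --context no-preload-152830 get nodes -o wide
kubectl --context no-preload-152830 -n kube-system get pods
# metrics-server was enabled above; `kubectl top nodes` should start working once its pod is Ready.
kubectl --context no-preload-152830 top nodes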
	I0617 12:08:40.686610  165698 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0617 12:08:40.686950  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:08:40.687194  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:08:45.687594  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:08:45.687820  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:08:55.688285  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:08:55.688516  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:09:15.689306  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:09:15.689556  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:09:55.688872  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:09:55.689162  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:09:55.689206  165698 kubeadm.go:309] 
	I0617 12:09:55.689284  165698 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0617 12:09:55.689342  165698 kubeadm.go:309] 		timed out waiting for the condition
	I0617 12:09:55.689354  165698 kubeadm.go:309] 
	I0617 12:09:55.689418  165698 kubeadm.go:309] 	This error is likely caused by:
	I0617 12:09:55.689480  165698 kubeadm.go:309] 		- The kubelet is not running
	I0617 12:09:55.689632  165698 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0617 12:09:55.689657  165698 kubeadm.go:309] 
	I0617 12:09:55.689791  165698 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0617 12:09:55.689844  165698 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0617 12:09:55.689916  165698 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0617 12:09:55.689926  165698 kubeadm.go:309] 
	I0617 12:09:55.690059  165698 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0617 12:09:55.690140  165698 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0617 12:09:55.690159  165698 kubeadm.go:309] 
	I0617 12:09:55.690258  165698 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0617 12:09:55.690343  165698 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0617 12:09:55.690434  165698 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0617 12:09:55.690530  165698 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0617 12:09:55.690546  165698 kubeadm.go:309] 
	I0617 12:09:55.691495  165698 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0617 12:09:55.691595  165698 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0617 12:09:55.691708  165698 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0617 12:09:55.691787  165698 kubeadm.go:393] duration metric: took 7m57.151326537s to StartCluster
	I0617 12:09:55.691844  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:09:55.691904  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:09:55.746514  165698 cri.go:89] found id: ""
	I0617 12:09:55.746550  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.746563  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:09:55.746572  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:09:55.746636  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:09:55.789045  165698 cri.go:89] found id: ""
	I0617 12:09:55.789083  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.789095  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:09:55.789103  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:09:55.789169  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:09:55.829492  165698 cri.go:89] found id: ""
	I0617 12:09:55.829533  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.829542  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:09:55.829547  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:09:55.829614  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:09:55.865213  165698 cri.go:89] found id: ""
	I0617 12:09:55.865246  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.865262  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:09:55.865267  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:09:55.865318  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:09:55.904067  165698 cri.go:89] found id: ""
	I0617 12:09:55.904102  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.904113  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:09:55.904122  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:09:55.904187  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:09:55.938441  165698 cri.go:89] found id: ""
	I0617 12:09:55.938471  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.938478  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:09:55.938487  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:09:55.938538  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:09:55.975669  165698 cri.go:89] found id: ""
	I0617 12:09:55.975710  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.975723  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:09:55.975731  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:09:55.975804  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:09:56.015794  165698 cri.go:89] found id: ""
	I0617 12:09:56.015826  165698 logs.go:276] 0 containers: []
	W0617 12:09:56.015837  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:09:56.015851  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:09:56.015868  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:09:56.095533  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:09:56.095557  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:09:56.095573  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:09:56.220817  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:09:56.220857  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:09:56.261470  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:09:56.261507  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:09:56.325626  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:09:56.325673  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0617 12:09:56.345438  165698 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0617 12:09:56.345491  165698 out.go:239] * 
	W0617 12:09:56.345606  165698 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0617 12:09:56.345635  165698 out.go:239] * 
	W0617 12:09:56.346583  165698 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 12:09:56.349928  165698 out.go:177] 
	W0617 12:09:56.351067  165698 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0617 12:09:56.351127  165698 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0617 12:09:56.351157  165698 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0617 12:09:56.352487  165698 out.go:177] 
	
	
	==> CRI-O <==
	Jun 17 12:19:01 old-k8s-version-003661 crio[648]: time="2024-06-17 12:19:01.477557247Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718626741477529900,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b7677eee-ae8d-4ec5-be0c-d5db2f9df88e name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:19:01 old-k8s-version-003661 crio[648]: time="2024-06-17 12:19:01.478180608Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae3f0a43-81af-4cef-87dd-7f3622ff5d4c name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:19:01 old-k8s-version-003661 crio[648]: time="2024-06-17 12:19:01.478273279Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae3f0a43-81af-4cef-87dd-7f3622ff5d4c name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:19:01 old-k8s-version-003661 crio[648]: time="2024-06-17 12:19:01.478324601Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ae3f0a43-81af-4cef-87dd-7f3622ff5d4c name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:19:01 old-k8s-version-003661 crio[648]: time="2024-06-17 12:19:01.509716782Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d73cf6a1-3c2e-4410-86ca-1c8889ecfc5c name=/runtime.v1.RuntimeService/Version
	Jun 17 12:19:01 old-k8s-version-003661 crio[648]: time="2024-06-17 12:19:01.509800990Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d73cf6a1-3c2e-4410-86ca-1c8889ecfc5c name=/runtime.v1.RuntimeService/Version
	Jun 17 12:19:01 old-k8s-version-003661 crio[648]: time="2024-06-17 12:19:01.511362340Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=14ef1460-c2c6-4081-b8d8-361e793f9b2d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:19:01 old-k8s-version-003661 crio[648]: time="2024-06-17 12:19:01.511741964Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718626741511722886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=14ef1460-c2c6-4081-b8d8-361e793f9b2d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:19:01 old-k8s-version-003661 crio[648]: time="2024-06-17 12:19:01.512534906Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=159e435f-8156-4ee5-ac1c-9a8decdd5103 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:19:01 old-k8s-version-003661 crio[648]: time="2024-06-17 12:19:01.512588187Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=159e435f-8156-4ee5-ac1c-9a8decdd5103 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:19:01 old-k8s-version-003661 crio[648]: time="2024-06-17 12:19:01.512619489Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=159e435f-8156-4ee5-ac1c-9a8decdd5103 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:19:01 old-k8s-version-003661 crio[648]: time="2024-06-17 12:19:01.547102668Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ef54763e-d9bf-4b84-bd0b-95bb9b168dad name=/runtime.v1.RuntimeService/Version
	Jun 17 12:19:01 old-k8s-version-003661 crio[648]: time="2024-06-17 12:19:01.547184523Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ef54763e-d9bf-4b84-bd0b-95bb9b168dad name=/runtime.v1.RuntimeService/Version
	Jun 17 12:19:01 old-k8s-version-003661 crio[648]: time="2024-06-17 12:19:01.548515424Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eed903c4-ba78-4ced-9bef-ce81ea77eaad name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:19:01 old-k8s-version-003661 crio[648]: time="2024-06-17 12:19:01.548960125Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718626741548922164,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eed903c4-ba78-4ced-9bef-ce81ea77eaad name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:19:01 old-k8s-version-003661 crio[648]: time="2024-06-17 12:19:01.549583641Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=22e86ef3-6b3b-4cd5-b35b-4d2a57074d31 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:19:01 old-k8s-version-003661 crio[648]: time="2024-06-17 12:19:01.549636806Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22e86ef3-6b3b-4cd5-b35b-4d2a57074d31 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:19:01 old-k8s-version-003661 crio[648]: time="2024-06-17 12:19:01.549665600Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=22e86ef3-6b3b-4cd5-b35b-4d2a57074d31 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:19:01 old-k8s-version-003661 crio[648]: time="2024-06-17 12:19:01.584220745Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f7e21f00-a231-4de2-a2e0-cac4808348cf name=/runtime.v1.RuntimeService/Version
	Jun 17 12:19:01 old-k8s-version-003661 crio[648]: time="2024-06-17 12:19:01.584310088Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f7e21f00-a231-4de2-a2e0-cac4808348cf name=/runtime.v1.RuntimeService/Version
	Jun 17 12:19:01 old-k8s-version-003661 crio[648]: time="2024-06-17 12:19:01.585619882Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4dedb34d-6e0f-4b43-a264-d846384e5aaf name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:19:01 old-k8s-version-003661 crio[648]: time="2024-06-17 12:19:01.585988668Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718626741585970056,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4dedb34d-6e0f-4b43-a264-d846384e5aaf name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:19:01 old-k8s-version-003661 crio[648]: time="2024-06-17 12:19:01.586524447Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6bd9abe7-8daf-4a38-9984-8cafe10eebd5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:19:01 old-k8s-version-003661 crio[648]: time="2024-06-17 12:19:01.586575471Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6bd9abe7-8daf-4a38-9984-8cafe10eebd5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:19:01 old-k8s-version-003661 crio[648]: time="2024-06-17 12:19:01.586608632Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6bd9abe7-8daf-4a38-9984-8cafe10eebd5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jun17 12:01] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052255] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040891] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.660385] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.359181] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.617809] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.763068] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.058957] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067517] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.195874] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.192469] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.318746] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +6.241976] systemd-fstab-generator[842]: Ignoring "noauto" option for root device
	[  +0.062935] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.770270] systemd-fstab-generator[969]: Ignoring "noauto" option for root device
	[Jun17 12:02] kauditd_printk_skb: 46 callbacks suppressed
	[Jun17 12:06] systemd-fstab-generator[5023]: Ignoring "noauto" option for root device
	[Jun17 12:08] systemd-fstab-generator[5303]: Ignoring "noauto" option for root device
	[  +0.068765] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:19:01 up 17 min,  0 users,  load average: 0.12, 0.08, 0.04
	Linux old-k8s-version-003661 5.10.207 #1 SMP Tue Jun 11 00:16:05 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jun 17 12:18:56 old-k8s-version-003661 kubelet[6484]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc0000d8e00, 0xc000b58840, 0x1, 0x0, 0x0)
	Jun 17 12:18:56 old-k8s-version-003661 kubelet[6484]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	Jun 17 12:18:56 old-k8s-version-003661 kubelet[6484]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc00086da40)
	Jun 17 12:18:56 old-k8s-version-003661 kubelet[6484]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1265 +0x179
	Jun 17 12:18:56 old-k8s-version-003661 kubelet[6484]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jun 17 12:18:56 old-k8s-version-003661 kubelet[6484]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Jun 17 12:18:56 old-k8s-version-003661 kubelet[6484]: goroutine 134 [select]:
	Jun 17 12:18:56 old-k8s-version-003661 kubelet[6484]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc0005dd220, 0x1, 0x0, 0x0, 0x0, 0x0)
	Jun 17 12:18:56 old-k8s-version-003661 kubelet[6484]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Jun 17 12:18:56 old-k8s-version-003661 kubelet[6484]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc0001d86c0, 0x0, 0x0)
	Jun 17 12:18:56 old-k8s-version-003661 kubelet[6484]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Jun 17 12:18:56 old-k8s-version-003661 kubelet[6484]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc00086da40)
	Jun 17 12:18:56 old-k8s-version-003661 kubelet[6484]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Jun 17 12:18:56 old-k8s-version-003661 kubelet[6484]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jun 17 12:18:56 old-k8s-version-003661 kubelet[6484]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Jun 17 12:18:56 old-k8s-version-003661 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 17 12:18:56 old-k8s-version-003661 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 17 12:18:57 old-k8s-version-003661 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Jun 17 12:18:57 old-k8s-version-003661 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 17 12:18:57 old-k8s-version-003661 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 17 12:18:57 old-k8s-version-003661 kubelet[6493]: I0617 12:18:57.628118    6493 server.go:416] Version: v1.20.0
	Jun 17 12:18:57 old-k8s-version-003661 kubelet[6493]: I0617 12:18:57.628385    6493 server.go:837] Client rotation is on, will bootstrap in background
	Jun 17 12:18:57 old-k8s-version-003661 kubelet[6493]: I0617 12:18:57.630445    6493 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 17 12:18:57 old-k8s-version-003661 kubelet[6493]: W0617 12:18:57.631505    6493 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jun 17 12:18:57 old-k8s-version-003661 kubelet[6493]: I0617 12:18:57.631708    6493 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
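The kubeadm output captured above points to the same two inspection steps each time the kubelet health check times out: check the kubelet unit on the node and list whatever control-plane containers CRI-O managed to start. A minimal triage sketch, assuming the profile name and the crio socket path quoted in that log (no kube containers were found in this run, so the crictl listing is expected to come back empty):

	# check the kubelet unit and its recent journal on the node (commands quoted from the kubeadm advice above)
	out/minikube-linux-amd64 -p old-k8s-version-003661 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-003661 ssh "sudo journalctl -xeu kubelet | tail -n 50"
	# list any Kubernetes containers CRI-O started, excluding pause containers
	out/minikube-linux-amd64 -p old-k8s-version-003661 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"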
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-003661 -n old-k8s-version-003661
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-003661 -n old-k8s-version-003661: exit status 2 (248.788697ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-003661" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.32s)
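The K8S_KUBELET_NOT_RUNNING exit in the log above suggests retrying the start with the kubelet cgroup driver pinned to systemd. A hedged retry sketch, mirroring the core flags recorded for this profile in the audit log further down and adding only the suggested --extra-config option; whether this actually clears the kubelet health-check timeouts on this node is not verified here:

	out/minikube-linux-amd64 start -p old-k8s-version-003661 \
	  --memory=2200 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd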

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (462.64s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-136195 -n embed-certs-136195
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-06-17 12:22:53.699365238 +0000 UTC m=+5919.542923386
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-136195 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-136195 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.747µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-136195 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
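The assertion here expects the dashboard-metrics-scraper deployment to carry an image containing registry.k8s.io/echoserver:1.4, but the describe call above never ran because the test's context deadline had already expired. A sketch of the equivalent manual check, assuming the embed-certs-136195 context is still reachable outside the test timeout:

	# hypothetical manual check of the image the assertion inspects; context, namespace and deployment name are taken from the failing test above
	kubectl --context embed-certs-136195 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
	# the addon check passes only if this output contains registry.k8s.io/echoserver:1.4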
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-136195 -n embed-certs-136195
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-136195 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-136195 logs -n 25: (1.22661085s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p embed-certs-136195            | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC | 17 Jun 24 11:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-136195                                  | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC | 17 Jun 24 11:55 UTC |
	| start   | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:55 UTC |
	| delete  | -p                                                     | disable-driver-mounts-960277 | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:55 UTC |
	|         | disable-driver-mounts-960277                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:56 UTC |
	|         | default-k8s-diff-port-991309                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-152830                  | no-preload-152830            | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-152830                                   | no-preload-152830            | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC | 17 Jun 24 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-136195                 | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-003661        | old-k8s-version-003661       | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-136195                                  | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC | 17 Jun 24 12:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-991309  | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:57 UTC | 17 Jun 24 11:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:57 UTC |                     |
	|         | default-k8s-diff-port-991309                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-003661                              | old-k8s-version-003661       | jenkins | v1.33.1 | 17 Jun 24 11:58 UTC | 17 Jun 24 11:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-003661             | old-k8s-version-003661       | jenkins | v1.33.1 | 17 Jun 24 11:58 UTC | 17 Jun 24 11:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-003661                              | old-k8s-version-003661       | jenkins | v1.33.1 | 17 Jun 24 11:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-991309       | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:59 UTC | 17 Jun 24 12:06 UTC |
	|         | default-k8s-diff-port-991309                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-003661                              | old-k8s-version-003661       | jenkins | v1.33.1 | 17 Jun 24 12:22 UTC | 17 Jun 24 12:22 UTC |
	| start   | -p newest-cni-335949 --memory=2200 --alsologtostderr   | newest-cni-335949            | jenkins | v1.33.1 | 17 Jun 24 12:22 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-152830                                   | no-preload-152830            | jenkins | v1.33.1 | 17 Jun 24 12:22 UTC | 17 Jun 24 12:22 UTC |
	| start   | -p auto-253383 --memory=3072                           | auto-253383                  | jenkins | v1.33.1 | 17 Jun 24 12:22 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/17 12:22:33
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0617 12:22:33.746637  173004 out.go:291] Setting OutFile to fd 1 ...
	I0617 12:22:33.746872  173004 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 12:22:33.746882  173004 out.go:304] Setting ErrFile to fd 2...
	I0617 12:22:33.746886  173004 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 12:22:33.747084  173004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 12:22:33.747746  173004 out.go:298] Setting JSON to false
	I0617 12:22:33.748685  173004 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7501,"bootTime":1718619453,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0617 12:22:33.748755  173004 start.go:139] virtualization: kvm guest
	I0617 12:22:33.751214  173004 out.go:177] * [auto-253383] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0617 12:22:33.752625  173004 notify.go:220] Checking for updates...
	I0617 12:22:33.752629  173004 out.go:177]   - MINIKUBE_LOCATION=19084
	I0617 12:22:33.753988  173004 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 12:22:33.755218  173004 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 12:22:33.756853  173004 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 12:22:33.759451  173004 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0617 12:22:33.760821  173004 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 12:22:33.762384  173004 config.go:182] Loaded profile config "default-k8s-diff-port-991309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:22:33.762520  173004 config.go:182] Loaded profile config "embed-certs-136195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:22:33.762659  173004 config.go:182] Loaded profile config "newest-cni-335949": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:22:33.762778  173004 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 12:22:33.800748  173004 out.go:177] * Using the kvm2 driver based on user configuration
	I0617 12:22:33.802161  173004 start.go:297] selected driver: kvm2
	I0617 12:22:33.802201  173004 start.go:901] validating driver "kvm2" against <nil>
	I0617 12:22:33.802219  173004 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 12:22:33.802940  173004 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 12:22:33.803058  173004 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19084-112967/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0617 12:22:33.820257  173004 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0617 12:22:33.820318  173004 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 12:22:33.820561  173004 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 12:22:33.820613  173004 cni.go:84] Creating CNI manager for ""
	I0617 12:22:33.820622  173004 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:22:33.820634  173004 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0617 12:22:33.820675  173004 start.go:340] cluster config:
	{Name:auto-253383 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:auto-253383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:22:33.820771  173004 iso.go:125] acquiring lock: {Name:mk4a199ad46ed9ee04de7b54caf7cc64218fe80c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 12:22:33.822822  173004 out.go:177] * Starting "auto-253383" primary control-plane node in "auto-253383" cluster
	I0617 12:22:33.824074  173004 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 12:22:33.824120  173004 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0617 12:22:33.824135  173004 cache.go:56] Caching tarball of preloaded images
	I0617 12:22:33.824249  173004 preload.go:173] Found /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0617 12:22:33.824262  173004 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0617 12:22:33.824379  173004 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/auto-253383/config.json ...
	I0617 12:22:33.824404  173004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/auto-253383/config.json: {Name:mk7f949b6e84f1f315785dd46af65264f3336ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:22:33.824552  173004 start.go:360] acquireMachinesLock for auto-253383: {Name:mk519b8956d160a9d2b042f25b899a5ee0efa72e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 12:22:33.824588  173004 start.go:364] duration metric: took 20.203µs to acquireMachinesLock for "auto-253383"
	I0617 12:22:33.824612  173004 start.go:93] Provisioning new machine with config: &{Name:auto-253383 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:auto-253383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 12:22:33.824708  173004 start.go:125] createHost starting for "" (driver="kvm2")
	I0617 12:22:35.483134  172544 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.384850488s)
	I0617 12:22:35.483169  172544 crio.go:469] duration metric: took 2.384955896s to extract the tarball
	I0617 12:22:35.483179  172544 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0617 12:22:35.525532  172544 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:22:35.579691  172544 crio.go:514] all images are preloaded for cri-o runtime.
	I0617 12:22:35.579715  172544 cache_images.go:84] Images are preloaded, skipping loading
	I0617 12:22:35.579741  172544 kubeadm.go:928] updating node { 192.168.61.120 8443 v1.30.1 crio true true} ...
	I0617 12:22:35.579885  172544 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-335949 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:newest-cni-335949 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
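	[editor's note] The [Unit]/[Service]/[Install] fragment above is the kubelet systemd drop-in that minikube renders for this profile; it presumably corresponds to the 354-byte 10-kubeadm.conf scp a few lines further down. A minimal sketch for confirming what the node actually received, assuming the newest-cni-335949 profile from this log is still up:
	# print the rendered drop-in and the effective unit as systemd sees it
	minikube ssh -p newest-cni-335949 -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	minikube ssh -p newest-cni-335949 -- sudo systemctl cat kubelet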
	I0617 12:22:35.579971  172544 ssh_runner.go:195] Run: crio config
	I0617 12:22:35.636232  172544 cni.go:84] Creating CNI manager for ""
	I0617 12:22:35.636264  172544 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:22:35.636280  172544 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0617 12:22:35.636320  172544 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.120 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-335949 NodeName:newest-cni-335949 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0617 12:22:35.636565  172544 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-335949"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.120
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.120"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
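	[editor's note] The kubeadm manifest above is written to /var/tmp/minikube/kubeadm.yaml.new (the 2285-byte scp below) and later copied over /var/tmp/minikube/kubeadm.yaml before init runs. A hedged sketch for inspecting and sanity-checking it on the node, reusing the binary path and profile name from this log:
	# view the config exactly as kubeadm will consume it
	minikube ssh -p newest-cni-335949 -- sudo cat /var/tmp/minikube/kubeadm.yaml
	# newer kubeadm releases can validate the file without touching the cluster
	minikube ssh -p newest-cni-335949 -- sudo /var/lib/minikube/binaries/v1.30.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml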
	I0617 12:22:35.636663  172544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0617 12:22:35.647596  172544 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 12:22:35.647672  172544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 12:22:35.657750  172544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0617 12:22:35.682122  172544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 12:22:35.703223  172544 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
	I0617 12:22:35.723762  172544 ssh_runner.go:195] Run: grep 192.168.61.120	control-plane.minikube.internal$ /etc/hosts
	I0617 12:22:35.728904  172544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.120	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:22:35.742145  172544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:22:35.867606  172544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:22:35.885744  172544 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/newest-cni-335949 for IP: 192.168.61.120
	I0617 12:22:35.885771  172544 certs.go:194] generating shared ca certs ...
	I0617 12:22:35.885790  172544 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:22:35.885971  172544 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 12:22:35.886023  172544 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 12:22:35.886036  172544 certs.go:256] generating profile certs ...
	I0617 12:22:35.886107  172544 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/newest-cni-335949/client.key
	I0617 12:22:35.886126  172544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/newest-cni-335949/client.crt with IP's: []
	I0617 12:22:35.968687  172544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/newest-cni-335949/client.crt ...
	I0617 12:22:35.968721  172544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/newest-cni-335949/client.crt: {Name:mk7d48abb7e7455539f4d2f26607af3c528813e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:22:35.968920  172544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/newest-cni-335949/client.key ...
	I0617 12:22:35.968934  172544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/newest-cni-335949/client.key: {Name:mk2a027d63e5c4cdd69b2fa735016899d3e22277 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:22:35.969063  172544 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/newest-cni-335949/apiserver.key.3926455b
	I0617 12:22:35.969092  172544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/newest-cni-335949/apiserver.crt.3926455b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.120]
	I0617 12:22:36.201849  172544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/newest-cni-335949/apiserver.crt.3926455b ...
	I0617 12:22:36.201884  172544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/newest-cni-335949/apiserver.crt.3926455b: {Name:mk14651a2cd869f940c4c65155388bae004b399b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:22:36.202090  172544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/newest-cni-335949/apiserver.key.3926455b ...
	I0617 12:22:36.202112  172544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/newest-cni-335949/apiserver.key.3926455b: {Name:mke9b727298b3ea0c38acf9fdb99ea038b9529a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:22:36.202216  172544 certs.go:381] copying /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/newest-cni-335949/apiserver.crt.3926455b -> /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/newest-cni-335949/apiserver.crt
	I0617 12:22:36.202353  172544 certs.go:385] copying /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/newest-cni-335949/apiserver.key.3926455b -> /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/newest-cni-335949/apiserver.key
	I0617 12:22:36.202447  172544 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/newest-cni-335949/proxy-client.key
	I0617 12:22:36.202477  172544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/newest-cni-335949/proxy-client.crt with IP's: []
	I0617 12:22:36.253826  172544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/newest-cni-335949/proxy-client.crt ...
	I0617 12:22:36.253857  172544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/newest-cni-335949/proxy-client.crt: {Name:mk462f705c99c49ae63e06cf3e0dcca7254be958 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:22:36.254030  172544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/newest-cni-335949/proxy-client.key ...
	I0617 12:22:36.254047  172544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/newest-cni-335949/proxy-client.key: {Name:mk5989bb2fdbc5c59883fb5b48fa5ff471ae8be7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:22:36.254231  172544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 12:22:36.254279  172544 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 12:22:36.254294  172544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 12:22:36.254328  172544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 12:22:36.254362  172544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 12:22:36.254393  172544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 12:22:36.254447  172544 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:22:36.255041  172544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 12:22:36.286038  172544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 12:22:36.317877  172544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 12:22:36.349419  172544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 12:22:36.375678  172544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/newest-cni-335949/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0617 12:22:36.404233  172544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/newest-cni-335949/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 12:22:36.432225  172544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/newest-cni-335949/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 12:22:36.460523  172544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/newest-cni-335949/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0617 12:22:36.487616  172544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 12:22:36.519845  172544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 12:22:36.546132  172544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 12:22:36.576963  172544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
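	[editor's note] The scp lines above push the freshly generated profile certificates into /var/lib/minikube/certs. Since the apiserver cert was just signed for the IPs listed earlier (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.61.120), one way to confirm the SANs on the host copy is:
	# list the Subject Alternative Names baked into the new apiserver certificate
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/newest-cni-335949/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'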
	I0617 12:22:36.598287  172544 ssh_runner.go:195] Run: openssl version
	I0617 12:22:36.604653  172544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 12:22:36.615885  172544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 12:22:36.620809  172544 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 12:22:36.620910  172544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 12:22:36.627284  172544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 12:22:36.638378  172544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 12:22:36.649737  172544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:22:36.654887  172544 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:22:36.654928  172544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:22:36.661738  172544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 12:22:36.673039  172544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 12:22:36.685746  172544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 12:22:36.690910  172544 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 12:22:36.690985  172544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 12:22:36.697110  172544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
	I0617 12:22:36.714073  172544 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 12:22:36.722919  172544 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0617 12:22:36.722997  172544 kubeadm.go:391] StartCluster: {Name:newest-cni-335949 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-335949 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.120 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:22:36.723125  172544 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 12:22:36.723207  172544 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:22:36.779575  172544 cri.go:89] found id: ""
	I0617 12:22:36.779655  172544 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0617 12:22:36.794283  172544 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:22:36.811954  172544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:22:36.823094  172544 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:22:36.823119  172544 kubeadm.go:156] found existing configuration files:
	
	I0617 12:22:36.823173  172544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:22:36.837032  172544 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:22:36.837094  172544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:22:36.848735  172544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:22:36.859354  172544 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:22:36.859423  172544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:22:36.869602  172544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:22:36.880116  172544 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:22:36.880184  172544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:22:36.890673  172544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:22:36.901449  172544 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:22:36.901519  172544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 12:22:36.915338  172544 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0617 12:22:37.180050  172544 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
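	[editor's note] The preflight warning above is harmless here (kubelet was already started via systemctl a few lines earlier), but it can be silenced by enabling the unit as the message suggests, e.g.:
	# enable kubelet inside the VM so it starts on boot and the warning goes away
	minikube ssh -p newest-cni-335949 -- sudo systemctl enable kubelet.service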
	I0617 12:22:33.826476  173004 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0617 12:22:33.826614  173004 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:22:33.826669  173004 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:22:33.842659  173004 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35897
	I0617 12:22:33.843087  173004 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:22:33.843671  173004 main.go:141] libmachine: Using API Version  1
	I0617 12:22:33.843694  173004 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:22:33.844064  173004 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:22:33.844298  173004 main.go:141] libmachine: (auto-253383) Calling .GetMachineName
	I0617 12:22:33.844502  173004 main.go:141] libmachine: (auto-253383) Calling .DriverName
	I0617 12:22:33.844700  173004 start.go:159] libmachine.API.Create for "auto-253383" (driver="kvm2")
	I0617 12:22:33.844736  173004 client.go:168] LocalClient.Create starting
	I0617 12:22:33.844784  173004 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem
	I0617 12:22:33.844828  173004 main.go:141] libmachine: Decoding PEM data...
	I0617 12:22:33.844850  173004 main.go:141] libmachine: Parsing certificate...
	I0617 12:22:33.844919  173004 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem
	I0617 12:22:33.844941  173004 main.go:141] libmachine: Decoding PEM data...
	I0617 12:22:33.844952  173004 main.go:141] libmachine: Parsing certificate...
	I0617 12:22:33.844968  173004 main.go:141] libmachine: Running pre-create checks...
	I0617 12:22:33.844987  173004 main.go:141] libmachine: (auto-253383) Calling .PreCreateCheck
	I0617 12:22:33.845378  173004 main.go:141] libmachine: (auto-253383) Calling .GetConfigRaw
	I0617 12:22:33.845818  173004 main.go:141] libmachine: Creating machine...
	I0617 12:22:33.845837  173004 main.go:141] libmachine: (auto-253383) Calling .Create
	I0617 12:22:33.845976  173004 main.go:141] libmachine: (auto-253383) Creating KVM machine...
	I0617 12:22:33.847732  173004 main.go:141] libmachine: (auto-253383) DBG | found existing default KVM network
	I0617 12:22:33.849647  173004 main.go:141] libmachine: (auto-253383) DBG | I0617 12:22:33.849471  173027 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000296110}
	I0617 12:22:33.849671  173004 main.go:141] libmachine: (auto-253383) DBG | created network xml: 
	I0617 12:22:33.849706  173004 main.go:141] libmachine: (auto-253383) DBG | <network>
	I0617 12:22:33.849719  173004 main.go:141] libmachine: (auto-253383) DBG |   <name>mk-auto-253383</name>
	I0617 12:22:33.849729  173004 main.go:141] libmachine: (auto-253383) DBG |   <dns enable='no'/>
	I0617 12:22:33.849736  173004 main.go:141] libmachine: (auto-253383) DBG |   
	I0617 12:22:33.849778  173004 main.go:141] libmachine: (auto-253383) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0617 12:22:33.849806  173004 main.go:141] libmachine: (auto-253383) DBG |     <dhcp>
	I0617 12:22:33.849818  173004 main.go:141] libmachine: (auto-253383) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0617 12:22:33.849828  173004 main.go:141] libmachine: (auto-253383) DBG |     </dhcp>
	I0617 12:22:33.849838  173004 main.go:141] libmachine: (auto-253383) DBG |   </ip>
	I0617 12:22:33.849863  173004 main.go:141] libmachine: (auto-253383) DBG |   
	I0617 12:22:33.849875  173004 main.go:141] libmachine: (auto-253383) DBG | </network>
	I0617 12:22:33.849883  173004 main.go:141] libmachine: (auto-253383) DBG | 
	I0617 12:22:33.855667  173004 main.go:141] libmachine: (auto-253383) DBG | trying to create private KVM network mk-auto-253383 192.168.39.0/24...
	I0617 12:22:33.940858  173004 main.go:141] libmachine: (auto-253383) DBG | private KVM network mk-auto-253383 192.168.39.0/24 created
	I0617 12:22:33.940888  173004 main.go:141] libmachine: (auto-253383) Setting up store path in /home/jenkins/minikube-integration/19084-112967/.minikube/machines/auto-253383 ...
	I0617 12:22:33.940900  173004 main.go:141] libmachine: (auto-253383) DBG | I0617 12:22:33.940865  173027 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 12:22:33.940921  173004 main.go:141] libmachine: (auto-253383) Building disk image from file:///home/jenkins/minikube-integration/19084-112967/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso
	I0617 12:22:33.941022  173004 main.go:141] libmachine: (auto-253383) Downloading /home/jenkins/minikube-integration/19084-112967/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19084-112967/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso...
	I0617 12:22:34.193113  173004 main.go:141] libmachine: (auto-253383) DBG | I0617 12:22:34.192969  173027 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/auto-253383/id_rsa...
	I0617 12:22:34.283025  173004 main.go:141] libmachine: (auto-253383) DBG | I0617 12:22:34.282864  173027 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/auto-253383/auto-253383.rawdisk...
	I0617 12:22:34.283073  173004 main.go:141] libmachine: (auto-253383) DBG | Writing magic tar header
	I0617 12:22:34.283092  173004 main.go:141] libmachine: (auto-253383) DBG | Writing SSH key tar header
	I0617 12:22:34.283104  173004 main.go:141] libmachine: (auto-253383) DBG | I0617 12:22:34.283044  173027 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19084-112967/.minikube/machines/auto-253383 ...
	I0617 12:22:34.283193  173004 main.go:141] libmachine: (auto-253383) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/auto-253383
	I0617 12:22:34.283242  173004 main.go:141] libmachine: (auto-253383) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube/machines/auto-253383 (perms=drwx------)
	I0617 12:22:34.283262  173004 main.go:141] libmachine: (auto-253383) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube/machines (perms=drwxr-xr-x)
	I0617 12:22:34.283272  173004 main.go:141] libmachine: (auto-253383) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube/machines
	I0617 12:22:34.283298  173004 main.go:141] libmachine: (auto-253383) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 12:22:34.283313  173004 main.go:141] libmachine: (auto-253383) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967
	I0617 12:22:34.283330  173004 main.go:141] libmachine: (auto-253383) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube (perms=drwxr-xr-x)
	I0617 12:22:34.283348  173004 main.go:141] libmachine: (auto-253383) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967 (perms=drwxrwxr-x)
	I0617 12:22:34.283361  173004 main.go:141] libmachine: (auto-253383) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0617 12:22:34.283374  173004 main.go:141] libmachine: (auto-253383) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0617 12:22:34.283388  173004 main.go:141] libmachine: (auto-253383) DBG | Checking permissions on dir: /home/jenkins
	I0617 12:22:34.283398  173004 main.go:141] libmachine: (auto-253383) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0617 12:22:34.283413  173004 main.go:141] libmachine: (auto-253383) Creating domain...
	I0617 12:22:34.283424  173004 main.go:141] libmachine: (auto-253383) DBG | Checking permissions on dir: /home
	I0617 12:22:34.283434  173004 main.go:141] libmachine: (auto-253383) DBG | Skipping /home - not owner
	I0617 12:22:34.284807  173004 main.go:141] libmachine: (auto-253383) define libvirt domain using xml: 
	I0617 12:22:34.284834  173004 main.go:141] libmachine: (auto-253383) <domain type='kvm'>
	I0617 12:22:34.284846  173004 main.go:141] libmachine: (auto-253383)   <name>auto-253383</name>
	I0617 12:22:34.284854  173004 main.go:141] libmachine: (auto-253383)   <memory unit='MiB'>3072</memory>
	I0617 12:22:34.284863  173004 main.go:141] libmachine: (auto-253383)   <vcpu>2</vcpu>
	I0617 12:22:34.284870  173004 main.go:141] libmachine: (auto-253383)   <features>
	I0617 12:22:34.284882  173004 main.go:141] libmachine: (auto-253383)     <acpi/>
	I0617 12:22:34.284890  173004 main.go:141] libmachine: (auto-253383)     <apic/>
	I0617 12:22:34.284901  173004 main.go:141] libmachine: (auto-253383)     <pae/>
	I0617 12:22:34.284923  173004 main.go:141] libmachine: (auto-253383)     
	I0617 12:22:34.284940  173004 main.go:141] libmachine: (auto-253383)   </features>
	I0617 12:22:34.284950  173004 main.go:141] libmachine: (auto-253383)   <cpu mode='host-passthrough'>
	I0617 12:22:34.284957  173004 main.go:141] libmachine: (auto-253383)   
	I0617 12:22:34.284966  173004 main.go:141] libmachine: (auto-253383)   </cpu>
	I0617 12:22:34.284973  173004 main.go:141] libmachine: (auto-253383)   <os>
	I0617 12:22:34.284993  173004 main.go:141] libmachine: (auto-253383)     <type>hvm</type>
	I0617 12:22:34.285005  173004 main.go:141] libmachine: (auto-253383)     <boot dev='cdrom'/>
	I0617 12:22:34.285013  173004 main.go:141] libmachine: (auto-253383)     <boot dev='hd'/>
	I0617 12:22:34.285028  173004 main.go:141] libmachine: (auto-253383)     <bootmenu enable='no'/>
	I0617 12:22:34.285039  173004 main.go:141] libmachine: (auto-253383)   </os>
	I0617 12:22:34.285050  173004 main.go:141] libmachine: (auto-253383)   <devices>
	I0617 12:22:34.285062  173004 main.go:141] libmachine: (auto-253383)     <disk type='file' device='cdrom'>
	I0617 12:22:34.285076  173004 main.go:141] libmachine: (auto-253383)       <source file='/home/jenkins/minikube-integration/19084-112967/.minikube/machines/auto-253383/boot2docker.iso'/>
	I0617 12:22:34.285089  173004 main.go:141] libmachine: (auto-253383)       <target dev='hdc' bus='scsi'/>
	I0617 12:22:34.285096  173004 main.go:141] libmachine: (auto-253383)       <readonly/>
	I0617 12:22:34.285107  173004 main.go:141] libmachine: (auto-253383)     </disk>
	I0617 12:22:34.285117  173004 main.go:141] libmachine: (auto-253383)     <disk type='file' device='disk'>
	I0617 12:22:34.285133  173004 main.go:141] libmachine: (auto-253383)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0617 12:22:34.285150  173004 main.go:141] libmachine: (auto-253383)       <source file='/home/jenkins/minikube-integration/19084-112967/.minikube/machines/auto-253383/auto-253383.rawdisk'/>
	I0617 12:22:34.285160  173004 main.go:141] libmachine: (auto-253383)       <target dev='hda' bus='virtio'/>
	I0617 12:22:34.285189  173004 main.go:141] libmachine: (auto-253383)     </disk>
	I0617 12:22:34.285212  173004 main.go:141] libmachine: (auto-253383)     <interface type='network'>
	I0617 12:22:34.285226  173004 main.go:141] libmachine: (auto-253383)       <source network='mk-auto-253383'/>
	I0617 12:22:34.285252  173004 main.go:141] libmachine: (auto-253383)       <model type='virtio'/>
	I0617 12:22:34.285264  173004 main.go:141] libmachine: (auto-253383)     </interface>
	I0617 12:22:34.285274  173004 main.go:141] libmachine: (auto-253383)     <interface type='network'>
	I0617 12:22:34.285287  173004 main.go:141] libmachine: (auto-253383)       <source network='default'/>
	I0617 12:22:34.285298  173004 main.go:141] libmachine: (auto-253383)       <model type='virtio'/>
	I0617 12:22:34.285311  173004 main.go:141] libmachine: (auto-253383)     </interface>
	I0617 12:22:34.285324  173004 main.go:141] libmachine: (auto-253383)     <serial type='pty'>
	I0617 12:22:34.285337  173004 main.go:141] libmachine: (auto-253383)       <target port='0'/>
	I0617 12:22:34.285347  173004 main.go:141] libmachine: (auto-253383)     </serial>
	I0617 12:22:34.285360  173004 main.go:141] libmachine: (auto-253383)     <console type='pty'>
	I0617 12:22:34.285370  173004 main.go:141] libmachine: (auto-253383)       <target type='serial' port='0'/>
	I0617 12:22:34.285380  173004 main.go:141] libmachine: (auto-253383)     </console>
	I0617 12:22:34.285394  173004 main.go:141] libmachine: (auto-253383)     <rng model='virtio'>
	I0617 12:22:34.285409  173004 main.go:141] libmachine: (auto-253383)       <backend model='random'>/dev/random</backend>
	I0617 12:22:34.285419  173004 main.go:141] libmachine: (auto-253383)     </rng>
	I0617 12:22:34.285429  173004 main.go:141] libmachine: (auto-253383)     
	I0617 12:22:34.285439  173004 main.go:141] libmachine: (auto-253383)     
	I0617 12:22:34.285450  173004 main.go:141] libmachine: (auto-253383)   </devices>
	I0617 12:22:34.285459  173004 main.go:141] libmachine: (auto-253383) </domain>
	I0617 12:22:34.285490  173004 main.go:141] libmachine: (auto-253383) 
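	[editor's note] The XML above is the libvirt domain definition for auto-253383: the boot2docker ISO attached as a CD-ROM, the raw disk image, and two virtio NICs (one on the private mk-auto-253383 network, one on default). Assuming libvirt access on the host, the defined objects can be inspected with virsh:
	# dump the domain and networks exactly as libvirt stored them
	virsh -c qemu:///system dumpxml auto-253383
	virsh -c qemu:///system net-dumpxml mk-auto-253383
	# list the two NICs and their MACs (they match the DBG lines nearby)
	virsh -c qemu:///system domiflist auto-253383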
	I0617 12:22:34.289822  173004 main.go:141] libmachine: (auto-253383) DBG | domain auto-253383 has defined MAC address 52:54:00:eb:0b:b7 in network default
	I0617 12:22:34.290466  173004 main.go:141] libmachine: (auto-253383) Ensuring networks are active...
	I0617 12:22:34.290492  173004 main.go:141] libmachine: (auto-253383) DBG | domain auto-253383 has defined MAC address 52:54:00:37:82:c9 in network mk-auto-253383
	I0617 12:22:34.291287  173004 main.go:141] libmachine: (auto-253383) Ensuring network default is active
	I0617 12:22:34.291819  173004 main.go:141] libmachine: (auto-253383) Ensuring network mk-auto-253383 is active
	I0617 12:22:34.292412  173004 main.go:141] libmachine: (auto-253383) Getting domain xml...
	I0617 12:22:34.293377  173004 main.go:141] libmachine: (auto-253383) Creating domain...
	I0617 12:22:35.678339  173004 main.go:141] libmachine: (auto-253383) Waiting to get IP...
	I0617 12:22:35.679122  173004 main.go:141] libmachine: (auto-253383) DBG | domain auto-253383 has defined MAC address 52:54:00:37:82:c9 in network mk-auto-253383
	I0617 12:22:35.679554  173004 main.go:141] libmachine: (auto-253383) DBG | unable to find current IP address of domain auto-253383 in network mk-auto-253383
	I0617 12:22:35.679585  173004 main.go:141] libmachine: (auto-253383) DBG | I0617 12:22:35.679532  173027 retry.go:31] will retry after 187.711017ms: waiting for machine to come up
	I0617 12:22:35.869183  173004 main.go:141] libmachine: (auto-253383) DBG | domain auto-253383 has defined MAC address 52:54:00:37:82:c9 in network mk-auto-253383
	I0617 12:22:35.869755  173004 main.go:141] libmachine: (auto-253383) DBG | unable to find current IP address of domain auto-253383 in network mk-auto-253383
	I0617 12:22:35.869784  173004 main.go:141] libmachine: (auto-253383) DBG | I0617 12:22:35.869705  173027 retry.go:31] will retry after 320.984698ms: waiting for machine to come up
	I0617 12:22:36.192491  173004 main.go:141] libmachine: (auto-253383) DBG | domain auto-253383 has defined MAC address 52:54:00:37:82:c9 in network mk-auto-253383
	I0617 12:22:36.193018  173004 main.go:141] libmachine: (auto-253383) DBG | unable to find current IP address of domain auto-253383 in network mk-auto-253383
	I0617 12:22:36.193049  173004 main.go:141] libmachine: (auto-253383) DBG | I0617 12:22:36.192967  173027 retry.go:31] will retry after 465.868787ms: waiting for machine to come up
	I0617 12:22:36.660633  173004 main.go:141] libmachine: (auto-253383) DBG | domain auto-253383 has defined MAC address 52:54:00:37:82:c9 in network mk-auto-253383
	I0617 12:22:36.661215  173004 main.go:141] libmachine: (auto-253383) DBG | unable to find current IP address of domain auto-253383 in network mk-auto-253383
	I0617 12:22:36.661244  173004 main.go:141] libmachine: (auto-253383) DBG | I0617 12:22:36.661176  173027 retry.go:31] will retry after 374.62418ms: waiting for machine to come up
	I0617 12:22:37.037881  173004 main.go:141] libmachine: (auto-253383) DBG | domain auto-253383 has defined MAC address 52:54:00:37:82:c9 in network mk-auto-253383
	I0617 12:22:37.038403  173004 main.go:141] libmachine: (auto-253383) DBG | unable to find current IP address of domain auto-253383 in network mk-auto-253383
	I0617 12:22:37.038433  173004 main.go:141] libmachine: (auto-253383) DBG | I0617 12:22:37.038382  173027 retry.go:31] will retry after 596.819413ms: waiting for machine to come up
	I0617 12:22:37.637559  173004 main.go:141] libmachine: (auto-253383) DBG | domain auto-253383 has defined MAC address 52:54:00:37:82:c9 in network mk-auto-253383
	I0617 12:22:37.638101  173004 main.go:141] libmachine: (auto-253383) DBG | unable to find current IP address of domain auto-253383 in network mk-auto-253383
	I0617 12:22:37.638137  173004 main.go:141] libmachine: (auto-253383) DBG | I0617 12:22:37.638070  173027 retry.go:31] will retry after 837.043272ms: waiting for machine to come up
	I0617 12:22:38.476821  173004 main.go:141] libmachine: (auto-253383) DBG | domain auto-253383 has defined MAC address 52:54:00:37:82:c9 in network mk-auto-253383
	I0617 12:22:38.477379  173004 main.go:141] libmachine: (auto-253383) DBG | unable to find current IP address of domain auto-253383 in network mk-auto-253383
	I0617 12:22:38.477409  173004 main.go:141] libmachine: (auto-253383) DBG | I0617 12:22:38.477318  173027 retry.go:31] will retry after 1.006243903s: waiting for machine to come up
	I0617 12:22:39.485566  173004 main.go:141] libmachine: (auto-253383) DBG | domain auto-253383 has defined MAC address 52:54:00:37:82:c9 in network mk-auto-253383
	I0617 12:22:39.486036  173004 main.go:141] libmachine: (auto-253383) DBG | unable to find current IP address of domain auto-253383 in network mk-auto-253383
	I0617 12:22:39.486063  173004 main.go:141] libmachine: (auto-253383) DBG | I0617 12:22:39.485987  173027 retry.go:31] will retry after 999.791ms: waiting for machine to come up
	I0617 12:22:40.487281  173004 main.go:141] libmachine: (auto-253383) DBG | domain auto-253383 has defined MAC address 52:54:00:37:82:c9 in network mk-auto-253383
	I0617 12:22:40.487783  173004 main.go:141] libmachine: (auto-253383) DBG | unable to find current IP address of domain auto-253383 in network mk-auto-253383
	I0617 12:22:40.487813  173004 main.go:141] libmachine: (auto-253383) DBG | I0617 12:22:40.487731  173027 retry.go:31] will retry after 1.504104877s: waiting for machine to come up
	I0617 12:22:41.993493  173004 main.go:141] libmachine: (auto-253383) DBG | domain auto-253383 has defined MAC address 52:54:00:37:82:c9 in network mk-auto-253383
	I0617 12:22:41.994036  173004 main.go:141] libmachine: (auto-253383) DBG | unable to find current IP address of domain auto-253383 in network mk-auto-253383
	I0617 12:22:41.994065  173004 main.go:141] libmachine: (auto-253383) DBG | I0617 12:22:41.993950  173027 retry.go:31] will retry after 1.566577938s: waiting for machine to come up
	I0617 12:22:43.561961  173004 main.go:141] libmachine: (auto-253383) DBG | domain auto-253383 has defined MAC address 52:54:00:37:82:c9 in network mk-auto-253383
	I0617 12:22:43.562530  173004 main.go:141] libmachine: (auto-253383) DBG | unable to find current IP address of domain auto-253383 in network mk-auto-253383
	I0617 12:22:43.562674  173004 main.go:141] libmachine: (auto-253383) DBG | I0617 12:22:43.562606  173027 retry.go:31] will retry after 1.897142649s: waiting for machine to come up
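The retry.go lines above show minikube's KVM driver polling libvirt for the new domain's DHCP lease, backing off with a growing, jittered delay until the machine reports an IP. A minimal sketch of that wait loop, assuming a hypothetical lookupIP helper that queries the mk-auto-253383 network for the domain's MAC address (the real logic lives in minikube's kvm2 driver and retry helper, not in this sketch):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying libvirt's DHCP leases for the domain's
// MAC address; it returns an error until a lease appears.
func lookupIP(network, mac string) (string, error) {
	// ... query the libvirt network's DHCP leases here ...
	return "", errors.New("unable to find current IP address")
}

// waitForIP mirrors the pattern in the log: retry with a growing,
// jittered delay until the machine comes up or the deadline passes.
func waitForIP(network, mac string, deadline time.Duration) (string, error) {
	delay := 250 * time.Millisecond
	timeout := time.After(deadline)
	for {
		if ip, err := lookupIP(network, mac); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		select {
		case <-timeout:
			return "", errors.New("timed out waiting for machine IP")
		case <-time.After(wait):
		}
		delay = delay * 3 / 2 // roughly matches the growth seen in the log
	}
}

func main() {
	if ip, err := waitForIP("mk-auto-253383", "52:54:00:37:82:c9", 4*time.Minute); err == nil {
		fmt.Println("machine IP:", ip)
	}
}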
	I0617 12:22:47.616346  172544 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0617 12:22:47.616460  172544 kubeadm.go:309] [preflight] Running pre-flight checks
	I0617 12:22:47.616560  172544 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0617 12:22:47.616676  172544 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0617 12:22:47.616784  172544 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0617 12:22:47.616853  172544 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0617 12:22:47.618011  172544 out.go:204]   - Generating certificates and keys ...
	I0617 12:22:47.618106  172544 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0617 12:22:47.618183  172544 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0617 12:22:47.618262  172544 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0617 12:22:47.618346  172544 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0617 12:22:47.618444  172544 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0617 12:22:47.618524  172544 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0617 12:22:47.618575  172544 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0617 12:22:47.618704  172544 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-335949] and IPs [192.168.61.120 127.0.0.1 ::1]
	I0617 12:22:47.618779  172544 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0617 12:22:47.618964  172544 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-335949] and IPs [192.168.61.120 127.0.0.1 ::1]
	I0617 12:22:47.619057  172544 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0617 12:22:47.619152  172544 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0617 12:22:47.619209  172544 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0617 12:22:47.619256  172544 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0617 12:22:47.619299  172544 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0617 12:22:47.619380  172544 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0617 12:22:47.619473  172544 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0617 12:22:47.619543  172544 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0617 12:22:47.619609  172544 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0617 12:22:47.619719  172544 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0617 12:22:47.619815  172544 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0617 12:22:47.621858  172544 out.go:204]   - Booting up control plane ...
	I0617 12:22:47.621970  172544 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0617 12:22:47.622046  172544 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0617 12:22:47.622121  172544 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0617 12:22:47.622223  172544 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0617 12:22:47.622298  172544 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0617 12:22:47.622358  172544 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0617 12:22:47.622502  172544 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0617 12:22:47.622564  172544 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0617 12:22:47.622653  172544 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.716173ms
	I0617 12:22:47.622754  172544 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0617 12:22:47.622837  172544 kubeadm.go:309] [api-check] The API server is healthy after 5.00212012s
	I0617 12:22:47.622976  172544 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0617 12:22:47.623085  172544 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0617 12:22:47.623146  172544 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0617 12:22:47.623302  172544 kubeadm.go:309] [mark-control-plane] Marking the node newest-cni-335949 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0617 12:22:47.623354  172544 kubeadm.go:309] [bootstrap-token] Using token: 76en1w.i0xw8fxzi3sgh0u1
	I0617 12:22:47.624506  172544 out.go:204]   - Configuring RBAC rules ...
	I0617 12:22:47.624597  172544 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0617 12:22:47.624702  172544 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0617 12:22:47.624831  172544 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0617 12:22:47.624981  172544 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0617 12:22:47.625136  172544 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0617 12:22:47.625264  172544 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0617 12:22:47.625420  172544 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0617 12:22:47.625466  172544 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0617 12:22:47.625506  172544 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0617 12:22:47.625512  172544 kubeadm.go:309] 
	I0617 12:22:47.625560  172544 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0617 12:22:47.625570  172544 kubeadm.go:309] 
	I0617 12:22:47.625637  172544 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0617 12:22:47.625643  172544 kubeadm.go:309] 
	I0617 12:22:47.625667  172544 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0617 12:22:47.625716  172544 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0617 12:22:47.625798  172544 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0617 12:22:47.625811  172544 kubeadm.go:309] 
	I0617 12:22:47.625889  172544 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0617 12:22:47.625898  172544 kubeadm.go:309] 
	I0617 12:22:47.625969  172544 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0617 12:22:47.625984  172544 kubeadm.go:309] 
	I0617 12:22:47.626069  172544 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0617 12:22:47.626178  172544 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0617 12:22:47.626277  172544 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0617 12:22:47.626288  172544 kubeadm.go:309] 
	I0617 12:22:47.626400  172544 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0617 12:22:47.626502  172544 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0617 12:22:47.626512  172544 kubeadm.go:309] 
	I0617 12:22:47.626624  172544 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 76en1w.i0xw8fxzi3sgh0u1 \
	I0617 12:22:47.626771  172544 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a750c130b3df91ed6d57229f5a5d5a2ee0acd56a757f499599f368bc07dbf207 \
	I0617 12:22:47.626808  172544 kubeadm.go:309] 	--control-plane 
	I0617 12:22:47.626817  172544 kubeadm.go:309] 
	I0617 12:22:47.626936  172544 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0617 12:22:47.626950  172544 kubeadm.go:309] 
	I0617 12:22:47.627023  172544 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 76en1w.i0xw8fxzi3sgh0u1 \
	I0617 12:22:47.627126  172544 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a750c130b3df91ed6d57229f5a5d5a2ee0acd56a757f499599f368bc07dbf207 
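The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 digest of the cluster CA's DER-encoded public key (SubjectPublicKeyInfo). A small sketch of how that value can be recomputed from /etc/kubernetes/pki/ca.crt (the standard kubeadm location) to check it against the join command; this is an illustration, not part of the captured run:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Read the cluster CA certificate written by kubeadm.
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Hash the DER-encoded SubjectPublicKeyInfo; this is what the
	// --discovery-token-ca-cert-hash value is derived from.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Printf("sha256:%x\n", sum)
}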
	I0617 12:22:47.627137  172544 cni.go:84] Creating CNI manager for ""
	I0617 12:22:47.627143  172544 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:22:47.629170  172544 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0617 12:22:47.630278  172544 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0617 12:22:47.642683  172544 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
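The two lines above show minikube choosing the bridge CNI for the kvm2+crio combination and copying a generated conflist from memory to /etc/cni/net.d/1-k8s.conflist. The sketch below builds an illustrative bridge conflist in Go; the plugin set, subnet, and field values are assumptions for illustration, not the exact 496 bytes the test wrote:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// conflist is an illustrative bridge CNI configuration in the spirit of
// 1-k8s.conflist; the subnet and plugin options here are assumptions.
var conflist = map[string]any{
	"cniVersion": "0.3.1",
	"name":       "bridge",
	"plugins": []any{
		map[string]any{
			"type":             "bridge",
			"bridge":           "bridge",
			"isDefaultGateway": true,
			"ipMasq":           true,
			"hairpinMode":      true,
			"ipam": map[string]any{
				"type":   "host-local",
				"subnet": "10.244.0.0/16",
			},
		},
		map[string]any{
			"type":         "portmap",
			"capabilities": map[string]bool{"portMappings": true},
		},
	},
}

func main() {
	out, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	// In the log above this content is generated in memory and copied to
	// the node over SSH ("scp memory --> /etc/cni/net.d/1-k8s.conflist").
	fmt.Println(string(out))
}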
	I0617 12:22:47.662960  172544 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0617 12:22:47.663044  172544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-335949 minikube.k8s.io/updated_at=2024_06_17T12_22_47_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6 minikube.k8s.io/name=newest-cni-335949 minikube.k8s.io/primary=true
	I0617 12:22:47.663047  172544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:22:47.698034  172544 ops.go:34] apiserver oom_adj: -16
	I0617 12:22:47.842793  172544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:22:48.343123  172544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:22:45.463262  173004 main.go:141] libmachine: (auto-253383) DBG | domain auto-253383 has defined MAC address 52:54:00:37:82:c9 in network mk-auto-253383
	I0617 12:22:45.463819  173004 main.go:141] libmachine: (auto-253383) DBG | unable to find current IP address of domain auto-253383 in network mk-auto-253383
	I0617 12:22:45.463846  173004 main.go:141] libmachine: (auto-253383) DBG | I0617 12:22:45.463798  173027 retry.go:31] will retry after 2.998582965s: waiting for machine to come up
	I0617 12:22:48.465049  173004 main.go:141] libmachine: (auto-253383) DBG | domain auto-253383 has defined MAC address 52:54:00:37:82:c9 in network mk-auto-253383
	I0617 12:22:48.465610  173004 main.go:141] libmachine: (auto-253383) DBG | unable to find current IP address of domain auto-253383 in network mk-auto-253383
	I0617 12:22:48.465634  173004 main.go:141] libmachine: (auto-253383) DBG | I0617 12:22:48.465544  173027 retry.go:31] will retry after 3.964625464s: waiting for machine to come up
	I0617 12:22:48.842876  172544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:22:49.343570  172544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:22:49.843370  172544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:22:50.343108  172544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:22:50.843634  172544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:22:51.343226  172544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:22:51.843192  172544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:22:52.343693  172544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:22:52.843791  172544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:22:53.343531  172544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
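The repeated "kubectl get sa default" runs above, spaced roughly 500ms apart, appear to be minikube waiting for the default ServiceAccount to exist in the new cluster before it proceeds. A minimal sketch of that poll loop, shelling out to the same kubectl binary and kubeconfig paths shown in the log; the timeout value is an assumption:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	const (
		kubectl    = "/var/lib/minikube/binaries/v1.30.1/kubectl"
		kubeconfig = "/var/lib/minikube/kubeconfig"
	)
	deadline := time.Now().Add(2 * time.Minute)
	for {
		// Succeeds only once the controller-manager has created the
		// default ServiceAccount in the default namespace.
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig).Run()
		if err == nil {
			fmt.Println("default service account is ready")
			return
		}
		if time.Now().After(deadline) {
			log.Fatalf("timed out waiting for default service account: %v", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}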
	I0617 12:22:52.432728  173004 main.go:141] libmachine: (auto-253383) DBG | domain auto-253383 has defined MAC address 52:54:00:37:82:c9 in network mk-auto-253383
	I0617 12:22:52.433193  173004 main.go:141] libmachine: (auto-253383) DBG | unable to find current IP address of domain auto-253383 in network mk-auto-253383
	I0617 12:22:52.433215  173004 main.go:141] libmachine: (auto-253383) DBG | I0617 12:22:52.433147  173027 retry.go:31] will retry after 4.846437852s: waiting for machine to come up
	
	
	==> CRI-O <==
	Jun 17 12:22:54 embed-certs-136195 crio[729]: time="2024-06-17 12:22:54.319859728Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:0a2d4ee66975d8028039fe41452e1f2a3fb6571100f902ae428772608308b49d,Metadata:&PodSandboxMetadata{Name:busybox,Uid:05a900e3-7714-4af1-ace9-eb03535da64a,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718625711134864787,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05a900e3-7714-4af1-ace9-eb03535da64a,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-17T12:01:43.275313058Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d8cdc6ff01f9171f3ad315ea48c690b50791c26874d90e0420b89d4f6c80d6d5,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-9bbjg,Uid:1ba0eee5-436e-4c83-b5ce-3c907d66b641,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718625711134392
143,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-9bbjg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ba0eee5-436e-4c83-b5ce-3c907d66b641,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-17T12:01:43.275314206Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1af3cb62105a73d434e484e72559660a14ec011bcc32694be3ec1f62d4004705,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-dmhfs,Uid:31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718625709332818495,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-dmhfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-17T12:01:43.
275311852Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:da11ecedffb5492af81e1296b913c7844da92a6a33a7d5a0471890adac6ae58f,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:4b04a38a-5006-4496-a24d-0940029193de,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718625703591082943,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b04a38a-5006-4496-a24d-0940029193de,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-
minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-06-17T12:01:43.275317477Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:00f5ac611dd3173bd63432f2166f9b1c1515e0164ca44a072d3500c52b9ac720,Metadata:&PodSandboxMetadata{Name:kube-proxy-25d5n,Uid:52b6d09a-899f-40c4-b1f3-7842ae755165,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718625703587572781,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-25d5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52b6d09a-899f-40c4-b1f3-7842ae755165,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.i
o/config.seen: 2024-06-17T12:01:43.275309072Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e946fe67c58448b571b7b99a84f90edf971ba4599fa70e58a8abcdff5d97d4ae,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-136195,Uid:4ffc4724b55482bd6618c26321a6ec7a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718625698765086218,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ffc4724b55482bd6618c26321a6ec7a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.199:8443,kubernetes.io/config.hash: 4ffc4724b55482bd6618c26321a6ec7a,kubernetes.io/config.seen: 2024-06-17T12:01:38.273841985Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2379b3f0e4841a43b541f5c15e5a70b752ffd5c366eed4c8b63518687ad29e5b,Metadata:&PodSandboxMetadat
a{Name:kube-scheduler-embed-certs-136195,Uid:c01d6f22a5109112fd47d72421c8a716,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718625698752855653,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c01d6f22a5109112fd47d72421c8a716,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c01d6f22a5109112fd47d72421c8a716,kubernetes.io/config.seen: 2024-06-17T12:01:38.273850152Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1243fcd3dd29fe226f3b2c1f3b185d07e05e8284a3a283c3adacfbb73c41a86c,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-136195,Uid:dd5b41313a2a936cb8a7ac0d4d722ccb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718625698739897536,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ku
be-controller-manager-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd5b41313a2a936cb8a7ac0d4d722ccb,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: dd5b41313a2a936cb8a7ac0d4d722ccb,kubernetes.io/config.seen: 2024-06-17T12:01:38.273848407Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6c616f25aff9be709d7133636307a067a952b328aab78ddf130784fdc9d42883,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-136195,Uid:6212321f2ec0f29eea9399e7bace28fb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718625698738047293,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6212321f2ec0f29eea9399e7bace28fb,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.199:2379,kubernetes.io/config.hash: 6212321f2ec0f29eea9399e7ba
ce28fb,kubernetes.io/config.seen: 2024-06-17T12:01:38.273851314Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=5f119d64-8b07-4408-b61e-cdcf7c39caef name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 17 12:22:54 embed-certs-136195 crio[729]: time="2024-06-17 12:22:54.320718915Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a69cc24-6747-42b2-b8c5-f4c4a2758f58 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:22:54 embed-certs-136195 crio[729]: time="2024-06-17 12:22:54.320792201Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a69cc24-6747-42b2-b8c5-f4c4a2758f58 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:22:54 embed-certs-136195 crio[729]: time="2024-06-17 12:22:54.322150176Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:06fc8c454b52ae190c5e04968df2f4b778b273df8fd868edece76e82e1aa618e,PodSandboxId:0a2d4ee66975d8028039fe41452e1f2a3fb6571100f902ae428772608308b49d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1718625712494553608,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05a900e3-7714-4af1-ace9-eb03535da64a,},Annotations:map[string]string{io.kubernetes.container.hash: 95ceef43,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7,PodSandboxId:d8cdc6ff01f9171f3ad315ea48c690b50791c26874d90e0420b89d4f6c80d6d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718625711516797941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9bbjg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ba0eee5-436e-4c83-b5ce-3c907d66b641,},Annotations:map[string]string{io.kubernetes.container.hash: 9e5353ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92,PodSandboxId:da11ecedffb5492af81e1296b913c7844da92a6a33a7d5a0471890adac6ae58f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718625704434508824,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 4b04a38a-5006-4496-a24d-0940029193de,},Annotations:map[string]string{io.kubernetes.container.hash: bbb7a6ad,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d,PodSandboxId:00f5ac611dd3173bd63432f2166f9b1c1515e0164ca44a072d3500c52b9ac720,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718625703706073514,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25d5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52b6d09a-899f-40c4-b
1f3-7842ae755165,},Annotations:map[string]string{io.kubernetes.container.hash: 23086a39,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d,PodSandboxId:2379b3f0e4841a43b541f5c15e5a70b752ffd5c366eed4c8b63518687ad29e5b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718625699292863859,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c01d6f22a5109112fd47d72421c8
a716,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9,PodSandboxId:6c616f25aff9be709d7133636307a067a952b328aab78ddf130784fdc9d42883,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718625699295147383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6212321f2ec0f29eea9399e7bace28fb,},Annotations:map[string]string{io.ku
bernetes.container.hash: b38de5c1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3,PodSandboxId:e946fe67c58448b571b7b99a84f90edf971ba4599fa70e58a8abcdff5d97d4ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718625699299463271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ffc4724b55482bd6618c26321a6ec7a,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7db5fa0c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079,PodSandboxId:1243fcd3dd29fe226f3b2c1f3b185d07e05e8284a3a283c3adacfbb73c41a86c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718625699286383405,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd5b41313a2a936cb8a7ac0d4d722ccb,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1a69cc24-6747-42b2-b8c5-f4c4a2758f58 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:22:54 embed-certs-136195 crio[729]: time="2024-06-17 12:22:54.366657784Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=da6a6da0-1b73-4adf-92df-e770294dfef4 name=/runtime.v1.RuntimeService/Version
	Jun 17 12:22:54 embed-certs-136195 crio[729]: time="2024-06-17 12:22:54.366769586Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=da6a6da0-1b73-4adf-92df-e770294dfef4 name=/runtime.v1.RuntimeService/Version
	Jun 17 12:22:54 embed-certs-136195 crio[729]: time="2024-06-17 12:22:54.368620259Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b5731c16-80e4-45e0-a1cb-40c9cded1b80 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:22:54 embed-certs-136195 crio[729]: time="2024-06-17 12:22:54.369092915Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718626974369068347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b5731c16-80e4-45e0-a1cb-40c9cded1b80 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:22:54 embed-certs-136195 crio[729]: time="2024-06-17 12:22:54.369584696Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f2e75e16-429c-45b2-8b19-b3325d571b06 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:22:54 embed-certs-136195 crio[729]: time="2024-06-17 12:22:54.369664498Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f2e75e16-429c-45b2-8b19-b3325d571b06 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:22:54 embed-certs-136195 crio[729]: time="2024-06-17 12:22:54.369902383Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:06fc8c454b52ae190c5e04968df2f4b778b273df8fd868edece76e82e1aa618e,PodSandboxId:0a2d4ee66975d8028039fe41452e1f2a3fb6571100f902ae428772608308b49d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1718625712494553608,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05a900e3-7714-4af1-ace9-eb03535da64a,},Annotations:map[string]string{io.kubernetes.container.hash: 95ceef43,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7,PodSandboxId:d8cdc6ff01f9171f3ad315ea48c690b50791c26874d90e0420b89d4f6c80d6d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718625711516797941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9bbjg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ba0eee5-436e-4c83-b5ce-3c907d66b641,},Annotations:map[string]string{io.kubernetes.container.hash: 9e5353ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92,PodSandboxId:da11ecedffb5492af81e1296b913c7844da92a6a33a7d5a0471890adac6ae58f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718625704434508824,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 4b04a38a-5006-4496-a24d-0940029193de,},Annotations:map[string]string{io.kubernetes.container.hash: bbb7a6ad,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36,PodSandboxId:da11ecedffb5492af81e1296b913c7844da92a6a33a7d5a0471890adac6ae58f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718625703747285662,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4b04a38a-5006-4496-a24d-0940029193de,},Annotations:map[string]string{io.kubernetes.container.hash: bbb7a6ad,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d,PodSandboxId:00f5ac611dd3173bd63432f2166f9b1c1515e0164ca44a072d3500c52b9ac720,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718625703706073514,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25d5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52b6d09a-899f-40c4-b1f3-7842ae755
165,},Annotations:map[string]string{io.kubernetes.container.hash: 23086a39,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d,PodSandboxId:2379b3f0e4841a43b541f5c15e5a70b752ffd5c366eed4c8b63518687ad29e5b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718625699292863859,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c01d6f22a5109112fd47d72421c8a716,},Annota
tions:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9,PodSandboxId:6c616f25aff9be709d7133636307a067a952b328aab78ddf130784fdc9d42883,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718625699295147383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6212321f2ec0f29eea9399e7bace28fb,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: b38de5c1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3,PodSandboxId:e946fe67c58448b571b7b99a84f90edf971ba4599fa70e58a8abcdff5d97d4ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718625699299463271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ffc4724b55482bd6618c26321a6ec7a,},Annotations:map[string]string{io.kubernetes.container.hash:
7db5fa0c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079,PodSandboxId:1243fcd3dd29fe226f3b2c1f3b185d07e05e8284a3a283c3adacfbb73c41a86c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718625699286383405,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd5b41313a2a936cb8a7ac0d4d722ccb,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f2e75e16-429c-45b2-8b19-b3325d571b06 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:22:54 embed-certs-136195 crio[729]: time="2024-06-17 12:22:54.413729343Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=559c2f46-23a3-4510-b86a-a97ab6cac437 name=/runtime.v1.RuntimeService/Version
	Jun 17 12:22:54 embed-certs-136195 crio[729]: time="2024-06-17 12:22:54.413822330Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=559c2f46-23a3-4510-b86a-a97ab6cac437 name=/runtime.v1.RuntimeService/Version
	Jun 17 12:22:54 embed-certs-136195 crio[729]: time="2024-06-17 12:22:54.415244727Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f1d5bd48-0812-4e20-8a72-d83b319ed475 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:22:54 embed-certs-136195 crio[729]: time="2024-06-17 12:22:54.415626615Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718626974415604357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f1d5bd48-0812-4e20-8a72-d83b319ed475 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:22:54 embed-certs-136195 crio[729]: time="2024-06-17 12:22:54.416421988Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=22b81c68-cc1c-4b6c-8ace-4d195b7b063c name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:22:54 embed-certs-136195 crio[729]: time="2024-06-17 12:22:54.416503815Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22b81c68-cc1c-4b6c-8ace-4d195b7b063c name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:22:54 embed-certs-136195 crio[729]: time="2024-06-17 12:22:54.416684139Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:06fc8c454b52ae190c5e04968df2f4b778b273df8fd868edece76e82e1aa618e,PodSandboxId:0a2d4ee66975d8028039fe41452e1f2a3fb6571100f902ae428772608308b49d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1718625712494553608,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05a900e3-7714-4af1-ace9-eb03535da64a,},Annotations:map[string]string{io.kubernetes.container.hash: 95ceef43,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7,PodSandboxId:d8cdc6ff01f9171f3ad315ea48c690b50791c26874d90e0420b89d4f6c80d6d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718625711516797941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9bbjg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ba0eee5-436e-4c83-b5ce-3c907d66b641,},Annotations:map[string]string{io.kubernetes.container.hash: 9e5353ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92,PodSandboxId:da11ecedffb5492af81e1296b913c7844da92a6a33a7d5a0471890adac6ae58f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718625704434508824,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 4b04a38a-5006-4496-a24d-0940029193de,},Annotations:map[string]string{io.kubernetes.container.hash: bbb7a6ad,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36,PodSandboxId:da11ecedffb5492af81e1296b913c7844da92a6a33a7d5a0471890adac6ae58f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718625703747285662,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4b04a38a-5006-4496-a24d-0940029193de,},Annotations:map[string]string{io.kubernetes.container.hash: bbb7a6ad,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d,PodSandboxId:00f5ac611dd3173bd63432f2166f9b1c1515e0164ca44a072d3500c52b9ac720,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718625703706073514,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25d5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52b6d09a-899f-40c4-b1f3-7842ae755
165,},Annotations:map[string]string{io.kubernetes.container.hash: 23086a39,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d,PodSandboxId:2379b3f0e4841a43b541f5c15e5a70b752ffd5c366eed4c8b63518687ad29e5b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718625699292863859,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c01d6f22a5109112fd47d72421c8a716,},Annota
tions:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9,PodSandboxId:6c616f25aff9be709d7133636307a067a952b328aab78ddf130784fdc9d42883,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718625699295147383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6212321f2ec0f29eea9399e7bace28fb,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: b38de5c1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3,PodSandboxId:e946fe67c58448b571b7b99a84f90edf971ba4599fa70e58a8abcdff5d97d4ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718625699299463271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ffc4724b55482bd6618c26321a6ec7a,},Annotations:map[string]string{io.kubernetes.container.hash:
7db5fa0c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079,PodSandboxId:1243fcd3dd29fe226f3b2c1f3b185d07e05e8284a3a283c3adacfbb73c41a86c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718625699286383405,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd5b41313a2a936cb8a7ac0d4d722ccb,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=22b81c68-cc1c-4b6c-8ace-4d195b7b063c name=/runtime.v1.RuntimeService/ListContainers
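The CRI-O entries above are debug traces of /runtime.v1.RuntimeService/ListContainers, Version, and ImageFsInfo requests arriving over the CRI socket while the node is inspected. The same ListContainers call can be issued by hand with crictl; the sketch below shells out to crictl against CRI-O's default socket path and decodes a few fields, assuming the JSON keys mirror the CRI Container type seen in the responses above:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// crictl speaks the same CRI gRPC API that produces the
	// ListContainers traces in the CRI-O log above.
	out, err := exec.Command("sudo", "crictl",
		"--runtime-endpoint", "unix:///var/run/crio/crio.sock",
		"ps", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var resp struct {
		Containers []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			State string `json:"state"`
		} `json:"containers"`
	}
	if err := json.Unmarshal(out, &resp); err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%-30s %s\n", c.Metadata.Name, c.State)
	}
}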
	Jun 17 12:22:54 embed-certs-136195 crio[729]: time="2024-06-17 12:22:54.452057304Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ecda9517-ae02-41eb-9add-fdd6cfb0407e name=/runtime.v1.RuntimeService/Version
	Jun 17 12:22:54 embed-certs-136195 crio[729]: time="2024-06-17 12:22:54.452188086Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ecda9517-ae02-41eb-9add-fdd6cfb0407e name=/runtime.v1.RuntimeService/Version
	Jun 17 12:22:54 embed-certs-136195 crio[729]: time="2024-06-17 12:22:54.453599739Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6b52eba5-c416-4e69-ad3b-98924d05a2c5 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:22:54 embed-certs-136195 crio[729]: time="2024-06-17 12:22:54.454394927Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718626974454361156,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6b52eba5-c416-4e69-ad3b-98924d05a2c5 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:22:54 embed-certs-136195 crio[729]: time="2024-06-17 12:22:54.455062575Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=59a22390-e305-4942-95d2-0fa21c71c798 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:22:54 embed-certs-136195 crio[729]: time="2024-06-17 12:22:54.455139680Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=59a22390-e305-4942-95d2-0fa21c71c798 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:22:54 embed-certs-136195 crio[729]: time="2024-06-17 12:22:54.455330966Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:06fc8c454b52ae190c5e04968df2f4b778b273df8fd868edece76e82e1aa618e,PodSandboxId:0a2d4ee66975d8028039fe41452e1f2a3fb6571100f902ae428772608308b49d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1718625712494553608,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05a900e3-7714-4af1-ace9-eb03535da64a,},Annotations:map[string]string{io.kubernetes.container.hash: 95ceef43,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7,PodSandboxId:d8cdc6ff01f9171f3ad315ea48c690b50791c26874d90e0420b89d4f6c80d6d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718625711516797941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9bbjg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ba0eee5-436e-4c83-b5ce-3c907d66b641,},Annotations:map[string]string{io.kubernetes.container.hash: 9e5353ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92,PodSandboxId:da11ecedffb5492af81e1296b913c7844da92a6a33a7d5a0471890adac6ae58f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718625704434508824,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 4b04a38a-5006-4496-a24d-0940029193de,},Annotations:map[string]string{io.kubernetes.container.hash: bbb7a6ad,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36,PodSandboxId:da11ecedffb5492af81e1296b913c7844da92a6a33a7d5a0471890adac6ae58f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718625703747285662,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4b04a38a-5006-4496-a24d-0940029193de,},Annotations:map[string]string{io.kubernetes.container.hash: bbb7a6ad,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d,PodSandboxId:00f5ac611dd3173bd63432f2166f9b1c1515e0164ca44a072d3500c52b9ac720,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718625703706073514,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25d5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52b6d09a-899f-40c4-b1f3-7842ae755
165,},Annotations:map[string]string{io.kubernetes.container.hash: 23086a39,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d,PodSandboxId:2379b3f0e4841a43b541f5c15e5a70b752ffd5c366eed4c8b63518687ad29e5b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718625699292863859,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c01d6f22a5109112fd47d72421c8a716,},Annota
tions:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9,PodSandboxId:6c616f25aff9be709d7133636307a067a952b328aab78ddf130784fdc9d42883,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718625699295147383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6212321f2ec0f29eea9399e7bace28fb,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: b38de5c1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3,PodSandboxId:e946fe67c58448b571b7b99a84f90edf971ba4599fa70e58a8abcdff5d97d4ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718625699299463271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ffc4724b55482bd6618c26321a6ec7a,},Annotations:map[string]string{io.kubernetes.container.hash:
7db5fa0c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079,PodSandboxId:1243fcd3dd29fe226f3b2c1f3b185d07e05e8284a3a283c3adacfbb73c41a86c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718625699286383405,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-136195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd5b41313a2a936cb8a7ac0d4d722ccb,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=59a22390-e305-4942-95d2-0fa21c71c798 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	06fc8c454b52a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   21 minutes ago      Running             busybox                   1                   0a2d4ee66975d       busybox
	c610c7cafac56       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      21 minutes ago      Running             coredns                   1                   d8cdc6ff01f91       coredns-7db6d8ff4d-9bbjg
	02e13a25f376f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Running             storage-provisioner       2                   da11ecedffb54       storage-provisioner
	7a03f8aca2ce9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       1                   da11ecedffb54       storage-provisioner
	c2c534f434b08       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      21 minutes ago      Running             kube-proxy                1                   00f5ac611dd31       kube-proxy-25d5n
	5e7549e074802       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      21 minutes ago      Running             kube-apiserver            1                   e946fe67c5844       kube-apiserver-embed-certs-136195
	fb99e2cd3471d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      21 minutes ago      Running             etcd                      1                   6c616f25aff9b       etcd-embed-certs-136195
	157a0a3401555       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      21 minutes ago      Running             kube-scheduler            1                   2379b3f0e4841       kube-scheduler-embed-certs-136195
	2436d81981855       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      21 minutes ago      Running             kube-controller-manager   1                   1243fcd3dd29f       kube-controller-manager-embed-certs-136195
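	The table above is the CRI-level view of the node. A roughly equivalent listing can be pulled straight from the runtime over SSH; the profile name is taken from this log, and crictl is assumed to be on the guest PATH (it normally is in the minikube ISO):
	
	  # all containers, including exited ones, as seen by cri-o
	  out/minikube-linux-amd64 -p embed-certs-136195 ssh -- sudo crictl ps -a
	  # the pod sandboxes backing them
	  out/minikube-linux-amd64 -p embed-certs-136195 ssh -- sudo crictl pods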
	
	
	==> coredns [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:55932 - 50197 "HINFO IN 4346171118022230615.3943262594765871989. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028664505s
	
	
	==> describe nodes <==
	Name:               embed-certs-136195
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-136195
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6
	                    minikube.k8s.io/name=embed-certs-136195
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_17T11_53_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jun 2024 11:53:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-136195
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jun 2024 12:22:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jun 2024 12:22:39 +0000   Mon, 17 Jun 2024 11:53:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jun 2024 12:22:39 +0000   Mon, 17 Jun 2024 11:53:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jun 2024 12:22:39 +0000   Mon, 17 Jun 2024 11:53:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jun 2024 12:22:39 +0000   Mon, 17 Jun 2024 12:01:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.199
	  Hostname:    embed-certs-136195
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f1899a7a26ff4dfea374ed2fa1ef0511
	  System UUID:                f1899a7a-26ff-4dfe-a374-ed2fa1ef0511
	  Boot ID:                    6cf9c77d-8415-4e84-a4b7-6d0c2ee58ca7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7db6d8ff4d-9bbjg                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-embed-certs-136195                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-embed-certs-136195             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-136195    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-25d5n                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-embed-certs-136195             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-569cc877fc-dmhfs               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node embed-certs-136195 status is now: NodeHasSufficientMemory
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node embed-certs-136195 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node embed-certs-136195 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node embed-certs-136195 status is now: NodeHasSufficientPID
	  Normal  NodeReady                29m                kubelet          Node embed-certs-136195 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node embed-certs-136195 event: Registered Node embed-certs-136195 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node embed-certs-136195 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node embed-certs-136195 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node embed-certs-136195 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node embed-certs-136195 event: Registered Node embed-certs-136195 in Controller
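	The node description above corresponds to kubectl describe node output; the doubled NodeHasSufficientMemory / RegisteredNode events appear to reflect the kubelet restart around 12:01 rather than a second node. To re-query it directly (context name taken from this log):
	
	  kubectl --context embed-certs-136195 describe node embed-certs-136195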
	
	
	==> dmesg <==
	[Jun17 12:01] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051624] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040263] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.519435] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.417283] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.586844] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.346490] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.060823] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058122] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.162800] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +0.140484] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[  +0.293567] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[  +4.430464] systemd-fstab-generator[809]: Ignoring "noauto" option for root device
	[  +0.055705] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.666552] systemd-fstab-generator[933]: Ignoring "noauto" option for root device
	[  +5.648221] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.337366] systemd-fstab-generator[1609]: Ignoring "noauto" option for root device
	[  +3.377392] kauditd_printk_skb: 67 callbacks suppressed
	[  +6.800863] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9] <==
	{"level":"info","ts":"2024-06-17T12:01:41.437677Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-06-17T12:01:57.982739Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.141929ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1471709671445730966 > lease_revoke:<id:146c90260bb053df>","response":"size:27"}
	{"level":"warn","ts":"2024-06-17T12:02:18.430382Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"219.010997ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-dmhfs\" ","response":"range_response_count:1 size:4283"}
	{"level":"info","ts":"2024-06-17T12:02:18.43045Z","caller":"traceutil/trace.go:171","msg":"trace[1824285193] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-dmhfs; range_end:; response_count:1; response_revision:588; }","duration":"219.126134ms","start":"2024-06-17T12:02:18.2113Z","end":"2024-06-17T12:02:18.430426Z","steps":["trace[1824285193] 'range keys from in-memory index tree'  (duration: 218.865891ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T12:11:41.464956Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":817}
	{"level":"info","ts":"2024-06-17T12:11:41.474746Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":817,"took":"9.367947ms","hash":1255471678,"current-db-size-bytes":2711552,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2711552,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-06-17T12:11:41.474815Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1255471678,"revision":817,"compact-revision":-1}
	{"level":"info","ts":"2024-06-17T12:16:41.47222Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1060}
	{"level":"info","ts":"2024-06-17T12:16:41.476956Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1060,"took":"4.368534ms","hash":531864513,"current-db-size-bytes":2711552,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1658880,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-06-17T12:16:41.47708Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":531864513,"revision":1060,"compact-revision":817}
	{"level":"info","ts":"2024-06-17T12:21:41.481878Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1304}
	{"level":"info","ts":"2024-06-17T12:21:41.486263Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1304,"took":"3.972669ms","hash":3266786345,"current-db-size-bytes":2711552,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1630208,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-06-17T12:21:41.48635Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3266786345,"revision":1304,"compact-revision":1060}
	{"level":"warn","ts":"2024-06-17T12:22:36.666112Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.646075ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-17T12:22:36.666236Z","caller":"traceutil/trace.go:171","msg":"trace[178021224] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1591; }","duration":"182.900023ms","start":"2024-06-17T12:22:36.483289Z","end":"2024-06-17T12:22:36.666189Z","steps":["trace[178021224] 'range keys from in-memory index tree'  (duration: 182.596109ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T12:22:37.612023Z","caller":"traceutil/trace.go:171","msg":"trace[123405066] linearizableReadLoop","detail":"{readStateIndex:1881; appliedIndex:1880; }","duration":"325.314797ms","start":"2024-06-17T12:22:37.286651Z","end":"2024-06-17T12:22:37.611966Z","steps":["trace[123405066] 'read index received'  (duration: 325.174638ms)","trace[123405066] 'applied index is now lower than readState.Index'  (duration: 139.671µs)"],"step_count":2}
	{"level":"warn","ts":"2024-06-17T12:22:37.612227Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"325.576652ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-06-17T12:22:37.612276Z","caller":"traceutil/trace.go:171","msg":"trace[1351748830] range","detail":"{range_begin:/registry/certificatesigningrequests/; range_end:/registry/certificatesigningrequests0; response_count:0; response_revision:1592; }","duration":"325.676416ms","start":"2024-06-17T12:22:37.28659Z","end":"2024-06-17T12:22:37.612266Z","steps":["trace[1351748830] 'agreement among raft nodes before linearized reading'  (duration: 325.564654ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T12:22:37.61233Z","caller":"traceutil/trace.go:171","msg":"trace[1504609401] transaction","detail":"{read_only:false; response_revision:1592; number_of_response:1; }","duration":"338.481966ms","start":"2024-06-17T12:22:37.273834Z","end":"2024-06-17T12:22:37.612316Z","steps":["trace[1504609401] 'process raft request'  (duration: 338.033853ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-17T12:22:37.612331Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-17T12:22:37.286573Z","time spent":"325.743428ms","remote":"127.0.0.1:33332","response type":"/etcdserverpb.KV/Range","request count":0,"request size":80,"response count":1,"response size":29,"request content":"key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" count_only:true "}
	{"level":"warn","ts":"2024-06-17T12:22:37.613092Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-17T12:22:37.273819Z","time spent":"338.581366ms","remote":"127.0.0.1:33346","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":683,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-skct6nycsfpnqtopx27mkhxkui\" mod_revision:1584 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-skct6nycsfpnqtopx27mkhxkui\" value_size:610 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-skct6nycsfpnqtopx27mkhxkui\" > >"}
	{"level":"info","ts":"2024-06-17T12:22:37.873647Z","caller":"traceutil/trace.go:171","msg":"trace[1737965554] transaction","detail":"{read_only:false; response_revision:1593; number_of_response:1; }","duration":"114.705863ms","start":"2024-06-17T12:22:37.758921Z","end":"2024-06-17T12:22:37.873627Z","steps":["trace[1737965554] 'process raft request'  (duration: 114.467058ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-17T12:22:38.089149Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.486077ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-17T12:22:38.08929Z","caller":"traceutil/trace.go:171","msg":"trace[2070252208] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1593; }","duration":"163.664196ms","start":"2024-06-17T12:22:37.925611Z","end":"2024-06-17T12:22:38.089275Z","steps":["trace[2070252208] 'range keys from in-memory index tree'  (duration: 163.434357ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T12:22:39.266857Z","caller":"traceutil/trace.go:171","msg":"trace[1427388712] transaction","detail":"{read_only:false; response_revision:1595; number_of_response:1; }","duration":"114.179604ms","start":"2024-06-17T12:22:39.152663Z","end":"2024-06-17T12:22:39.266843Z","steps":["trace[1427388712] 'process raft request'  (duration: 114.071073ms)"],"step_count":1}
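	The repeated "apply request took too long" warnings usually point to slow disk or CPU contention in the VM rather than data loss; the cluster kept serving. Two possible spot checks: /readyz is a standard apiserver endpoint, while the etcdctl invocation assumes etcdctl ships in the etcd image and that the certificates live under /var/lib/minikube/certs/etcd (the usual minikube layout; verify against the etcd pod spec):
	
	  # aggregate apiserver health, including the etcd check
	  kubectl --context embed-certs-136195 get --raw='/readyz?verbose'
	  # per-member etcd status (cert paths are an assumption)
	  kubectl --context embed-certs-136195 -n kube-system exec etcd-embed-certs-136195 -- \
	    etcdctl --endpoints=https://127.0.0.1:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/server.crt \
	    --key=/var/lib/minikube/certs/etcd/server.key \
	    endpoint status --write-out=table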
	
	
	==> kernel <==
	 12:22:54 up 21 min,  0 users,  load average: 0.27, 0.17, 0.15
	Linux embed-certs-136195 5.10.207 #1 SMP Tue Jun 11 00:16:05 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3] <==
	I0617 12:17:43.813894       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0617 12:19:43.812479       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:19:43.812815       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0617 12:19:43.812847       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0617 12:19:43.814845       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:19:43.814902       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0617 12:19:43.814910       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0617 12:21:42.817312       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:21:42.817463       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0617 12:21:43.817675       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:21:43.817787       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0617 12:21:43.817815       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0617 12:21:43.817870       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:21:43.817951       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0617 12:21:43.819227       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0617 12:22:43.818162       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:22:43.818236       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0617 12:22:43.818254       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0617 12:22:43.819400       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:22:43.819809       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0617 12:22:43.819870       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
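	The 503s for v1beta1.metrics.k8s.io are consistent with the metrics-server pod never becoming ready (its image pull fails; see the kubelet log below), so the aggregated API has no healthy backend. The registration and the backing pod can be checked directly; the k8s-app=metrics-server label is an assumption about the addon manifests:
	
	  kubectl --context embed-certs-136195 get apiservice v1beta1.metrics.k8s.io
	  kubectl --context embed-certs-136195 -n kube-system get pods -l k8s-app=metrics-server -o wide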
	
	
	==> kube-controller-manager [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079] <==
	I0617 12:16:56.942288       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:17:26.425088       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:17:26.949897       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:17:56.430932       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:17:56.960647       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0617 12:17:59.334887       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="210.94µs"
	I0617 12:18:13.329644       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="99.318µs"
	E0617 12:18:26.436117       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:18:26.968457       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:18:56.441192       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:18:56.975699       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:19:26.446341       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:19:26.986612       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:19:56.452653       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:19:56.994740       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:20:26.457410       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:20:27.003419       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:20:56.463688       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:20:57.012546       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:21:26.469044       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:21:27.022950       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:21:56.475067       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:21:57.033740       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:22:26.479895       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:22:27.042507       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d] <==
	I0617 12:01:43.995657       1 server_linux.go:69] "Using iptables proxy"
	I0617 12:01:44.005891       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.199"]
	I0617 12:01:44.060503       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0617 12:01:44.062664       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0617 12:01:44.062800       1 server_linux.go:165] "Using iptables Proxier"
	I0617 12:01:44.066148       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0617 12:01:44.066391       1 server.go:872] "Version info" version="v1.30.1"
	I0617 12:01:44.066423       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0617 12:01:44.067727       1 config.go:192] "Starting service config controller"
	I0617 12:01:44.067817       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0617 12:01:44.067855       1 config.go:101] "Starting endpoint slice config controller"
	I0617 12:01:44.067860       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0617 12:01:44.068581       1 config.go:319] "Starting node config controller"
	I0617 12:01:44.068609       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0617 12:01:44.168913       1 shared_informer.go:320] Caches are synced for node config
	I0617 12:01:44.168955       1 shared_informer.go:320] Caches are synced for service config
	I0617 12:01:44.169026       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d] <==
	I0617 12:01:40.220617       1 serving.go:380] Generated self-signed cert in-memory
	W0617 12:01:42.752225       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0617 12:01:42.752321       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0617 12:01:42.752333       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0617 12:01:42.752339       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0617 12:01:42.798687       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0617 12:01:42.798727       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0617 12:01:42.800748       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0617 12:01:42.800862       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0617 12:01:42.800891       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0617 12:01:42.802067       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0617 12:01:42.901441       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 17 12:20:38 embed-certs-136195 kubelet[940]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 17 12:20:48 embed-certs-136195 kubelet[940]: E0617 12:20:48.316877     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dmhfs" podUID="31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0"
	Jun 17 12:20:59 embed-certs-136195 kubelet[940]: E0617 12:20:59.316412     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dmhfs" podUID="31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0"
	Jun 17 12:21:10 embed-certs-136195 kubelet[940]: E0617 12:21:10.316927     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dmhfs" podUID="31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0"
	Jun 17 12:21:23 embed-certs-136195 kubelet[940]: E0617 12:21:23.317874     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dmhfs" podUID="31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0"
	Jun 17 12:21:34 embed-certs-136195 kubelet[940]: E0617 12:21:34.316619     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dmhfs" podUID="31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0"
	Jun 17 12:21:38 embed-certs-136195 kubelet[940]: E0617 12:21:38.339038     940 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 17 12:21:38 embed-certs-136195 kubelet[940]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 17 12:21:38 embed-certs-136195 kubelet[940]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 17 12:21:38 embed-certs-136195 kubelet[940]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 12:21:38 embed-certs-136195 kubelet[940]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 17 12:21:47 embed-certs-136195 kubelet[940]: E0617 12:21:47.317306     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dmhfs" podUID="31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0"
	Jun 17 12:21:58 embed-certs-136195 kubelet[940]: E0617 12:21:58.315747     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dmhfs" podUID="31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0"
	Jun 17 12:22:10 embed-certs-136195 kubelet[940]: E0617 12:22:10.316666     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dmhfs" podUID="31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0"
	Jun 17 12:22:22 embed-certs-136195 kubelet[940]: E0617 12:22:22.316303     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dmhfs" podUID="31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0"
	Jun 17 12:22:36 embed-certs-136195 kubelet[940]: E0617 12:22:36.317707     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dmhfs" podUID="31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0"
	Jun 17 12:22:38 embed-certs-136195 kubelet[940]: E0617 12:22:38.339380     940 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 17 12:22:38 embed-certs-136195 kubelet[940]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 17 12:22:38 embed-certs-136195 kubelet[940]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 17 12:22:38 embed-certs-136195 kubelet[940]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 12:22:38 embed-certs-136195 kubelet[940]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 17 12:22:47 embed-certs-136195 kubelet[940]: E0617 12:22:47.331384     940 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jun 17 12:22:47 embed-certs-136195 kubelet[940]: E0617 12:22:47.331579     940 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jun 17 12:22:47 embed-certs-136195 kubelet[940]: E0617 12:22:47.332462     940 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d4j4w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recurs
iveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false
,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-dmhfs_kube-system(31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jun 17 12:22:47 embed-certs-136195 kubelet[940]: E0617 12:22:47.332636     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-dmhfs" podUID="31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0"
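	Two separate issues repeat in this kubelet log. The ip6tables canary failure means the guest kernel has no ip6table_nat support, which is harmless on an IPv4-only cluster, and the metrics-server pull failure is against the registry fake.domain, which looks deliberately unresolvable (a stand-in registry injected by the test setup). Quick checks, with the deployment name metrics-server assumed:
	
	  # is the ip6 NAT table module loaded in the guest at all?
	  out/minikube-linux-amd64 -p embed-certs-136195 ssh "lsmod | grep ip6table || echo ip6table_nat not loaded"
	  # which image is the addon actually configured to pull?
	  kubectl --context embed-certs-136195 -n kube-system get deploy metrics-server \
	    -o jsonpath='{.spec.template.spec.containers[0].image}'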
	
	
	==> storage-provisioner [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92] <==
	I0617 12:01:44.533035       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0617 12:01:44.544756       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0617 12:01:44.544965       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0617 12:02:01.950535       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0617 12:02:01.950777       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-136195_206e0fc6-44a6-4e2b-90d8-19619e77516b!
	I0617 12:02:01.952621       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"eaa2d4c6-0454-437c-9a6d-480f4e6de3d9", APIVersion:"v1", ResourceVersion:"565", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-136195_206e0fc6-44a6-4e2b-90d8-19619e77516b became leader
	I0617 12:02:02.051343       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-136195_206e0fc6-44a6-4e2b-90d8-19619e77516b!
	
	
	==> storage-provisioner [7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36] <==
	I0617 12:01:43.917807       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0617 12:01:43.921755       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
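	This first post-restart provisioner attempt died because the in-cluster apiserver VIP (10.96.0.1:443) was not reachable yet; the second attempt (02e13a25f376f above) acquired the lease at 12:02:01 and has been running since, so nothing here needs fixing. The service and endpoints behind that VIP can be confirmed with (context name from this log):
	
	  kubectl --context embed-certs-136195 get svc kubernetes
	  kubectl --context embed-certs-136195 get endpoints kubernetes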
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-136195 -n embed-certs-136195
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-136195 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-dmhfs
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-136195 describe pod metrics-server-569cc877fc-dmhfs
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-136195 describe pod metrics-server-569cc877fc-dmhfs: exit status 1 (64.218572ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-dmhfs" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-136195 describe pod metrics-server-569cc877fc-dmhfs: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (462.64s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.81s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-991309 -n default-k8s-diff-port-991309
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-06-17 12:24:57.008210895 +0000 UTC m=+6042.851769046
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-991309 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-991309 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (83.873015ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): namespaces "kubernetes-dashboard" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-991309 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
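A likely reading of the empty deployment info and the NotFound errors above is that the kubernetes-dashboard namespace was never created, i.e. the dashboard addon was not (or not successfully) enabled on this profile before the check ran. Two quick ways to confirm the addon state, using the profile and context names from this log:

	out/minikube-linux-amd64 -p default-k8s-diff-port-991309 addons list
	kubectl --context default-k8s-diff-port-991309 get ns kubernetes-dashboard --ignore-not-found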
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-991309 -n default-k8s-diff-port-991309
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-991309 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-991309 logs -n 25: (1.578483554s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-253383 sudo                               | kindnet-253383 | jenkins | v1.33.1 | 17 Jun 24 12:24 UTC | 17 Jun 24 12:24 UTC |
	|         | systemctl status kubelet --all                       |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p kindnet-253383 sudo                               | kindnet-253383 | jenkins | v1.33.1 | 17 Jun 24 12:24 UTC | 17 Jun 24 12:24 UTC |
	|         | systemctl cat kubelet                                |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p kindnet-253383 sudo                               | kindnet-253383 | jenkins | v1.33.1 | 17 Jun 24 12:24 UTC | 17 Jun 24 12:24 UTC |
	|         | journalctl -xeu kubelet --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p kindnet-253383 sudo cat                           | kindnet-253383 | jenkins | v1.33.1 | 17 Jun 24 12:24 UTC | 17 Jun 24 12:24 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |         |                     |                     |
	| ssh     | -p kindnet-253383 sudo cat                           | kindnet-253383 | jenkins | v1.33.1 | 17 Jun 24 12:24 UTC | 17 Jun 24 12:24 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                |         |         |                     |                     |
	| ssh     | -p kindnet-253383 sudo                               | kindnet-253383 | jenkins | v1.33.1 | 17 Jun 24 12:24 UTC |                     |
	|         | systemctl status docker --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p kindnet-253383 sudo                               | kindnet-253383 | jenkins | v1.33.1 | 17 Jun 24 12:24 UTC | 17 Jun 24 12:24 UTC |
	|         | systemctl cat docker                                 |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p kindnet-253383 sudo cat                           | kindnet-253383 | jenkins | v1.33.1 | 17 Jun 24 12:24 UTC | 17 Jun 24 12:24 UTC |
	|         | /etc/docker/daemon.json                              |                |         |         |                     |                     |
	| ssh     | -p kindnet-253383 sudo docker                        | kindnet-253383 | jenkins | v1.33.1 | 17 Jun 24 12:24 UTC |                     |
	|         | system info                                          |                |         |         |                     |                     |
	| ssh     | -p kindnet-253383 sudo                               | kindnet-253383 | jenkins | v1.33.1 | 17 Jun 24 12:24 UTC |                     |
	|         | systemctl status cri-docker                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p kindnet-253383 sudo                               | kindnet-253383 | jenkins | v1.33.1 | 17 Jun 24 12:24 UTC | 17 Jun 24 12:24 UTC |
	|         | systemctl cat cri-docker                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p kindnet-253383 sudo cat                           | kindnet-253383 | jenkins | v1.33.1 | 17 Jun 24 12:24 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p kindnet-253383 sudo cat                           | kindnet-253383 | jenkins | v1.33.1 | 17 Jun 24 12:24 UTC | 17 Jun 24 12:24 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p kindnet-253383 sudo                               | kindnet-253383 | jenkins | v1.33.1 | 17 Jun 24 12:24 UTC | 17 Jun 24 12:24 UTC |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	| ssh     | -p kindnet-253383 sudo                               | kindnet-253383 | jenkins | v1.33.1 | 17 Jun 24 12:24 UTC |                     |
	|         | systemctl status containerd                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p kindnet-253383 sudo                               | kindnet-253383 | jenkins | v1.33.1 | 17 Jun 24 12:24 UTC | 17 Jun 24 12:24 UTC |
	|         | systemctl cat containerd                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p kindnet-253383 sudo cat                           | kindnet-253383 | jenkins | v1.33.1 | 17 Jun 24 12:24 UTC | 17 Jun 24 12:24 UTC |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |                     |
	| ssh     | -p kindnet-253383 sudo cat                           | kindnet-253383 | jenkins | v1.33.1 | 17 Jun 24 12:24 UTC | 17 Jun 24 12:24 UTC |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |                     |
	| ssh     | -p kindnet-253383 sudo                               | kindnet-253383 | jenkins | v1.33.1 | 17 Jun 24 12:24 UTC | 17 Jun 24 12:24 UTC |
	|         | containerd config dump                               |                |         |         |                     |                     |
	| ssh     | -p kindnet-253383 sudo                               | kindnet-253383 | jenkins | v1.33.1 | 17 Jun 24 12:24 UTC | 17 Jun 24 12:24 UTC |
	|         | systemctl status crio --all                          |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p kindnet-253383 sudo                               | kindnet-253383 | jenkins | v1.33.1 | 17 Jun 24 12:24 UTC | 17 Jun 24 12:24 UTC |
	|         | systemctl cat crio --no-pager                        |                |         |         |                     |                     |
	| ssh     | -p kindnet-253383 sudo find                          | kindnet-253383 | jenkins | v1.33.1 | 17 Jun 24 12:24 UTC | 17 Jun 24 12:24 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p kindnet-253383 sudo crio                          | kindnet-253383 | jenkins | v1.33.1 | 17 Jun 24 12:24 UTC | 17 Jun 24 12:24 UTC |
	|         | config                                               |                |         |         |                     |                     |
	| delete  | -p kindnet-253383                                    | kindnet-253383 | jenkins | v1.33.1 | 17 Jun 24 12:24 UTC | 17 Jun 24 12:24 UTC |
	| start   | -p bridge-253383 --memory=3072                       | bridge-253383  | jenkins | v1.33.1 | 17 Jun 24 12:24 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                |         |         |                     |                     |
	|         | --cni=bridge --driver=kvm2                           |                |         |         |                     |                     |
	|         | --container-runtime=crio                             |                |         |         |                     |                     |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/17 12:24:37
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0617 12:24:37.961688  177757 out.go:291] Setting OutFile to fd 1 ...
	I0617 12:24:37.961829  177757 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 12:24:37.961840  177757 out.go:304] Setting ErrFile to fd 2...
	I0617 12:24:37.961846  177757 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 12:24:37.962129  177757 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 12:24:37.962980  177757 out.go:298] Setting JSON to false
	I0617 12:24:37.964506  177757 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7625,"bootTime":1718619453,"procs":292,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0617 12:24:37.964599  177757 start.go:139] virtualization: kvm guest
	I0617 12:24:37.967083  177757 out.go:177] * [bridge-253383] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0617 12:24:37.968524  177757 out.go:177]   - MINIKUBE_LOCATION=19084
	I0617 12:24:37.969846  177757 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 12:24:37.968553  177757 notify.go:220] Checking for updates...
	I0617 12:24:37.972949  177757 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 12:24:37.974061  177757 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 12:24:37.975496  177757 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0617 12:24:37.977246  177757 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 12:24:37.979275  177757 config.go:182] Loaded profile config "calico-253383": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:24:37.979474  177757 config.go:182] Loaded profile config "custom-flannel-253383": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:24:37.979608  177757 config.go:182] Loaded profile config "default-k8s-diff-port-991309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:24:37.979773  177757 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 12:24:38.026106  177757 out.go:177] * Using the kvm2 driver based on user configuration
	I0617 12:24:38.027397  177757 start.go:297] selected driver: kvm2
	I0617 12:24:38.027418  177757 start.go:901] validating driver "kvm2" against <nil>
	I0617 12:24:38.027433  177757 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 12:24:38.028484  177757 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 12:24:38.028576  177757 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19084-112967/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0617 12:24:38.045373  177757 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0617 12:24:38.045445  177757 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 12:24:38.045754  177757 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 12:24:38.045844  177757 cni.go:84] Creating CNI manager for "bridge"
	I0617 12:24:38.045862  177757 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0617 12:24:38.045956  177757 start.go:340] cluster config:
	{Name:bridge-253383 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:bridge-253383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:24:38.046123  177757 iso.go:125] acquiring lock: {Name:mk4a199ad46ed9ee04de7b54caf7cc64218fe80c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 12:24:38.047898  177757 out.go:177] * Starting "bridge-253383" primary control-plane node in "bridge-253383" cluster
	I0617 12:24:35.176590  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:35.177175  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | unable to find current IP address of domain custom-flannel-253383 in network mk-custom-flannel-253383
	I0617 12:24:35.177205  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | I0617 12:24:35.177126  176622 retry.go:31] will retry after 1.508489485s: waiting for machine to come up
	I0617 12:24:36.686911  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:36.687415  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | unable to find current IP address of domain custom-flannel-253383 in network mk-custom-flannel-253383
	I0617 12:24:36.687440  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | I0617 12:24:36.687384  176622 retry.go:31] will retry after 1.486089167s: waiting for machine to come up
	I0617 12:24:38.175538  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:38.176224  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | unable to find current IP address of domain custom-flannel-253383 in network mk-custom-flannel-253383
	I0617 12:24:38.176256  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | I0617 12:24:38.176169  176622 retry.go:31] will retry after 2.043460707s: waiting for machine to come up
	I0617 12:24:35.313324  175636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 12:24:35.384879  175636 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 12:24:35.403556  175636 ssh_runner.go:195] Run: openssl version
	I0617 12:24:35.410001  175636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 12:24:35.422371  175636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 12:24:35.427510  175636 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 12:24:35.427564  175636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 12:24:35.435182  175636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
	I0617 12:24:35.447443  175636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 12:24:35.460887  175636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 12:24:35.466094  175636 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 12:24:35.466150  175636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 12:24:35.474267  175636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 12:24:35.487722  175636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 12:24:35.499712  175636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:24:35.504909  175636 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:24:35.505008  175636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:24:35.512963  175636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 12:24:35.528236  175636 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 12:24:35.532840  175636 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0617 12:24:35.532901  175636 kubeadm.go:391] StartCluster: {Name:calico-253383 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 C
lusterName:calico-253383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:24:35.533012  175636 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 12:24:35.533092  175636 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:24:35.577280  175636 cri.go:89] found id: ""
	I0617 12:24:35.577346  175636 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0617 12:24:35.590006  175636 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:24:35.602341  175636 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:24:35.613748  175636 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:24:35.613775  175636 kubeadm.go:156] found existing configuration files:
	
	I0617 12:24:35.613834  175636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:24:35.624429  175636 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:24:35.624511  175636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:24:35.634927  175636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:24:35.645969  175636 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:24:35.646030  175636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:24:35.659945  175636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:24:35.673574  175636 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:24:35.673641  175636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:24:35.687882  175636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:24:35.699194  175636 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:24:35.699250  175636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 12:24:35.709843  175636 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0617 12:24:35.915221  175636 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0617 12:24:38.049095  177757 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 12:24:38.049145  177757 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0617 12:24:38.049159  177757 cache.go:56] Caching tarball of preloaded images
	I0617 12:24:38.049264  177757 preload.go:173] Found /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0617 12:24:38.049280  177757 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0617 12:24:38.049406  177757 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/bridge-253383/config.json ...
	I0617 12:24:38.049433  177757 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/bridge-253383/config.json: {Name:mkd5f2f7cbb79b6d3596489f454cd753a36aad46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:24:38.049594  177757 start.go:360] acquireMachinesLock for bridge-253383: {Name:mk519b8956d160a9d2b042f25b899a5ee0efa72e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 12:24:40.221892  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:40.222537  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | unable to find current IP address of domain custom-flannel-253383 in network mk-custom-flannel-253383
	I0617 12:24:40.222570  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | I0617 12:24:40.222490  176622 retry.go:31] will retry after 2.455795518s: waiting for machine to come up
	I0617 12:24:42.680629  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:42.681101  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | unable to find current IP address of domain custom-flannel-253383 in network mk-custom-flannel-253383
	I0617 12:24:42.681130  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | I0617 12:24:42.681050  176622 retry.go:31] will retry after 4.053126853s: waiting for machine to come up
	I0617 12:24:46.658553  175636 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0617 12:24:46.658608  175636 kubeadm.go:309] [preflight] Running pre-flight checks
	I0617 12:24:46.658683  175636 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0617 12:24:46.658785  175636 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0617 12:24:46.658916  175636 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0617 12:24:46.659050  175636 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0617 12:24:46.660869  175636 out.go:204]   - Generating certificates and keys ...
	I0617 12:24:46.660971  175636 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0617 12:24:46.661047  175636 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0617 12:24:46.661142  175636 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0617 12:24:46.661234  175636 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0617 12:24:46.661322  175636 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0617 12:24:46.661371  175636 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0617 12:24:46.661419  175636 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0617 12:24:46.661521  175636 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [calico-253383 localhost] and IPs [192.168.39.244 127.0.0.1 ::1]
	I0617 12:24:46.661599  175636 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0617 12:24:46.661770  175636 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [calico-253383 localhost] and IPs [192.168.39.244 127.0.0.1 ::1]
	I0617 12:24:46.661844  175636 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0617 12:24:46.661949  175636 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0617 12:24:46.662027  175636 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0617 12:24:46.662100  175636 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0617 12:24:46.662171  175636 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0617 12:24:46.662257  175636 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0617 12:24:46.662335  175636 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0617 12:24:46.662427  175636 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0617 12:24:46.662510  175636 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0617 12:24:46.662615  175636 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0617 12:24:46.662708  175636 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0617 12:24:46.664141  175636 out.go:204]   - Booting up control plane ...
	I0617 12:24:46.664241  175636 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0617 12:24:46.664329  175636 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0617 12:24:46.664404  175636 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0617 12:24:46.664534  175636 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0617 12:24:46.664669  175636 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0617 12:24:46.664724  175636 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0617 12:24:46.664880  175636 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0617 12:24:46.664948  175636 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0617 12:24:46.665000  175636 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.870419ms
	I0617 12:24:46.665080  175636 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0617 12:24:46.665171  175636 kubeadm.go:309] [api-check] The API server is healthy after 5.502996681s
	I0617 12:24:46.665290  175636 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0617 12:24:46.665430  175636 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0617 12:24:46.665510  175636 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0617 12:24:46.665690  175636 kubeadm.go:309] [mark-control-plane] Marking the node calico-253383 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0617 12:24:46.665759  175636 kubeadm.go:309] [bootstrap-token] Using token: fsdi1m.2ofaetuklz7eewge
	I0617 12:24:46.667201  175636 out.go:204]   - Configuring RBAC rules ...
	I0617 12:24:46.667294  175636 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0617 12:24:46.667375  175636 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0617 12:24:46.667511  175636 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0617 12:24:46.667620  175636 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0617 12:24:46.667722  175636 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0617 12:24:46.667817  175636 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0617 12:24:46.667956  175636 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0617 12:24:46.668018  175636 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0617 12:24:46.668106  175636 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0617 12:24:46.668123  175636 kubeadm.go:309] 
	I0617 12:24:46.668192  175636 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0617 12:24:46.668202  175636 kubeadm.go:309] 
	I0617 12:24:46.668305  175636 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0617 12:24:46.668313  175636 kubeadm.go:309] 
	I0617 12:24:46.668362  175636 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0617 12:24:46.668444  175636 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0617 12:24:46.668517  175636 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0617 12:24:46.668529  175636 kubeadm.go:309] 
	I0617 12:24:46.668612  175636 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0617 12:24:46.668631  175636 kubeadm.go:309] 
	I0617 12:24:46.668673  175636 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0617 12:24:46.668679  175636 kubeadm.go:309] 
	I0617 12:24:46.668726  175636 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0617 12:24:46.668787  175636 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0617 12:24:46.668845  175636 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0617 12:24:46.668849  175636 kubeadm.go:309] 
	I0617 12:24:46.668935  175636 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0617 12:24:46.669012  175636 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0617 12:24:46.669020  175636 kubeadm.go:309] 
	I0617 12:24:46.669086  175636 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token fsdi1m.2ofaetuklz7eewge \
	I0617 12:24:46.669177  175636 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a750c130b3df91ed6d57229f5a5d5a2ee0acd56a757f499599f368bc07dbf207 \
	I0617 12:24:46.669197  175636 kubeadm.go:309] 	--control-plane 
	I0617 12:24:46.669201  175636 kubeadm.go:309] 
	I0617 12:24:46.669276  175636 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0617 12:24:46.669283  175636 kubeadm.go:309] 
	I0617 12:24:46.669363  175636 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token fsdi1m.2ofaetuklz7eewge \
	I0617 12:24:46.669488  175636 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a750c130b3df91ed6d57229f5a5d5a2ee0acd56a757f499599f368bc07dbf207 
	I0617 12:24:46.669503  175636 cni.go:84] Creating CNI manager for "calico"
	I0617 12:24:46.671763  175636 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0617 12:24:46.736725  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:46.737243  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | unable to find current IP address of domain custom-flannel-253383 in network mk-custom-flannel-253383
	I0617 12:24:46.737271  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | I0617 12:24:46.737192  176622 retry.go:31] will retry after 4.278696957s: waiting for machine to come up
	I0617 12:24:46.673383  175636 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0617 12:24:46.673405  175636 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (253815 bytes)
	I0617 12:24:46.702034  175636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0617 12:24:48.000097  175636 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.298021374s)
	I0617 12:24:48.000155  175636 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0617 12:24:48.000226  175636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:24:48.000305  175636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-253383 minikube.k8s.io/updated_at=2024_06_17T12_24_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6 minikube.k8s.io/name=calico-253383 minikube.k8s.io/primary=true
	I0617 12:24:48.149979  175636 ops.go:34] apiserver oom_adj: -16
	I0617 12:24:48.150152  175636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:24:48.650869  175636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:24:49.150406  175636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:24:49.650613  175636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:24:50.150881  175636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:24:52.660536  177757 start.go:364] duration metric: took 14.610898353s to acquireMachinesLock for "bridge-253383"
	I0617 12:24:52.660619  177757 start.go:93] Provisioning new machine with config: &{Name:bridge-253383 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.1 ClusterName:bridge-253383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 12:24:52.660882  177757 start.go:125] createHost starting for "" (driver="kvm2")
	I0617 12:24:52.663860  177757 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0617 12:24:52.664081  177757 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:24:52.664141  177757 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:24:52.684923  177757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36969
	I0617 12:24:52.685455  177757 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:24:52.686100  177757 main.go:141] libmachine: Using API Version  1
	I0617 12:24:52.686125  177757 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:24:52.686524  177757 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:24:52.686768  177757 main.go:141] libmachine: (bridge-253383) Calling .GetMachineName
	I0617 12:24:52.686934  177757 main.go:141] libmachine: (bridge-253383) Calling .DriverName
	I0617 12:24:52.687089  177757 start.go:159] libmachine.API.Create for "bridge-253383" (driver="kvm2")
	I0617 12:24:52.687132  177757 client.go:168] LocalClient.Create starting
	I0617 12:24:52.687167  177757 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem
	I0617 12:24:52.687202  177757 main.go:141] libmachine: Decoding PEM data...
	I0617 12:24:52.687222  177757 main.go:141] libmachine: Parsing certificate...
	I0617 12:24:52.687307  177757 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem
	I0617 12:24:52.687326  177757 main.go:141] libmachine: Decoding PEM data...
	I0617 12:24:52.687340  177757 main.go:141] libmachine: Parsing certificate...
	I0617 12:24:52.687361  177757 main.go:141] libmachine: Running pre-create checks...
	I0617 12:24:52.687378  177757 main.go:141] libmachine: (bridge-253383) Calling .PreCreateCheck
	I0617 12:24:52.687888  177757 main.go:141] libmachine: (bridge-253383) Calling .GetConfigRaw
	I0617 12:24:52.688343  177757 main.go:141] libmachine: Creating machine...
	I0617 12:24:52.688359  177757 main.go:141] libmachine: (bridge-253383) Calling .Create
	I0617 12:24:52.688489  177757 main.go:141] libmachine: (bridge-253383) Creating KVM machine...
	I0617 12:24:52.690034  177757 main.go:141] libmachine: (bridge-253383) DBG | found existing default KVM network
	I0617 12:24:52.691772  177757 main.go:141] libmachine: (bridge-253383) DBG | I0617 12:24:52.691597  177899 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:7e:14:38} reservation:<nil>}
	I0617 12:24:52.692868  177757 main.go:141] libmachine: (bridge-253383) DBG | I0617 12:24:52.692768  177899 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:2d:d3:23} reservation:<nil>}
	I0617 12:24:52.693909  177757 main.go:141] libmachine: (bridge-253383) DBG | I0617 12:24:52.693823  177899 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:39:cc:35} reservation:<nil>}
	I0617 12:24:52.695008  177757 main.go:141] libmachine: (bridge-253383) DBG | I0617 12:24:52.694920  177899 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000334bb0}
	I0617 12:24:52.695044  177757 main.go:141] libmachine: (bridge-253383) DBG | created network xml: 
	I0617 12:24:52.695053  177757 main.go:141] libmachine: (bridge-253383) DBG | <network>
	I0617 12:24:52.695066  177757 main.go:141] libmachine: (bridge-253383) DBG |   <name>mk-bridge-253383</name>
	I0617 12:24:52.695079  177757 main.go:141] libmachine: (bridge-253383) DBG |   <dns enable='no'/>
	I0617 12:24:52.695092  177757 main.go:141] libmachine: (bridge-253383) DBG |   
	I0617 12:24:52.695100  177757 main.go:141] libmachine: (bridge-253383) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0617 12:24:52.695110  177757 main.go:141] libmachine: (bridge-253383) DBG |     <dhcp>
	I0617 12:24:52.695119  177757 main.go:141] libmachine: (bridge-253383) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0617 12:24:52.695128  177757 main.go:141] libmachine: (bridge-253383) DBG |     </dhcp>
	I0617 12:24:52.695135  177757 main.go:141] libmachine: (bridge-253383) DBG |   </ip>
	I0617 12:24:52.695147  177757 main.go:141] libmachine: (bridge-253383) DBG |   
	I0617 12:24:52.695153  177757 main.go:141] libmachine: (bridge-253383) DBG | </network>
	I0617 12:24:52.695165  177757 main.go:141] libmachine: (bridge-253383) DBG | 
	I0617 12:24:52.700608  177757 main.go:141] libmachine: (bridge-253383) DBG | trying to create private KVM network mk-bridge-253383 192.168.72.0/24...
	I0617 12:24:52.782209  177757 main.go:141] libmachine: (bridge-253383) DBG | private KVM network mk-bridge-253383 192.168.72.0/24 created
	I0617 12:24:52.782239  177757 main.go:141] libmachine: (bridge-253383) Setting up store path in /home/jenkins/minikube-integration/19084-112967/.minikube/machines/bridge-253383 ...
	I0617 12:24:52.782248  177757 main.go:141] libmachine: (bridge-253383) DBG | I0617 12:24:52.782184  177899 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 12:24:52.782275  177757 main.go:141] libmachine: (bridge-253383) Building disk image from file:///home/jenkins/minikube-integration/19084-112967/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso
	I0617 12:24:52.782360  177757 main.go:141] libmachine: (bridge-253383) Downloading /home/jenkins/minikube-integration/19084-112967/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19084-112967/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso...
	I0617 12:24:51.018135  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:51.018657  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has current primary IP address 192.168.61.105 and MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:51.018687  176137 main.go:141] libmachine: (custom-flannel-253383) Found IP for machine: 192.168.61.105
	I0617 12:24:51.018704  176137 main.go:141] libmachine: (custom-flannel-253383) Reserving static IP address...
	I0617 12:24:51.019158  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | unable to find host DHCP lease matching {name: "custom-flannel-253383", mac: "52:54:00:d6:f7:ee", ip: "192.168.61.105"} in network mk-custom-flannel-253383
	I0617 12:24:51.102692  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | Getting to WaitForSSH function...
	I0617 12:24:51.102725  176137 main.go:141] libmachine: (custom-flannel-253383) Reserved static IP address: 192.168.61.105
	I0617 12:24:51.102737  176137 main.go:141] libmachine: (custom-flannel-253383) Waiting for SSH to be available...
	I0617 12:24:51.105701  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:51.106200  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:f7:ee", ip: ""} in network mk-custom-flannel-253383: {Iface:virbr3 ExpiryTime:2024-06-17 13:24:44 +0000 UTC Type:0 Mac:52:54:00:d6:f7:ee Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d6:f7:ee}
	I0617 12:24:51.106229  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined IP address 192.168.61.105 and MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:51.106384  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | Using SSH client type: external
	I0617 12:24:51.106412  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/custom-flannel-253383/id_rsa (-rw-------)
	I0617 12:24:51.106442  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/custom-flannel-253383/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 12:24:51.106455  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | About to run SSH command:
	I0617 12:24:51.106468  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | exit 0
	I0617 12:24:51.239907  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | SSH cmd err, output: <nil>: 
	I0617 12:24:51.240174  176137 main.go:141] libmachine: (custom-flannel-253383) KVM machine creation complete!
	I0617 12:24:51.240511  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetConfigRaw
	I0617 12:24:51.241055  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .DriverName
	I0617 12:24:51.241232  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .DriverName
	I0617 12:24:51.241357  176137 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0617 12:24:51.241368  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetState
	I0617 12:24:51.242745  176137 main.go:141] libmachine: Detecting operating system of created instance...
	I0617 12:24:51.242764  176137 main.go:141] libmachine: Waiting for SSH to be available...
	I0617 12:24:51.242772  176137 main.go:141] libmachine: Getting to WaitForSSH function...
	I0617 12:24:51.242782  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHHostname
	I0617 12:24:51.244901  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:51.245263  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:f7:ee", ip: ""} in network mk-custom-flannel-253383: {Iface:virbr3 ExpiryTime:2024-06-17 13:24:44 +0000 UTC Type:0 Mac:52:54:00:d6:f7:ee Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:custom-flannel-253383 Clientid:01:52:54:00:d6:f7:ee}
	I0617 12:24:51.245313  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined IP address 192.168.61.105 and MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:51.245507  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHPort
	I0617 12:24:51.245704  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHKeyPath
	I0617 12:24:51.245864  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHKeyPath
	I0617 12:24:51.246012  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHUsername
	I0617 12:24:51.246174  176137 main.go:141] libmachine: Using SSH client type: native
	I0617 12:24:51.246435  176137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I0617 12:24:51.246450  176137 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0617 12:24:51.358893  176137 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 12:24:51.358920  176137 main.go:141] libmachine: Detecting the provisioner...
	I0617 12:24:51.358928  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHHostname
	I0617 12:24:51.361938  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:51.362354  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:f7:ee", ip: ""} in network mk-custom-flannel-253383: {Iface:virbr3 ExpiryTime:2024-06-17 13:24:44 +0000 UTC Type:0 Mac:52:54:00:d6:f7:ee Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:custom-flannel-253383 Clientid:01:52:54:00:d6:f7:ee}
	I0617 12:24:51.362382  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined IP address 192.168.61.105 and MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:51.362596  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHPort
	I0617 12:24:51.362840  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHKeyPath
	I0617 12:24:51.363043  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHKeyPath
	I0617 12:24:51.363179  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHUsername
	I0617 12:24:51.363373  176137 main.go:141] libmachine: Using SSH client type: native
	I0617 12:24:51.363623  176137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I0617 12:24:51.363643  176137 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0617 12:24:51.476341  176137 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0617 12:24:51.476439  176137 main.go:141] libmachine: found compatible host: buildroot
	I0617 12:24:51.476453  176137 main.go:141] libmachine: Provisioning with buildroot...
	I0617 12:24:51.476462  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetMachineName
	I0617 12:24:51.476707  176137 buildroot.go:166] provisioning hostname "custom-flannel-253383"
	I0617 12:24:51.476741  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetMachineName
	I0617 12:24:51.476904  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHHostname
	I0617 12:24:51.479807  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:51.480154  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:f7:ee", ip: ""} in network mk-custom-flannel-253383: {Iface:virbr3 ExpiryTime:2024-06-17 13:24:44 +0000 UTC Type:0 Mac:52:54:00:d6:f7:ee Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:custom-flannel-253383 Clientid:01:52:54:00:d6:f7:ee}
	I0617 12:24:51.480186  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined IP address 192.168.61.105 and MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:51.480267  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHPort
	I0617 12:24:51.480470  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHKeyPath
	I0617 12:24:51.480632  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHKeyPath
	I0617 12:24:51.480777  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHUsername
	I0617 12:24:51.480961  176137 main.go:141] libmachine: Using SSH client type: native
	I0617 12:24:51.481180  176137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I0617 12:24:51.481198  176137 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-253383 && echo "custom-flannel-253383" | sudo tee /etc/hostname
	I0617 12:24:51.607108  176137 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-253383
	
	I0617 12:24:51.607170  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHHostname
	I0617 12:24:51.610124  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:51.610521  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:f7:ee", ip: ""} in network mk-custom-flannel-253383: {Iface:virbr3 ExpiryTime:2024-06-17 13:24:44 +0000 UTC Type:0 Mac:52:54:00:d6:f7:ee Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:custom-flannel-253383 Clientid:01:52:54:00:d6:f7:ee}
	I0617 12:24:51.610566  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined IP address 192.168.61.105 and MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:51.610754  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHPort
	I0617 12:24:51.610974  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHKeyPath
	I0617 12:24:51.611181  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHKeyPath
	I0617 12:24:51.611307  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHUsername
	I0617 12:24:51.611511  176137 main.go:141] libmachine: Using SSH client type: native
	I0617 12:24:51.611722  176137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I0617 12:24:51.611739  176137 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-253383' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-253383/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-253383' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 12:24:51.734631  176137 main.go:141] libmachine: SSH cmd err, output: <nil>: 
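	The shell snippet above (whose empty output is logged here) is the provisioner making sure /etc/hosts maps 127.0.1.1 to the new machine name. As a minimal illustration only, not minikube's actual implementation, the Go sketch below assembles that same snippet for an arbitrary hostname:

package main

import "fmt"

// hostsFixupCmd builds the shell snippet seen in the log: add or rewrite
// the 127.0.1.1 entry in /etc/hosts so it points at the node's hostname.
func hostsFixupCmd(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
			else
				echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname, hostname, hostname)
}

func main() {
	fmt.Println(hostsFixupCmd("custom-flannel-253383"))
}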
	I0617 12:24:51.734675  176137 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 12:24:51.734698  176137 buildroot.go:174] setting up certificates
	I0617 12:24:51.734711  176137 provision.go:84] configureAuth start
	I0617 12:24:51.734726  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetMachineName
	I0617 12:24:51.735068  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetIP
	I0617 12:24:51.738293  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:51.738624  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:f7:ee", ip: ""} in network mk-custom-flannel-253383: {Iface:virbr3 ExpiryTime:2024-06-17 13:24:44 +0000 UTC Type:0 Mac:52:54:00:d6:f7:ee Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:custom-flannel-253383 Clientid:01:52:54:00:d6:f7:ee}
	I0617 12:24:51.738655  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined IP address 192.168.61.105 and MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:51.738779  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHHostname
	I0617 12:24:51.741035  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:51.741352  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:f7:ee", ip: ""} in network mk-custom-flannel-253383: {Iface:virbr3 ExpiryTime:2024-06-17 13:24:44 +0000 UTC Type:0 Mac:52:54:00:d6:f7:ee Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:custom-flannel-253383 Clientid:01:52:54:00:d6:f7:ee}
	I0617 12:24:51.741394  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined IP address 192.168.61.105 and MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:51.741497  176137 provision.go:143] copyHostCerts
	I0617 12:24:51.741607  176137 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 12:24:51.741623  176137 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 12:24:51.741692  176137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 12:24:51.741800  176137 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 12:24:51.741813  176137 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 12:24:51.741843  176137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 12:24:51.741901  176137 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 12:24:51.741908  176137 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 12:24:51.741926  176137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 12:24:51.741969  176137 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-253383 san=[127.0.0.1 192.168.61.105 custom-flannel-253383 localhost minikube]
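	The provision.go:117 line above records the SAN list baked into the generated server certificate: 127.0.0.1, 192.168.61.105, custom-flannel-253383, localhost and minikube. The sketch below is only a generic stand-in using Go's crypto/x509 with that SAN list; it self-signs for brevity, whereas the log shows server.pem being signed by the minikube CA key, so treat everything except the SANs and org as assumptions:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs and org copied from the provision.go:117 line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.custom-flannel-253383"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour), // validity period is an assumption
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"custom-flannel-253383", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.105")},
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Self-signed here; the logged flow signs with the minikube CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}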
	I0617 12:24:51.939949  176137 provision.go:177] copyRemoteCerts
	I0617 12:24:51.940013  176137 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 12:24:51.940043  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHHostname
	I0617 12:24:51.942470  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:51.942902  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:f7:ee", ip: ""} in network mk-custom-flannel-253383: {Iface:virbr3 ExpiryTime:2024-06-17 13:24:44 +0000 UTC Type:0 Mac:52:54:00:d6:f7:ee Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:custom-flannel-253383 Clientid:01:52:54:00:d6:f7:ee}
	I0617 12:24:51.942938  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined IP address 192.168.61.105 and MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:51.943220  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHPort
	I0617 12:24:51.943429  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHKeyPath
	I0617 12:24:51.943604  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHUsername
	I0617 12:24:51.943790  176137 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/custom-flannel-253383/id_rsa Username:docker}
	I0617 12:24:52.031999  176137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0617 12:24:52.058892  176137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0617 12:24:52.085612  176137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 12:24:52.110955  176137 provision.go:87] duration metric: took 376.223003ms to configureAuth
	I0617 12:24:52.111002  176137 buildroot.go:189] setting minikube options for container-runtime
	I0617 12:24:52.111164  176137 config.go:182] Loaded profile config "custom-flannel-253383": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:24:52.111249  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHHostname
	I0617 12:24:52.114158  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:52.114519  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:f7:ee", ip: ""} in network mk-custom-flannel-253383: {Iface:virbr3 ExpiryTime:2024-06-17 13:24:44 +0000 UTC Type:0 Mac:52:54:00:d6:f7:ee Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:custom-flannel-253383 Clientid:01:52:54:00:d6:f7:ee}
	I0617 12:24:52.114549  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined IP address 192.168.61.105 and MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:52.114791  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHPort
	I0617 12:24:52.114986  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHKeyPath
	I0617 12:24:52.115132  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHKeyPath
	I0617 12:24:52.115340  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHUsername
	I0617 12:24:52.115609  176137 main.go:141] libmachine: Using SSH client type: native
	I0617 12:24:52.115809  176137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I0617 12:24:52.115830  176137 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 12:24:52.404101  176137 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 12:24:52.404132  176137 main.go:141] libmachine: Checking connection to Docker...
	I0617 12:24:52.404143  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetURL
	I0617 12:24:52.405514  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | Using libvirt version 6000000
	I0617 12:24:52.407757  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:52.408179  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:f7:ee", ip: ""} in network mk-custom-flannel-253383: {Iface:virbr3 ExpiryTime:2024-06-17 13:24:44 +0000 UTC Type:0 Mac:52:54:00:d6:f7:ee Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:custom-flannel-253383 Clientid:01:52:54:00:d6:f7:ee}
	I0617 12:24:52.408211  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined IP address 192.168.61.105 and MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:52.408403  176137 main.go:141] libmachine: Docker is up and running!
	I0617 12:24:52.408419  176137 main.go:141] libmachine: Reticulating splines...
	I0617 12:24:52.408426  176137 client.go:171] duration metric: took 24.358362357s to LocalClient.Create
	I0617 12:24:52.408451  176137 start.go:167] duration metric: took 24.358430343s to libmachine.API.Create "custom-flannel-253383"
	I0617 12:24:52.408466  176137 start.go:293] postStartSetup for "custom-flannel-253383" (driver="kvm2")
	I0617 12:24:52.408481  176137 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 12:24:52.408498  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .DriverName
	I0617 12:24:52.408757  176137 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 12:24:52.408780  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHHostname
	I0617 12:24:52.411100  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:52.411498  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:f7:ee", ip: ""} in network mk-custom-flannel-253383: {Iface:virbr3 ExpiryTime:2024-06-17 13:24:44 +0000 UTC Type:0 Mac:52:54:00:d6:f7:ee Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:custom-flannel-253383 Clientid:01:52:54:00:d6:f7:ee}
	I0617 12:24:52.411541  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined IP address 192.168.61.105 and MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:52.411797  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHPort
	I0617 12:24:52.411993  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHKeyPath
	I0617 12:24:52.412173  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHUsername
	I0617 12:24:52.412346  176137 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/custom-flannel-253383/id_rsa Username:docker}
	I0617 12:24:52.498808  176137 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 12:24:52.503387  176137 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 12:24:52.503415  176137 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 12:24:52.503496  176137 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 12:24:52.503577  176137 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 12:24:52.503682  176137 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 12:24:52.514000  176137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:24:52.540054  176137 start.go:296] duration metric: took 131.569938ms for postStartSetup
	I0617 12:24:52.540116  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetConfigRaw
	I0617 12:24:52.540795  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetIP
	I0617 12:24:52.543554  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:52.543988  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:f7:ee", ip: ""} in network mk-custom-flannel-253383: {Iface:virbr3 ExpiryTime:2024-06-17 13:24:44 +0000 UTC Type:0 Mac:52:54:00:d6:f7:ee Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:custom-flannel-253383 Clientid:01:52:54:00:d6:f7:ee}
	I0617 12:24:52.544026  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined IP address 192.168.61.105 and MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:52.544333  176137 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/custom-flannel-253383/config.json ...
	I0617 12:24:52.544588  176137 start.go:128] duration metric: took 24.519706954s to createHost
	I0617 12:24:52.544618  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHHostname
	I0617 12:24:52.546915  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:52.547291  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:f7:ee", ip: ""} in network mk-custom-flannel-253383: {Iface:virbr3 ExpiryTime:2024-06-17 13:24:44 +0000 UTC Type:0 Mac:52:54:00:d6:f7:ee Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:custom-flannel-253383 Clientid:01:52:54:00:d6:f7:ee}
	I0617 12:24:52.547319  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined IP address 192.168.61.105 and MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:52.547428  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHPort
	I0617 12:24:52.547656  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHKeyPath
	I0617 12:24:52.547830  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHKeyPath
	I0617 12:24:52.547969  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHUsername
	I0617 12:24:52.548128  176137 main.go:141] libmachine: Using SSH client type: native
	I0617 12:24:52.548290  176137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I0617 12:24:52.548301  176137 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0617 12:24:52.660365  176137 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718627092.613677812
	
	I0617 12:24:52.660392  176137 fix.go:216] guest clock: 1718627092.613677812
	I0617 12:24:52.660402  176137 fix.go:229] Guest: 2024-06-17 12:24:52.613677812 +0000 UTC Remote: 2024-06-17 12:24:52.544601116 +0000 UTC m=+42.825431716 (delta=69.076696ms)
	I0617 12:24:52.660431  176137 fix.go:200] guest clock delta is within tolerance: 69.076696ms
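	The fix.go lines above read the guest clock with date +%s.%N over SSH, compare it to the local timestamp, and accept the 69.076696ms delta as within tolerance. Below is a minimal sketch of that comparison using the values from the log; the 1-second tolerance is an assumption, since the log does not state the actual threshold:

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports the absolute guest/local clock difference and
// whether it falls inside the given tolerance.
func clockDeltaOK(guest, local time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(local)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1718627092, 613677812)       // "1718627092.613677812" from the log
	local := guest.Add(-69076696 * time.Nanosecond) // reproduces the logged 69.076696ms delta
	delta, ok := clockDeltaOK(guest, local, time.Second)
	fmt.Println(delta, ok) // 69.076696ms true
}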
	I0617 12:24:52.660438  176137 start.go:83] releasing machines lock for "custom-flannel-253383", held for 24.635729239s
	I0617 12:24:52.660471  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .DriverName
	I0617 12:24:52.660807  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetIP
	I0617 12:24:52.664401  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:52.664925  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:f7:ee", ip: ""} in network mk-custom-flannel-253383: {Iface:virbr3 ExpiryTime:2024-06-17 13:24:44 +0000 UTC Type:0 Mac:52:54:00:d6:f7:ee Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:custom-flannel-253383 Clientid:01:52:54:00:d6:f7:ee}
	I0617 12:24:52.664951  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined IP address 192.168.61.105 and MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:52.665178  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .DriverName
	I0617 12:24:52.665983  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .DriverName
	I0617 12:24:52.666229  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .DriverName
	I0617 12:24:52.666357  176137 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 12:24:52.666422  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHHostname
	I0617 12:24:52.666521  176137 ssh_runner.go:195] Run: cat /version.json
	I0617 12:24:52.666547  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHHostname
	I0617 12:24:52.669817  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:52.669959  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:52.670281  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:f7:ee", ip: ""} in network mk-custom-flannel-253383: {Iface:virbr3 ExpiryTime:2024-06-17 13:24:44 +0000 UTC Type:0 Mac:52:54:00:d6:f7:ee Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:custom-flannel-253383 Clientid:01:52:54:00:d6:f7:ee}
	I0617 12:24:52.670311  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined IP address 192.168.61.105 and MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:52.670345  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:f7:ee", ip: ""} in network mk-custom-flannel-253383: {Iface:virbr3 ExpiryTime:2024-06-17 13:24:44 +0000 UTC Type:0 Mac:52:54:00:d6:f7:ee Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:custom-flannel-253383 Clientid:01:52:54:00:d6:f7:ee}
	I0617 12:24:52.670365  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined IP address 192.168.61.105 and MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:52.670591  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHPort
	I0617 12:24:52.670617  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHPort
	I0617 12:24:52.670835  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHKeyPath
	I0617 12:24:52.670854  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHKeyPath
	I0617 12:24:52.671098  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHUsername
	I0617 12:24:52.671135  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetSSHUsername
	I0617 12:24:52.671264  176137 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/custom-flannel-253383/id_rsa Username:docker}
	I0617 12:24:52.671352  176137 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/custom-flannel-253383/id_rsa Username:docker}
	I0617 12:24:52.789115  176137 ssh_runner.go:195] Run: systemctl --version
	I0617 12:24:52.795882  176137 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 12:24:52.964530  176137 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 12:24:52.971443  176137 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 12:24:52.971568  176137 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 12:24:52.988658  176137 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
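	The find/mv invocation above is what disables the stock bridge and podman CNI configs by renaming them to *.mk_disabled (here it catches /etc/cni/net.d/87-podman-bridge.conflist). A rough in-process equivalent follows, as an illustration only, since minikube actually runs the shell command over SSH:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIConfigs renames bridge/podman CNI config files in dir
// to <name>.mk_disabled, mirroring the logged find/mv command.
func disableBridgeCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeCNIConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Println("disabled:", disabled)
}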
	I0617 12:24:52.988695  176137 start.go:494] detecting cgroup driver to use...
	I0617 12:24:52.988790  176137 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 12:24:53.006470  176137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 12:24:53.022828  176137 docker.go:217] disabling cri-docker service (if available) ...
	I0617 12:24:53.022890  176137 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 12:24:53.039654  176137 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 12:24:53.054631  176137 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 12:24:53.195119  176137 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 12:24:53.341306  176137 docker.go:233] disabling docker service ...
	I0617 12:24:53.341369  176137 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 12:24:53.361685  176137 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 12:24:53.377833  176137 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 12:24:53.519447  176137 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 12:24:53.679983  176137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 12:24:53.696859  176137 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 12:24:53.720360  176137 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0617 12:24:53.720431  176137 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:24:53.731360  176137 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 12:24:53.731596  176137 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:24:53.754716  176137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:24:53.767443  176137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:24:53.780515  176137 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 12:24:53.793562  176137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:24:53.806423  176137 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:24:53.827269  176137 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
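	The sed calls above edit /etc/crio/crio.conf.d/02-crio.conf in place: pause_image becomes registry.k8s.io/pause:3.9, cgroup_manager becomes cgroupfs, and conmon_cgroup is reset to "pod". The sketch below replays the first four of those edits with Go regexps on a made-up sample config; the sample contents are an assumption, only the replacement patterns come from the log:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical sample config; the real file on the VM is not shown in the log.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.8"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// 1. Point CRI-O at the pause image used by this Kubernetes version.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// 2. Switch the cgroup driver to cgroupfs.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// 3. Drop any existing conmon_cgroup line.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$\n?`).
		ReplaceAllString(conf, "")
	// 4. Re-add conmon_cgroup = "pod" right after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "${0}\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}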
	I0617 12:24:53.842701  176137 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 12:24:53.855256  176137 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 12:24:53.855357  176137 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 12:24:53.872578  176137 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 12:24:53.883557  176137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:24:54.025669  176137 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0617 12:24:54.181478  176137 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 12:24:54.181555  176137 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 12:24:54.187435  176137 start.go:562] Will wait 60s for crictl version
	I0617 12:24:54.187538  176137 ssh_runner.go:195] Run: which crictl
	I0617 12:24:54.192845  176137 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 12:24:54.244131  176137 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 12:24:54.244225  176137 ssh_runner.go:195] Run: crio --version
	I0617 12:24:54.281450  176137 ssh_runner.go:195] Run: crio --version
	I0617 12:24:54.316632  176137 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0617 12:24:54.318243  176137 main.go:141] libmachine: (custom-flannel-253383) Calling .GetIP
	I0617 12:24:54.321467  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:54.322025  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:f7:ee", ip: ""} in network mk-custom-flannel-253383: {Iface:virbr3 ExpiryTime:2024-06-17 13:24:44 +0000 UTC Type:0 Mac:52:54:00:d6:f7:ee Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:custom-flannel-253383 Clientid:01:52:54:00:d6:f7:ee}
	I0617 12:24:54.322061  176137 main.go:141] libmachine: (custom-flannel-253383) DBG | domain custom-flannel-253383 has defined IP address 192.168.61.105 and MAC address 52:54:00:d6:f7:ee in network mk-custom-flannel-253383
	I0617 12:24:54.322328  176137 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0617 12:24:54.327039  176137 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:24:54.341191  176137 kubeadm.go:877] updating cluster {Name:custom-flannel-253383 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.30.1 ClusterName:custom-flannel-253383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.61.105 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 12:24:54.341313  176137 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 12:24:54.341357  176137 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:24:54.388563  176137 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0617 12:24:54.388645  176137 ssh_runner.go:195] Run: which lz4
	I0617 12:24:54.393564  176137 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0617 12:24:54.398579  176137 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0617 12:24:54.398620  176137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
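	The two ssh_runner lines above stat /preloaded.tar.lz4 on the VM, find it missing, and fall back to copying the 394537501-byte preload tarball. A minimal sketch of that existence check (hypothetical helper name, direct filesystem access rather than SSH):

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// preloadPresent reports whether the preload tarball already exists at path.
func preloadPresent(path string) (bool, error) {
	_, err := os.Stat(path)
	if err == nil {
		return true, nil
	}
	if errors.Is(err, fs.ErrNotExist) {
		return false, nil
	}
	return false, err
}

func main() {
	ok, err := preloadPresent("/preloaded.tar.lz4")
	if err != nil {
		fmt.Println("stat error:", err)
		return
	}
	if !ok {
		fmt.Println("preload missing; the logged flow copies the cached tarball to the VM")
	}
}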
	I0617 12:24:50.650973  175636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:24:51.151181  175636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:24:51.650711  175636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:24:52.150830  175636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:24:52.650922  175636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:24:53.150252  175636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:24:53.650383  175636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:24:54.150836  175636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:24:54.650742  175636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:24:55.150243  175636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	
	
	==> CRI-O <==
	Jun 17 12:24:57 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:24:57.824475160Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718627097824436485,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0a20ec01-c648-4480-846e-05b42fc47b74 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:24:57 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:24:57.825877870Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=effc3e82-684e-4b7a-80d7-be455a85f784 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:24:57 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:24:57.825984570Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=effc3e82-684e-4b7a-80d7-be455a85f784 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:24:57 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:24:57.826374811Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dfd335e5e905ceb4a84958b887f1f87c485fa58b5c2410528667b4584437377d,PodSandboxId:4ec5e51e33e3cedc6aefb9c3ee5d6391210baed29b05fc84acc385a62d4ad61f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1718625759784458039,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30d10d01-c1de-435f-902e-5e90c86ab3f2,},Annotations:map[string]string{io.kubernetes.container.hash: 6d8e5583,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323,PodSandboxId:8a1b06c7196ef98910e1fd1444bc7cfe4dc58d4a078332029874c3879df5045b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718625758616393839,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mnw24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e6c4ff3-f0dc-43da-abd8-baaed7dca40c,},Annotations:map[string]string{io.kubernetes.container.hash: a431f7a2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195,PodSandboxId:e961aee4065637077a8ce4e59e5627f0c51458c18464ffd5d60b15f46a7b95aa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718625743613384802,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 92b20aec-29c2-4256-86be-7f58f66585dd,},Annotations:map[string]string{io.kubernetes.container.hash: 5155bfb6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da,PodSandboxId:0a7b4f113755c29d14cf67df0a593ef5c83b50b92ed3fa26a93a3fe94024b925,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718625742911563496,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jn5kp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6935148-7
ee8-4655-8327-9f1ee4c933de,},Annotations:map[string]string{io.kubernetes.container.hash: ebf4cc3f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc,PodSandboxId:e961aee4065637077a8ce4e59e5627f0c51458c18464ffd5d60b15f46a7b95aa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718625742905247251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92b20aec-29c2-4256-86be
-7f58f66585dd,},Annotations:map[string]string{io.kubernetes.container.hash: 5155bfb6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b,PodSandboxId:a3cec7d877da2c73dcc9614f367bf8f5a3f7d0a1d73be53db582ceb404b2d8d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718625739247357549,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-991309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c21bea80d5b9dcade35da
7b7545e61c7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685,PodSandboxId:8753042e3940c09ad40880a7040acf9ff18b04ea81902bfc864efb03cc277e8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718625739221340680,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-991309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: aef2b9c920bd8998bd8f0b63747752dd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b,PodSandboxId:37d220d03ff98c32e8150017bc155aae33fc8cc0a551400e287958d263b84f70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718625739177321487,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-991309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 85585af84dc6cf60f33336c0a1c5a11f,},Annotations:map[string]string{io.kubernetes.container.hash: 90b31d22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862,PodSandboxId:1835d921c3e05def4cdc131d68f2cbdd34f27229844719a02a01ea4f9bd5cbee,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718625739152392152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-991309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e049b2796061913144bf89c1454f5
f9,},Annotations:map[string]string{io.kubernetes.container.hash: fafef5fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=effc3e82-684e-4b7a-80d7-be455a85f784 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:24:57 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:24:57.884208934Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8368d4be-9d22-441c-96ad-899f8182a449 name=/runtime.v1.RuntimeService/Version
	Jun 17 12:24:57 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:24:57.884305262Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8368d4be-9d22-441c-96ad-899f8182a449 name=/runtime.v1.RuntimeService/Version
	Jun 17 12:24:57 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:24:57.886495327Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fd296015-ed46-4538-9213-0ac10ead6837 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:24:57 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:24:57.887444957Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718627097887405672,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd296015-ed46-4538-9213-0ac10ead6837 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:24:57 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:24:57.887986959Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8a52355e-ae51-4686-bf91-41ef1b095f09 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:24:57 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:24:57.888060649Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8a52355e-ae51-4686-bf91-41ef1b095f09 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:24:57 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:24:57.888459511Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dfd335e5e905ceb4a84958b887f1f87c485fa58b5c2410528667b4584437377d,PodSandboxId:4ec5e51e33e3cedc6aefb9c3ee5d6391210baed29b05fc84acc385a62d4ad61f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1718625759784458039,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30d10d01-c1de-435f-902e-5e90c86ab3f2,},Annotations:map[string]string{io.kubernetes.container.hash: 6d8e5583,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323,PodSandboxId:8a1b06c7196ef98910e1fd1444bc7cfe4dc58d4a078332029874c3879df5045b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718625758616393839,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mnw24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e6c4ff3-f0dc-43da-abd8-baaed7dca40c,},Annotations:map[string]string{io.kubernetes.container.hash: a431f7a2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195,PodSandboxId:e961aee4065637077a8ce4e59e5627f0c51458c18464ffd5d60b15f46a7b95aa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718625743613384802,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 92b20aec-29c2-4256-86be-7f58f66585dd,},Annotations:map[string]string{io.kubernetes.container.hash: 5155bfb6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da,PodSandboxId:0a7b4f113755c29d14cf67df0a593ef5c83b50b92ed3fa26a93a3fe94024b925,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718625742911563496,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jn5kp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6935148-7
ee8-4655-8327-9f1ee4c933de,},Annotations:map[string]string{io.kubernetes.container.hash: ebf4cc3f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc,PodSandboxId:e961aee4065637077a8ce4e59e5627f0c51458c18464ffd5d60b15f46a7b95aa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718625742905247251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92b20aec-29c2-4256-86be
-7f58f66585dd,},Annotations:map[string]string{io.kubernetes.container.hash: 5155bfb6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b,PodSandboxId:a3cec7d877da2c73dcc9614f367bf8f5a3f7d0a1d73be53db582ceb404b2d8d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718625739247357549,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-991309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c21bea80d5b9dcade35da
7b7545e61c7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685,PodSandboxId:8753042e3940c09ad40880a7040acf9ff18b04ea81902bfc864efb03cc277e8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718625739221340680,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-991309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: aef2b9c920bd8998bd8f0b63747752dd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b,PodSandboxId:37d220d03ff98c32e8150017bc155aae33fc8cc0a551400e287958d263b84f70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718625739177321487,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-991309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 85585af84dc6cf60f33336c0a1c5a11f,},Annotations:map[string]string{io.kubernetes.container.hash: 90b31d22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862,PodSandboxId:1835d921c3e05def4cdc131d68f2cbdd34f27229844719a02a01ea4f9bd5cbee,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718625739152392152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-991309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e049b2796061913144bf89c1454f5
f9,},Annotations:map[string]string{io.kubernetes.container.hash: fafef5fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8a52355e-ae51-4686-bf91-41ef1b095f09 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:24:57 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:24:57.935617599Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=05db75dc-4425-4bcd-ac9e-16dc06e43f1c name=/runtime.v1.RuntimeService/Version
	Jun 17 12:24:57 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:24:57.935704815Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=05db75dc-4425-4bcd-ac9e-16dc06e43f1c name=/runtime.v1.RuntimeService/Version
	Jun 17 12:24:57 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:24:57.937490474Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=21c9a4c5-d8ee-493c-93f0-6448a9bfe2b8 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:24:57 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:24:57.938680123Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718627097938065351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=21c9a4c5-d8ee-493c-93f0-6448a9bfe2b8 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:24:57 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:24:57.939491346Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=57190887-c494-4b0a-b265-49f95e4e107a name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:24:57 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:24:57.939565414Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=57190887-c494-4b0a-b265-49f95e4e107a name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:24:57 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:24:57.939846870Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dfd335e5e905ceb4a84958b887f1f87c485fa58b5c2410528667b4584437377d,PodSandboxId:4ec5e51e33e3cedc6aefb9c3ee5d6391210baed29b05fc84acc385a62d4ad61f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1718625759784458039,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30d10d01-c1de-435f-902e-5e90c86ab3f2,},Annotations:map[string]string{io.kubernetes.container.hash: 6d8e5583,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323,PodSandboxId:8a1b06c7196ef98910e1fd1444bc7cfe4dc58d4a078332029874c3879df5045b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718625758616393839,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mnw24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e6c4ff3-f0dc-43da-abd8-baaed7dca40c,},Annotations:map[string]string{io.kubernetes.container.hash: a431f7a2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195,PodSandboxId:e961aee4065637077a8ce4e59e5627f0c51458c18464ffd5d60b15f46a7b95aa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718625743613384802,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 92b20aec-29c2-4256-86be-7f58f66585dd,},Annotations:map[string]string{io.kubernetes.container.hash: 5155bfb6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da,PodSandboxId:0a7b4f113755c29d14cf67df0a593ef5c83b50b92ed3fa26a93a3fe94024b925,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718625742911563496,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jn5kp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6935148-7
ee8-4655-8327-9f1ee4c933de,},Annotations:map[string]string{io.kubernetes.container.hash: ebf4cc3f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc,PodSandboxId:e961aee4065637077a8ce4e59e5627f0c51458c18464ffd5d60b15f46a7b95aa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718625742905247251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92b20aec-29c2-4256-86be
-7f58f66585dd,},Annotations:map[string]string{io.kubernetes.container.hash: 5155bfb6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b,PodSandboxId:a3cec7d877da2c73dcc9614f367bf8f5a3f7d0a1d73be53db582ceb404b2d8d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718625739247357549,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-991309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c21bea80d5b9dcade35da
7b7545e61c7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685,PodSandboxId:8753042e3940c09ad40880a7040acf9ff18b04ea81902bfc864efb03cc277e8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718625739221340680,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-991309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: aef2b9c920bd8998bd8f0b63747752dd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b,PodSandboxId:37d220d03ff98c32e8150017bc155aae33fc8cc0a551400e287958d263b84f70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718625739177321487,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-991309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 85585af84dc6cf60f33336c0a1c5a11f,},Annotations:map[string]string{io.kubernetes.container.hash: 90b31d22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862,PodSandboxId:1835d921c3e05def4cdc131d68f2cbdd34f27229844719a02a01ea4f9bd5cbee,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718625739152392152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-991309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e049b2796061913144bf89c1454f5
f9,},Annotations:map[string]string{io.kubernetes.container.hash: fafef5fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=57190887-c494-4b0a-b265-49f95e4e107a name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:24:57 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:24:57.983683158Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a16ff2b0-6905-4357-84d1-eaf6b1d830b8 name=/runtime.v1.RuntimeService/Version
	Jun 17 12:24:57 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:24:57.983855051Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a16ff2b0-6905-4357-84d1-eaf6b1d830b8 name=/runtime.v1.RuntimeService/Version
	Jun 17 12:24:57 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:24:57.986026596Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3604897a-ab1c-4dd2-bc85-e3cbc1229b1b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:24:57 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:24:57.990907094Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718627097990867663,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3604897a-ab1c-4dd2-bc85-e3cbc1229b1b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:24:57 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:24:57.991866095Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3faac663-b511-4bf6-99e3-b5e4cb06f42f name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:24:57 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:24:57.991927882Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3faac663-b511-4bf6-99e3-b5e4cb06f42f name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:24:57 default-k8s-diff-port-991309 crio[730]: time="2024-06-17 12:24:57.992192797Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dfd335e5e905ceb4a84958b887f1f87c485fa58b5c2410528667b4584437377d,PodSandboxId:4ec5e51e33e3cedc6aefb9c3ee5d6391210baed29b05fc84acc385a62d4ad61f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1718625759784458039,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30d10d01-c1de-435f-902e-5e90c86ab3f2,},Annotations:map[string]string{io.kubernetes.container.hash: 6d8e5583,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323,PodSandboxId:8a1b06c7196ef98910e1fd1444bc7cfe4dc58d4a078332029874c3879df5045b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718625758616393839,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mnw24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e6c4ff3-f0dc-43da-abd8-baaed7dca40c,},Annotations:map[string]string{io.kubernetes.container.hash: a431f7a2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195,PodSandboxId:e961aee4065637077a8ce4e59e5627f0c51458c18464ffd5d60b15f46a7b95aa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718625743613384802,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 92b20aec-29c2-4256-86be-7f58f66585dd,},Annotations:map[string]string{io.kubernetes.container.hash: 5155bfb6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da,PodSandboxId:0a7b4f113755c29d14cf67df0a593ef5c83b50b92ed3fa26a93a3fe94024b925,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718625742911563496,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jn5kp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6935148-7
ee8-4655-8327-9f1ee4c933de,},Annotations:map[string]string{io.kubernetes.container.hash: ebf4cc3f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc,PodSandboxId:e961aee4065637077a8ce4e59e5627f0c51458c18464ffd5d60b15f46a7b95aa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718625742905247251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92b20aec-29c2-4256-86be
-7f58f66585dd,},Annotations:map[string]string{io.kubernetes.container.hash: 5155bfb6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b,PodSandboxId:a3cec7d877da2c73dcc9614f367bf8f5a3f7d0a1d73be53db582ceb404b2d8d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718625739247357549,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-991309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c21bea80d5b9dcade35da
7b7545e61c7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685,PodSandboxId:8753042e3940c09ad40880a7040acf9ff18b04ea81902bfc864efb03cc277e8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718625739221340680,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-991309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: aef2b9c920bd8998bd8f0b63747752dd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b,PodSandboxId:37d220d03ff98c32e8150017bc155aae33fc8cc0a551400e287958d263b84f70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718625739177321487,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-991309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 85585af84dc6cf60f33336c0a1c5a11f,},Annotations:map[string]string{io.kubernetes.container.hash: 90b31d22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862,PodSandboxId:1835d921c3e05def4cdc131d68f2cbdd34f27229844719a02a01ea4f9bd5cbee,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718625739152392152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-991309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e049b2796061913144bf89c1454f5
f9,},Annotations:map[string]string{io.kubernetes.container.hash: fafef5fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3faac663-b511-4bf6-99e3-b5e4cb06f42f name=/runtime.v1.RuntimeService/ListContainers
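
	The Version, ImageFsInfo and ListContainers requests repeating above are the kubelet's periodic CRI polling of CRI-O over unix:///var/run/crio/crio.sock (the socket named in the node's cri-socket annotation below); the empty ContainerFilter is why every response carries the same nine containers. As a rough manual cross-check (a sketch only: it assumes crictl is installed in the minikube guest and already pointed at the CRI-O socket, which is minikube's default for this runtime):

	  out/minikube-linux-amd64 -p default-k8s-diff-port-991309 ssh "sudo crictl version"
	  out/minikube-linux-amd64 -p default-k8s-diff-port-991309 ssh "sudo crictl imagefsinfo"
	  out/minikube-linux-amd64 -p default-k8s-diff-port-991309 ssh "sudo crictl ps -a"    # unfiltered, like the empty ContainerFilter above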
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dfd335e5e905c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   22 minutes ago      Running             busybox                   1                   4ec5e51e33e3c       busybox
	26b8e036867db       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      22 minutes ago      Running             coredns                   1                   8a1b06c7196ef       coredns-7db6d8ff4d-mnw24
	adb0f4294c844       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Running             storage-provisioner       3                   e961aee406563       storage-provisioner
	63dba5e023e5a       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      22 minutes ago      Running             kube-proxy                1                   0a7b4f113755c       kube-proxy-jn5kp
	e1a38df1bc100       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Exited              storage-provisioner       2                   e961aee406563       storage-provisioner
	2fc9bd2867376       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      22 minutes ago      Running             kube-scheduler            1                   a3cec7d877da2       kube-scheduler-default-k8s-diff-port-991309
	36ad2102b1a13       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      22 minutes ago      Running             kube-controller-manager   1                   8753042e3940c       kube-controller-manager-default-k8s-diff-port-991309
	5b11bf1d6c96b       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      22 minutes ago      Running             kube-apiserver            1                   37d220d03ff98       kube-apiserver-default-k8s-diff-port-991309
	8bfeb1ae74a6b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      22 minutes ago      Running             etcd                      1                   1835d921c3e05       etcd-default-k8s-diff-port-991309
	
	
	==> coredns [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58862 - 59428 "HINFO IN 3347879279322849397.3803459997896774640. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01204344s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-991309
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-991309
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6
	                    minikube.k8s.io/name=default-k8s-diff-port-991309
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_17T11_56_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jun 2024 11:56:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-991309
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jun 2024 12:24:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jun 2024 12:23:17 +0000   Mon, 17 Jun 2024 11:56:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jun 2024 12:23:17 +0000   Mon, 17 Jun 2024 11:56:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jun 2024 12:23:17 +0000   Mon, 17 Jun 2024 11:56:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jun 2024 12:23:17 +0000   Mon, 17 Jun 2024 12:02:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.125
	  Hostname:    default-k8s-diff-port-991309
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d6f992fe6fb94accb2f426c01d5d0f61
	  System UUID:                d6f992fe-6fb9-4acc-b2f4-26c01d5d0f61
	  Boot ID:                    3ae063a7-6d55-4793-bbc5-8b4530650f29
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace    Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------    ----                                                   ------------  ----------  ---------------  -------------  ---
	  default      busybox                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system  coredns-7db6d8ff4d-mnw24                               100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system  etcd-default-k8s-diff-port-991309                      100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system  kube-apiserver-default-k8s-diff-port-991309            250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system  kube-controller-manager-default-k8s-diff-port-991309   200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system  kube-proxy-jn5kp                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system  kube-scheduler-default-k8s-diff-port-991309            100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system  metrics-server-569cc877fc-n2svp                        100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system  storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 22m                kube-proxy       
	  Normal  NodeHasSufficientPID     28m                kubelet          Node default-k8s-diff-port-991309 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node default-k8s-diff-port-991309 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node default-k8s-diff-port-991309 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeReady                28m                kubelet          Node default-k8s-diff-port-991309 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node default-k8s-diff-port-991309 event: Registered Node default-k8s-diff-port-991309 in Controller
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-991309 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-991309 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node default-k8s-diff-port-991309 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22m                node-controller  Node default-k8s-diff-port-991309 event: Registered Node default-k8s-diff-port-991309 in Controller
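
	The 850m of CPU requests under Allocated resources is just the sum of the per-pod requests listed above (100m coredns + 100m etcd + 250m kube-apiserver + 200m kube-controller-manager + 100m kube-scheduler + 100m metrics-server), i.e. roughly 42% of the node's 2000m allocatable; memory works out the same way (70Mi + 100Mi + 200Mi = 370Mi ≈ 17%). The same view can be reproduced against this profile, assuming the kubeconfig context minikube creates is named after the profile:

	  kubectl --context default-k8s-diff-port-991309 describe node default-k8s-diff-port-991309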
	
	
	==> dmesg <==
	[Jun17 12:01] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051927] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044623] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.893865] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Jun17 12:02] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.630374] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.239911] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.069500] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061337] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.218912] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +0.143389] systemd-fstab-generator[683]: Ignoring "noauto" option for root device
	[  +0.293601] systemd-fstab-generator[712]: Ignoring "noauto" option for root device
	[  +4.533291] systemd-fstab-generator[812]: Ignoring "noauto" option for root device
	[  +0.055286] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.205283] systemd-fstab-generator[935]: Ignoring "noauto" option for root device
	[  +4.635719] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.397905] systemd-fstab-generator[1611]: Ignoring "noauto" option for root device
	[  +5.303183] kauditd_printk_skb: 67 callbacks suppressed
	[  +5.361973] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862] <==
	{"level":"info","ts":"2024-06-17T12:12:20.726824Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":824}
	{"level":"info","ts":"2024-06-17T12:12:20.736976Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":824,"took":"9.765675ms","hash":2281465834,"current-db-size-bytes":2592768,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2592768,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-06-17T12:12:20.737031Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2281465834,"revision":824,"compact-revision":-1}
	{"level":"info","ts":"2024-06-17T12:17:20.743145Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1067}
	{"level":"info","ts":"2024-06-17T12:17:20.748269Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1067,"took":"4.355934ms","hash":3752907721,"current-db-size-bytes":2592768,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1531904,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-06-17T12:17:20.748427Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3752907721,"revision":1067,"compact-revision":824}
	{"level":"info","ts":"2024-06-17T12:22:20.749782Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1311}
	{"level":"info","ts":"2024-06-17T12:22:20.753714Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1311,"took":"3.635314ms","hash":3782386684,"current-db-size-bytes":2592768,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1519616,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-06-17T12:22:20.753764Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3782386684,"revision":1311,"compact-revision":1067}
	{"level":"info","ts":"2024-06-17T12:22:37.504226Z","caller":"traceutil/trace.go:171","msg":"trace[1276686032] transaction","detail":"{read_only:false; response_revision:1569; number_of_response:1; }","duration":"128.451097ms","start":"2024-06-17T12:22:37.375736Z","end":"2024-06-17T12:22:37.504188Z","steps":["trace[1276686032] 'process raft request'  (duration: 128.062013ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-17T12:22:37.751581Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.029938ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/\" range_end:\"/registry/roles0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-06-17T12:22:37.752393Z","caller":"traceutil/trace.go:171","msg":"trace[697157299] range","detail":"{range_begin:/registry/roles/; range_end:/registry/roles0; response_count:0; response_revision:1569; }","duration":"108.938445ms","start":"2024-06-17T12:22:37.643389Z","end":"2024-06-17T12:22:37.752328Z","steps":["trace[697157299] 'count revisions from in-memory index tree'  (duration: 107.92802ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T12:22:38.779269Z","caller":"traceutil/trace.go:171","msg":"trace[307838757] linearizableReadLoop","detail":"{readStateIndex:1855; appliedIndex:1854; }","duration":"114.742417ms","start":"2024-06-17T12:22:38.66451Z","end":"2024-06-17T12:22:38.779252Z","steps":["trace[307838757] 'read index received'  (duration: 50.754528ms)","trace[307838757] 'applied index is now lower than readState.Index'  (duration: 63.986902ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-17T12:22:38.779374Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.838609ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-17T12:22:38.779444Z","caller":"traceutil/trace.go:171","msg":"trace[1915558780] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1569; }","duration":"114.927882ms","start":"2024-06-17T12:22:38.664504Z","end":"2024-06-17T12:22:38.779432Z","steps":["trace[1915558780] 'agreement among raft nodes before linearized reading'  (duration: 114.803292ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T12:23:06.211267Z","caller":"traceutil/trace.go:171","msg":"trace[2094938597] transaction","detail":"{read_only:false; response_revision:1591; number_of_response:1; }","duration":"244.438509ms","start":"2024-06-17T12:23:05.966811Z","end":"2024-06-17T12:23:06.21125Z","steps":["trace[2094938597] 'process raft request'  (duration: 244.285216ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T12:23:06.252944Z","caller":"traceutil/trace.go:171","msg":"trace[1117216658] transaction","detail":"{read_only:false; response_revision:1592; number_of_response:1; }","duration":"116.014296ms","start":"2024-06-17T12:23:06.13691Z","end":"2024-06-17T12:23:06.252924Z","steps":["trace[1117216658] 'process raft request'  (duration: 115.870628ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T12:23:31.983429Z","caller":"traceutil/trace.go:171","msg":"trace[818950902] transaction","detail":"{read_only:false; response_revision:1612; number_of_response:1; }","duration":"145.290915ms","start":"2024-06-17T12:23:31.838095Z","end":"2024-06-17T12:23:31.983386Z","steps":["trace[818950902] 'process raft request'  (duration: 145.107907ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-17T12:23:33.856653Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.887386ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8080460649645429162 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.125\" mod_revision:1605 > success:<request_put:<key:\"/registry/masterleases/192.168.50.125\" value_size:68 lease:8080460649645429160 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.125\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-17T12:23:33.856764Z","caller":"traceutil/trace.go:171","msg":"trace[1416689248] linearizableReadLoop","detail":"{readStateIndex:1910; appliedIndex:1909; }","duration":"187.374923ms","start":"2024-06-17T12:23:33.669376Z","end":"2024-06-17T12:23:33.856751Z","steps":["trace[1416689248] 'read index received'  (duration: 14.143801ms)","trace[1416689248] 'applied index is now lower than readState.Index'  (duration: 173.230148ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-17T12:23:33.856862Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.475954ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-17T12:23:33.856883Z","caller":"traceutil/trace.go:171","msg":"trace[1726316123] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1613; }","duration":"187.531522ms","start":"2024-06-17T12:23:33.669344Z","end":"2024-06-17T12:23:33.856876Z","steps":["trace[1726316123] 'agreement among raft nodes before linearized reading'  (duration: 187.446358ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-17T12:23:33.856947Z","caller":"traceutil/trace.go:171","msg":"trace[1988423294] transaction","detail":"{read_only:false; response_revision:1613; number_of_response:1; }","duration":"233.836674ms","start":"2024-06-17T12:23:33.623094Z","end":"2024-06-17T12:23:33.85693Z","steps":["trace[1988423294] 'process raft request'  (duration: 60.479588ms)","trace[1988423294] 'compare'  (duration: 171.764193ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-17T12:23:53.920764Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"250.342222ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-17T12:23:53.920921Z","caller":"traceutil/trace.go:171","msg":"trace[11447914] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1630; }","duration":"250.581449ms","start":"2024-06-17T12:23:53.670306Z","end":"2024-06-17T12:23:53.920887Z","steps":["trace[11447914] 'range keys from in-memory index tree'  (duration: 250.296179ms)"],"step_count":1}
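
	The "apply request took too long" warnings above are etcd flagging requests that exceeded its 100ms expectation; on a 2-vCPU KVM guest that usually points to disk or CPU contention rather than a functional fault. If it needs a closer look, etcdctl inside the etcd pod can report member status and DB size (a sketch: the cert paths are minikube's usual layout under /var/lib/minikube/certs/etcd and may differ):

	  kubectl --context default-k8s-diff-port-991309 -n kube-system exec etcd-default-k8s-diff-port-991309 -- \
	    etcdctl --endpoints=https://127.0.0.1:2379 \
	      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	      --cert=/var/lib/minikube/certs/etcd/server.crt \
	      --key=/var/lib/minikube/certs/etcd/server.key \
	      endpoint status -w table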
	
	
	==> kernel <==
	 12:24:58 up 23 min,  0 users,  load average: 0.09, 0.17, 0.17
	Linux default-k8s-diff-port-991309 5.10.207 #1 SMP Tue Jun 11 00:16:05 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b] <==
	I0617 12:18:23.283311       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0617 12:20:23.281768       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:20:23.282150       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0617 12:20:23.282209       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0617 12:20:23.284030       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:20:23.284080       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0617 12:20:23.284092       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0617 12:22:22.285661       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:22:22.286011       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0617 12:22:23.286432       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:22:23.286509       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0617 12:22:23.286516       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0617 12:22:23.286572       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:22:23.286610       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0617 12:22:23.287819       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0617 12:23:23.286909       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:23:23.287081       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0617 12:23:23.287163       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0617 12:23:23.288976       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:23:23.289063       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0617 12:23:23.289091       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
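
	Every 503 above is the apiserver's OpenAPI aggregation controller failing to reach the v1beta1.metrics.k8s.io APIService, i.e. the metrics-server Service has no ready endpoints, which would explain the metrics-server and addon checks timing out for this profile. The usual first checks (a sketch; the k8s-app=metrics-server label is the upstream manifest's and may differ in the minikube addon):

	  kubectl --context default-k8s-diff-port-991309 get apiservice v1beta1.metrics.k8s.io
	  kubectl --context default-k8s-diff-port-991309 -n kube-system get pods -l k8s-app=metrics-server
	  kubectl --context default-k8s-diff-port-991309 get --raw /apis/metrics.k8s.io/v1beta1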
	
	
	==> kube-controller-manager [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685] <==
	I0617 12:19:06.106290       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:19:35.554236       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:19:36.113857       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:20:05.562267       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:20:06.124568       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:20:35.568240       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:20:36.132524       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:21:05.573632       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:21:06.140327       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:21:35.579085       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:21:36.148394       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:22:05.584739       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:22:06.156510       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:22:35.591103       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:22:36.169601       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:23:05.596677       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:23:06.178609       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:23:35.602204       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:23:36.193272       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0617 12:23:45.540811       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="238.376µs"
	I0617 12:23:58.552638       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="1.007395ms"
	E0617 12:24:05.609050       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:24:06.201536       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:24:35.620081       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:24:36.213741       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da] <==
	I0617 12:02:23.105039       1 server_linux.go:69] "Using iptables proxy"
	I0617 12:02:23.116277       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.125"]
	I0617 12:02:23.173374       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0617 12:02:23.173438       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0617 12:02:23.173463       1 server_linux.go:165] "Using iptables Proxier"
	I0617 12:02:23.182794       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0617 12:02:23.183020       1 server.go:872] "Version info" version="v1.30.1"
	I0617 12:02:23.183064       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0617 12:02:23.186035       1 config.go:192] "Starting service config controller"
	I0617 12:02:23.187924       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0617 12:02:23.187997       1 config.go:101] "Starting endpoint slice config controller"
	I0617 12:02:23.188027       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0617 12:02:23.189981       1 config.go:319] "Starting node config controller"
	I0617 12:02:23.190013       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0617 12:02:23.288176       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0617 12:02:23.290309       1 shared_informer.go:320] Caches are synced for node config
	I0617 12:02:23.291590       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b] <==
	W0617 12:02:22.248006       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0617 12:02:22.248017       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0617 12:02:22.248150       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0617 12:02:22.248182       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0617 12:02:22.248271       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0617 12:02:22.248299       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0617 12:02:22.248359       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0617 12:02:22.248368       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0617 12:02:22.248528       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0617 12:02:22.248557       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0617 12:02:22.248605       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0617 12:02:22.248631       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0617 12:02:22.248684       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0617 12:02:22.248729       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0617 12:02:22.248771       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0617 12:02:22.248780       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0617 12:02:22.248851       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0617 12:02:22.248879       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0617 12:02:22.248922       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0617 12:02:22.248931       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0617 12:02:22.249029       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0617 12:02:22.249056       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0617 12:02:22.249066       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0617 12:02:22.249073       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0617 12:02:23.640019       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 17 12:22:26 default-k8s-diff-port-991309 kubelet[942]: E0617 12:22:26.521065     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-n2svp" podUID="5b637d97-3183-4324-98cf-dd69a2968578"
	Jun 17 12:22:41 default-k8s-diff-port-991309 kubelet[942]: E0617 12:22:41.521185     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-n2svp" podUID="5b637d97-3183-4324-98cf-dd69a2968578"
	Jun 17 12:22:55 default-k8s-diff-port-991309 kubelet[942]: E0617 12:22:55.521066     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-n2svp" podUID="5b637d97-3183-4324-98cf-dd69a2968578"
	Jun 17 12:23:06 default-k8s-diff-port-991309 kubelet[942]: E0617 12:23:06.521857     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-n2svp" podUID="5b637d97-3183-4324-98cf-dd69a2968578"
	Jun 17 12:23:18 default-k8s-diff-port-991309 kubelet[942]: E0617 12:23:18.542544     942 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 17 12:23:18 default-k8s-diff-port-991309 kubelet[942]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 17 12:23:18 default-k8s-diff-port-991309 kubelet[942]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 17 12:23:18 default-k8s-diff-port-991309 kubelet[942]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 12:23:18 default-k8s-diff-port-991309 kubelet[942]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 17 12:23:19 default-k8s-diff-port-991309 kubelet[942]: E0617 12:23:19.520318     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-n2svp" podUID="5b637d97-3183-4324-98cf-dd69a2968578"
	Jun 17 12:23:31 default-k8s-diff-port-991309 kubelet[942]: E0617 12:23:31.545084     942 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jun 17 12:23:31 default-k8s-diff-port-991309 kubelet[942]: E0617 12:23:31.545598     942 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jun 17 12:23:31 default-k8s-diff-port-991309 kubelet[942]: E0617 12:23:31.546549     942 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c6q7v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-n2svp_kube-system(5b637d97-3183-4324-98cf-dd69a2968578): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jun 17 12:23:31 default-k8s-diff-port-991309 kubelet[942]: E0617 12:23:31.546783     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-n2svp" podUID="5b637d97-3183-4324-98cf-dd69a2968578"
	Jun 17 12:23:45 default-k8s-diff-port-991309 kubelet[942]: E0617 12:23:45.521105     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-n2svp" podUID="5b637d97-3183-4324-98cf-dd69a2968578"
	Jun 17 12:23:58 default-k8s-diff-port-991309 kubelet[942]: E0617 12:23:58.520862     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-n2svp" podUID="5b637d97-3183-4324-98cf-dd69a2968578"
	Jun 17 12:24:13 default-k8s-diff-port-991309 kubelet[942]: E0617 12:24:13.520488     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-n2svp" podUID="5b637d97-3183-4324-98cf-dd69a2968578"
	Jun 17 12:24:18 default-k8s-diff-port-991309 kubelet[942]: E0617 12:24:18.545021     942 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 17 12:24:18 default-k8s-diff-port-991309 kubelet[942]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 17 12:24:18 default-k8s-diff-port-991309 kubelet[942]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 17 12:24:18 default-k8s-diff-port-991309 kubelet[942]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 12:24:18 default-k8s-diff-port-991309 kubelet[942]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 17 12:24:25 default-k8s-diff-port-991309 kubelet[942]: E0617 12:24:25.519773     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-n2svp" podUID="5b637d97-3183-4324-98cf-dd69a2968578"
	Jun 17 12:24:38 default-k8s-diff-port-991309 kubelet[942]: E0617 12:24:38.520721     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-n2svp" podUID="5b637d97-3183-4324-98cf-dd69a2968578"
	Jun 17 12:24:50 default-k8s-diff-port-991309 kubelet[942]: E0617 12:24:50.520503     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-n2svp" podUID="5b637d97-3183-4324-98cf-dd69a2968578"
	
	
	==> storage-provisioner [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195] <==
	I0617 12:02:23.766191       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0617 12:02:23.788529       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0617 12:02:23.788663       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0617 12:02:41.197444       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0617 12:02:41.197882       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-991309_b740b017-e355-4f30-9689-7fc73a80f89b!
	I0617 12:02:41.198276       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c19e179d-dfa7-4034-ad1a-2148d11b33bc", APIVersion:"v1", ResourceVersion:"573", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-991309_b740b017-e355-4f30-9689-7fc73a80f89b became leader
	I0617 12:02:41.301257       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-991309_b740b017-e355-4f30-9689-7fc73a80f89b!
	
	
	==> storage-provisioner [e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc] <==
	I0617 12:02:23.045302       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0617 12:02:23.047867       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-991309 -n default-k8s-diff-port-991309
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-991309 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-n2svp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-991309 describe pod metrics-server-569cc877fc-n2svp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-991309 describe pod metrics-server-569cc877fc-n2svp: exit status 1 (83.080934ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-n2svp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-991309 describe pod metrics-server-569cc877fc-n2svp: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.81s)
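
For context on the non-running pod above: the kubelet log shows metrics-server-569cc877fc-n2svp stuck in ImagePullBackOff because its image is pinned to fake.domain/registry.k8s.io/echoserver:1.4 and the registry fake.domain does not resolve ("dial tcp: lookup fake.domain: no such host"). A minimal sketch of how one might verify this by hand, assuming the default-k8s-diff-port-991309 profile were still running and the addon uses its usual metrics-server Deployment with the k8s-app=metrics-server label in kube-system (both assumptions, not taken from this report):

# Show the image the metrics-server Deployment is configured to pull
# (expected to print the fake.domain override, which can never be resolved):
kubectl --context default-k8s-diff-port-991309 -n kube-system \
  get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'

# List the pod; with an unreachable registry it stays in ImagePullBackOff rather than Running:
kubectl --context default-k8s-diff-port-991309 -n kube-system \
  get pods -l k8s-app=metrics-server -o wide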

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (322.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-152830 -n no-preload-152830
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-06-17 12:22:30.369984877 +0000 UTC m=+5896.213543026
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-152830 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-152830 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.069µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-152830 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-152830 -n no-preload-152830
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-152830 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-152830 logs -n 25: (1.37757403s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p no-preload-152830             | no-preload-152830            | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC | 17 Jun 24 11:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-152830                                   | no-preload-152830            | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-136195            | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC | 17 Jun 24 11:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-136195                                  | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC | 17 Jun 24 11:55 UTC |
	| start   | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:55 UTC |
	| delete  | -p                                                     | disable-driver-mounts-960277 | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:55 UTC |
	|         | disable-driver-mounts-960277                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:56 UTC |
	|         | default-k8s-diff-port-991309                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-152830                  | no-preload-152830            | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-152830                                   | no-preload-152830            | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC | 17 Jun 24 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-136195                 | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-003661        | old-k8s-version-003661       | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-136195                                  | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC | 17 Jun 24 12:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-991309  | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:57 UTC | 17 Jun 24 11:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:57 UTC |                     |
	|         | default-k8s-diff-port-991309                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-003661                              | old-k8s-version-003661       | jenkins | v1.33.1 | 17 Jun 24 11:58 UTC | 17 Jun 24 11:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-003661             | old-k8s-version-003661       | jenkins | v1.33.1 | 17 Jun 24 11:58 UTC | 17 Jun 24 11:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-003661                              | old-k8s-version-003661       | jenkins | v1.33.1 | 17 Jun 24 11:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-991309       | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:59 UTC | 17 Jun 24 12:06 UTC |
	|         | default-k8s-diff-port-991309                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-003661                              | old-k8s-version-003661       | jenkins | v1.33.1 | 17 Jun 24 12:22 UTC | 17 Jun 24 12:22 UTC |
	| start   | -p newest-cni-335949 --memory=2200 --alsologtostderr   | newest-cni-335949            | jenkins | v1.33.1 | 17 Jun 24 12:22 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/17 12:22:03
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0617 12:22:03.461295  172544 out.go:291] Setting OutFile to fd 1 ...
	I0617 12:22:03.461556  172544 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 12:22:03.461565  172544 out.go:304] Setting ErrFile to fd 2...
	I0617 12:22:03.461569  172544 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 12:22:03.461752  172544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 12:22:03.462351  172544 out.go:298] Setting JSON to false
	I0617 12:22:03.463306  172544 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7470,"bootTime":1718619453,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0617 12:22:03.463369  172544 start.go:139] virtualization: kvm guest
	I0617 12:22:03.465788  172544 out.go:177] * [newest-cni-335949] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0617 12:22:03.467155  172544 out.go:177]   - MINIKUBE_LOCATION=19084
	I0617 12:22:03.467125  172544 notify.go:220] Checking for updates...
	I0617 12:22:03.468629  172544 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 12:22:03.469923  172544 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 12:22:03.471122  172544 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 12:22:03.472301  172544 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0617 12:22:03.473543  172544 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 12:22:03.475236  172544 config.go:182] Loaded profile config "default-k8s-diff-port-991309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:22:03.475332  172544 config.go:182] Loaded profile config "embed-certs-136195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:22:03.475437  172544 config.go:182] Loaded profile config "no-preload-152830": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:22:03.475566  172544 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 12:22:03.512358  172544 out.go:177] * Using the kvm2 driver based on user configuration
	I0617 12:22:03.513504  172544 start.go:297] selected driver: kvm2
	I0617 12:22:03.513522  172544 start.go:901] validating driver "kvm2" against <nil>
	I0617 12:22:03.513534  172544 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 12:22:03.514388  172544 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 12:22:03.514478  172544 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19084-112967/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0617 12:22:03.530048  172544 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0617 12:22:03.530103  172544 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0617 12:22:03.530134  172544 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0617 12:22:03.530351  172544 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0617 12:22:03.530417  172544 cni.go:84] Creating CNI manager for ""
	I0617 12:22:03.530429  172544 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:22:03.530441  172544 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0617 12:22:03.530487  172544 start.go:340] cluster config:
	{Name:newest-cni-335949 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-335949 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:22:03.530583  172544 iso.go:125] acquiring lock: {Name:mk4a199ad46ed9ee04de7b54caf7cc64218fe80c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 12:22:03.532542  172544 out.go:177] * Starting "newest-cni-335949" primary control-plane node in "newest-cni-335949" cluster
	I0617 12:22:03.534053  172544 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 12:22:03.534094  172544 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0617 12:22:03.534113  172544 cache.go:56] Caching tarball of preloaded images
	I0617 12:22:03.534197  172544 preload.go:173] Found /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0617 12:22:03.534208  172544 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0617 12:22:03.534304  172544 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/newest-cni-335949/config.json ...
	I0617 12:22:03.534321  172544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/newest-cni-335949/config.json: {Name:mk12571d69ccb49112f5326eaec1e7b2c5f37087 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:22:03.534442  172544 start.go:360] acquireMachinesLock for newest-cni-335949: {Name:mk519b8956d160a9d2b042f25b899a5ee0efa72e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 12:22:03.534468  172544 start.go:364] duration metric: took 14.032µs to acquireMachinesLock for "newest-cni-335949"
	I0617 12:22:03.534481  172544 start.go:93] Provisioning new machine with config: &{Name:newest-cni-335949 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-335949 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 12:22:03.534547  172544 start.go:125] createHost starting for "" (driver="kvm2")
	I0617 12:22:03.536923  172544 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0617 12:22:03.537075  172544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:22:03.537116  172544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:22:03.553740  172544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35535
	I0617 12:22:03.554260  172544 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:22:03.554850  172544 main.go:141] libmachine: Using API Version  1
	I0617 12:22:03.554873  172544 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:22:03.555201  172544 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:22:03.555409  172544 main.go:141] libmachine: (newest-cni-335949) Calling .GetMachineName
	I0617 12:22:03.555582  172544 main.go:141] libmachine: (newest-cni-335949) Calling .DriverName
	I0617 12:22:03.555755  172544 start.go:159] libmachine.API.Create for "newest-cni-335949" (driver="kvm2")
	I0617 12:22:03.555796  172544 client.go:168] LocalClient.Create starting
	I0617 12:22:03.555826  172544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem
	I0617 12:22:03.555865  172544 main.go:141] libmachine: Decoding PEM data...
	I0617 12:22:03.555881  172544 main.go:141] libmachine: Parsing certificate...
	I0617 12:22:03.555942  172544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem
	I0617 12:22:03.555959  172544 main.go:141] libmachine: Decoding PEM data...
	I0617 12:22:03.555970  172544 main.go:141] libmachine: Parsing certificate...
	I0617 12:22:03.555993  172544 main.go:141] libmachine: Running pre-create checks...
	I0617 12:22:03.556001  172544 main.go:141] libmachine: (newest-cni-335949) Calling .PreCreateCheck
	I0617 12:22:03.556341  172544 main.go:141] libmachine: (newest-cni-335949) Calling .GetConfigRaw
	I0617 12:22:03.556767  172544 main.go:141] libmachine: Creating machine...
	I0617 12:22:03.556781  172544 main.go:141] libmachine: (newest-cni-335949) Calling .Create
	I0617 12:22:03.556912  172544 main.go:141] libmachine: (newest-cni-335949) Creating KVM machine...
	I0617 12:22:03.558158  172544 main.go:141] libmachine: (newest-cni-335949) DBG | found existing default KVM network
	I0617 12:22:03.559756  172544 main.go:141] libmachine: (newest-cni-335949) DBG | I0617 12:22:03.559587  172567 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:1d:c3:3c} reservation:<nil>}
	I0617 12:22:03.561003  172544 main.go:141] libmachine: (newest-cni-335949) DBG | I0617 12:22:03.560912  172567 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:2d:d3:23} reservation:<nil>}
	I0617 12:22:03.562285  172544 main.go:141] libmachine: (newest-cni-335949) DBG | I0617 12:22:03.562209  172567 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00030cb20}
	I0617 12:22:03.562309  172544 main.go:141] libmachine: (newest-cni-335949) DBG | created network xml: 
	I0617 12:22:03.562317  172544 main.go:141] libmachine: (newest-cni-335949) DBG | <network>
	I0617 12:22:03.562322  172544 main.go:141] libmachine: (newest-cni-335949) DBG |   <name>mk-newest-cni-335949</name>
	I0617 12:22:03.562327  172544 main.go:141] libmachine: (newest-cni-335949) DBG |   <dns enable='no'/>
	I0617 12:22:03.562335  172544 main.go:141] libmachine: (newest-cni-335949) DBG |   
	I0617 12:22:03.562342  172544 main.go:141] libmachine: (newest-cni-335949) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0617 12:22:03.562350  172544 main.go:141] libmachine: (newest-cni-335949) DBG |     <dhcp>
	I0617 12:22:03.562360  172544 main.go:141] libmachine: (newest-cni-335949) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0617 12:22:03.562368  172544 main.go:141] libmachine: (newest-cni-335949) DBG |     </dhcp>
	I0617 12:22:03.562401  172544 main.go:141] libmachine: (newest-cni-335949) DBG |   </ip>
	I0617 12:22:03.562426  172544 main.go:141] libmachine: (newest-cni-335949) DBG |   
	I0617 12:22:03.562519  172544 main.go:141] libmachine: (newest-cni-335949) DBG | </network>
	I0617 12:22:03.562555  172544 main.go:141] libmachine: (newest-cni-335949) DBG | 
	I0617 12:22:03.568291  172544 main.go:141] libmachine: (newest-cni-335949) DBG | trying to create private KVM network mk-newest-cni-335949 192.168.61.0/24...
	I0617 12:22:03.640430  172544 main.go:141] libmachine: (newest-cni-335949) DBG | private KVM network mk-newest-cni-335949 192.168.61.0/24 created
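	The two "skipping subnet … that is taken" lines followed by "using free private subnet 192.168.61.0/24" show the driver probing candidate /24 ranges until it finds one that no host bridge already occupies. A minimal stdlib-only sketch of that idea (not minikube's actual network.go; the candidate list below is an assumption for illustration):

    // freeprivatesubnet.go - illustrative only: walk candidate /24s and return the
    // first one that no local interface (e.g. an existing virbrX bridge) sits in.
    package main

    import (
        "fmt"
        "net"
    )

    // Assumed candidates, mirroring the subnets probed in the log above.
    var candidates = []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"}

    func subnetInUse(subnet *net.IPNet) bool {
        ifaces, err := net.Interfaces()
        if err != nil {
            return false
        }
        for _, ifc := range ifaces {
            addrs, err := ifc.Addrs()
            if err != nil {
                continue
            }
            for _, a := range addrs {
                ip, _, err := net.ParseCIDR(a.String())
                if err != nil {
                    continue
                }
                if subnet.Contains(ip) {
                    return true // a host interface already lives in this range
                }
            }
        }
        return false
    }

    func main() {
        for _, c := range candidates {
            _, ipnet, err := net.ParseCIDR(c)
            if err != nil {
                continue
            }
            if !subnetInUse(ipnet) {
                fmt.Println("using free private subnet", ipnet)
                return
            }
            fmt.Println("skipping subnet that is taken:", ipnet)
        }
    }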
	I0617 12:22:03.640463  172544 main.go:141] libmachine: (newest-cni-335949) DBG | I0617 12:22:03.640397  172567 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 12:22:03.640476  172544 main.go:141] libmachine: (newest-cni-335949) Setting up store path in /home/jenkins/minikube-integration/19084-112967/.minikube/machines/newest-cni-335949 ...
	I0617 12:22:03.640494  172544 main.go:141] libmachine: (newest-cni-335949) Building disk image from file:///home/jenkins/minikube-integration/19084-112967/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso
	I0617 12:22:03.640539  172544 main.go:141] libmachine: (newest-cni-335949) Downloading /home/jenkins/minikube-integration/19084-112967/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19084-112967/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso...
	I0617 12:22:03.916452  172544 main.go:141] libmachine: (newest-cni-335949) DBG | I0617 12:22:03.916324  172567 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/newest-cni-335949/id_rsa...
	I0617 12:22:03.996014  172544 main.go:141] libmachine: (newest-cni-335949) DBG | I0617 12:22:03.995892  172567 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/newest-cni-335949/newest-cni-335949.rawdisk...
	I0617 12:22:03.996049  172544 main.go:141] libmachine: (newest-cni-335949) DBG | Writing magic tar header
	I0617 12:22:03.996064  172544 main.go:141] libmachine: (newest-cni-335949) DBG | Writing SSH key tar header
	I0617 12:22:03.996092  172544 main.go:141] libmachine: (newest-cni-335949) DBG | I0617 12:22:03.996037  172567 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19084-112967/.minikube/machines/newest-cni-335949 ...
	I0617 12:22:03.996148  172544 main.go:141] libmachine: (newest-cni-335949) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/newest-cni-335949
	I0617 12:22:03.996190  172544 main.go:141] libmachine: (newest-cni-335949) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube/machines
	I0617 12:22:03.996220  172544 main.go:141] libmachine: (newest-cni-335949) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 12:22:03.996231  172544 main.go:141] libmachine: (newest-cni-335949) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube/machines/newest-cni-335949 (perms=drwx------)
	I0617 12:22:03.996248  172544 main.go:141] libmachine: (newest-cni-335949) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube/machines (perms=drwxr-xr-x)
	I0617 12:22:03.996268  172544 main.go:141] libmachine: (newest-cni-335949) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967/.minikube (perms=drwxr-xr-x)
	I0617 12:22:03.996281  172544 main.go:141] libmachine: (newest-cni-335949) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19084-112967
	I0617 12:22:03.996291  172544 main.go:141] libmachine: (newest-cni-335949) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0617 12:22:03.996301  172544 main.go:141] libmachine: (newest-cni-335949) DBG | Checking permissions on dir: /home/jenkins
	I0617 12:22:03.996307  172544 main.go:141] libmachine: (newest-cni-335949) DBG | Checking permissions on dir: /home
	I0617 12:22:03.996315  172544 main.go:141] libmachine: (newest-cni-335949) DBG | Skipping /home - not owner
	I0617 12:22:03.996330  172544 main.go:141] libmachine: (newest-cni-335949) Setting executable bit set on /home/jenkins/minikube-integration/19084-112967 (perms=drwxrwxr-x)
	I0617 12:22:03.996344  172544 main.go:141] libmachine: (newest-cni-335949) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0617 12:22:03.996365  172544 main.go:141] libmachine: (newest-cni-335949) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0617 12:22:03.996382  172544 main.go:141] libmachine: (newest-cni-335949) Creating domain...
	I0617 12:22:03.997569  172544 main.go:141] libmachine: (newest-cni-335949) define libvirt domain using xml: 
	I0617 12:22:03.997593  172544 main.go:141] libmachine: (newest-cni-335949) <domain type='kvm'>
	I0617 12:22:03.997603  172544 main.go:141] libmachine: (newest-cni-335949)   <name>newest-cni-335949</name>
	I0617 12:22:03.997619  172544 main.go:141] libmachine: (newest-cni-335949)   <memory unit='MiB'>2200</memory>
	I0617 12:22:03.997652  172544 main.go:141] libmachine: (newest-cni-335949)   <vcpu>2</vcpu>
	I0617 12:22:03.997679  172544 main.go:141] libmachine: (newest-cni-335949)   <features>
	I0617 12:22:03.997692  172544 main.go:141] libmachine: (newest-cni-335949)     <acpi/>
	I0617 12:22:03.997708  172544 main.go:141] libmachine: (newest-cni-335949)     <apic/>
	I0617 12:22:03.997719  172544 main.go:141] libmachine: (newest-cni-335949)     <pae/>
	I0617 12:22:03.997728  172544 main.go:141] libmachine: (newest-cni-335949)     
	I0617 12:22:03.997737  172544 main.go:141] libmachine: (newest-cni-335949)   </features>
	I0617 12:22:03.997744  172544 main.go:141] libmachine: (newest-cni-335949)   <cpu mode='host-passthrough'>
	I0617 12:22:03.997756  172544 main.go:141] libmachine: (newest-cni-335949)   
	I0617 12:22:03.997766  172544 main.go:141] libmachine: (newest-cni-335949)   </cpu>
	I0617 12:22:03.997774  172544 main.go:141] libmachine: (newest-cni-335949)   <os>
	I0617 12:22:03.997785  172544 main.go:141] libmachine: (newest-cni-335949)     <type>hvm</type>
	I0617 12:22:03.997797  172544 main.go:141] libmachine: (newest-cni-335949)     <boot dev='cdrom'/>
	I0617 12:22:03.997803  172544 main.go:141] libmachine: (newest-cni-335949)     <boot dev='hd'/>
	I0617 12:22:03.997814  172544 main.go:141] libmachine: (newest-cni-335949)     <bootmenu enable='no'/>
	I0617 12:22:03.997822  172544 main.go:141] libmachine: (newest-cni-335949)   </os>
	I0617 12:22:03.997834  172544 main.go:141] libmachine: (newest-cni-335949)   <devices>
	I0617 12:22:03.997845  172544 main.go:141] libmachine: (newest-cni-335949)     <disk type='file' device='cdrom'>
	I0617 12:22:03.997861  172544 main.go:141] libmachine: (newest-cni-335949)       <source file='/home/jenkins/minikube-integration/19084-112967/.minikube/machines/newest-cni-335949/boot2docker.iso'/>
	I0617 12:22:03.997884  172544 main.go:141] libmachine: (newest-cni-335949)       <target dev='hdc' bus='scsi'/>
	I0617 12:22:03.997906  172544 main.go:141] libmachine: (newest-cni-335949)       <readonly/>
	I0617 12:22:03.997919  172544 main.go:141] libmachine: (newest-cni-335949)     </disk>
	I0617 12:22:03.997930  172544 main.go:141] libmachine: (newest-cni-335949)     <disk type='file' device='disk'>
	I0617 12:22:03.997943  172544 main.go:141] libmachine: (newest-cni-335949)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0617 12:22:03.997964  172544 main.go:141] libmachine: (newest-cni-335949)       <source file='/home/jenkins/minikube-integration/19084-112967/.minikube/machines/newest-cni-335949/newest-cni-335949.rawdisk'/>
	I0617 12:22:03.997982  172544 main.go:141] libmachine: (newest-cni-335949)       <target dev='hda' bus='virtio'/>
	I0617 12:22:03.998019  172544 main.go:141] libmachine: (newest-cni-335949)     </disk>
	I0617 12:22:03.998033  172544 main.go:141] libmachine: (newest-cni-335949)     <interface type='network'>
	I0617 12:22:03.998048  172544 main.go:141] libmachine: (newest-cni-335949)       <source network='mk-newest-cni-335949'/>
	I0617 12:22:03.998057  172544 main.go:141] libmachine: (newest-cni-335949)       <model type='virtio'/>
	I0617 12:22:03.998071  172544 main.go:141] libmachine: (newest-cni-335949)     </interface>
	I0617 12:22:03.998084  172544 main.go:141] libmachine: (newest-cni-335949)     <interface type='network'>
	I0617 12:22:03.998098  172544 main.go:141] libmachine: (newest-cni-335949)       <source network='default'/>
	I0617 12:22:03.998109  172544 main.go:141] libmachine: (newest-cni-335949)       <model type='virtio'/>
	I0617 12:22:03.998122  172544 main.go:141] libmachine: (newest-cni-335949)     </interface>
	I0617 12:22:03.998137  172544 main.go:141] libmachine: (newest-cni-335949)     <serial type='pty'>
	I0617 12:22:03.998148  172544 main.go:141] libmachine: (newest-cni-335949)       <target port='0'/>
	I0617 12:22:03.998157  172544 main.go:141] libmachine: (newest-cni-335949)     </serial>
	I0617 12:22:03.998168  172544 main.go:141] libmachine: (newest-cni-335949)     <console type='pty'>
	I0617 12:22:03.998180  172544 main.go:141] libmachine: (newest-cni-335949)       <target type='serial' port='0'/>
	I0617 12:22:03.998193  172544 main.go:141] libmachine: (newest-cni-335949)     </console>
	I0617 12:22:03.998204  172544 main.go:141] libmachine: (newest-cni-335949)     <rng model='virtio'>
	I0617 12:22:03.998217  172544 main.go:141] libmachine: (newest-cni-335949)       <backend model='random'>/dev/random</backend>
	I0617 12:22:03.998238  172544 main.go:141] libmachine: (newest-cni-335949)     </rng>
	I0617 12:22:03.998251  172544 main.go:141] libmachine: (newest-cni-335949)     
	I0617 12:22:03.998260  172544 main.go:141] libmachine: (newest-cni-335949)     
	I0617 12:22:03.998268  172544 main.go:141] libmachine: (newest-cni-335949)   </devices>
	I0617 12:22:03.998277  172544 main.go:141] libmachine: (newest-cni-335949) </domain>
	I0617 12:22:03.998287  172544 main.go:141] libmachine: (newest-cni-335949) 
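	The network and domain XML printed above are handed to libvirt to define and boot the VM. A short illustrative sketch of that hand-off, assuming the libvirt.org/go/libvirt bindings and a local qemu:///system daemon (the kvm2 driver's real code differs in detail):

    // definedomain.go - a sketch, not the kvm2 driver itself, of passing the logged
    // network and domain XML to libvirt and starting both.
    package main

    import (
        "log"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatalf("connect: %v", err)
        }
        defer conn.Close()

        networkXML := `<network>...</network>`         // the mk-newest-cni-335949 XML from the log
        domainXML := `<domain type='kvm'>...</domain>` // the domain XML printed above

        // Define and start the private network the VM attaches to.
        network, err := conn.NetworkDefineXML(networkXML)
        if err != nil {
            log.Fatalf("define network: %v", err)
        }
        defer network.Free()
        if err := network.Create(); err != nil {
            log.Fatalf("start network: %v", err)
        }

        // Define the persistent domain, then boot it ("Creating domain..." in the log).
        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            log.Fatalf("define domain: %v", err)
        }
        defer dom.Free()
        if err := dom.Create(); err != nil {
            log.Fatalf("start domain: %v", err)
        }
    }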
	I0617 12:22:04.005913  172544 main.go:141] libmachine: (newest-cni-335949) DBG | domain newest-cni-335949 has defined MAC address 52:54:00:69:ec:f6 in network default
	I0617 12:22:04.006555  172544 main.go:141] libmachine: (newest-cni-335949) Ensuring networks are active...
	I0617 12:22:04.006581  172544 main.go:141] libmachine: (newest-cni-335949) DBG | domain newest-cni-335949 has defined MAC address 52:54:00:d4:e4:74 in network mk-newest-cni-335949
	I0617 12:22:04.007273  172544 main.go:141] libmachine: (newest-cni-335949) Ensuring network default is active
	I0617 12:22:04.007636  172544 main.go:141] libmachine: (newest-cni-335949) Ensuring network mk-newest-cni-335949 is active
	I0617 12:22:04.008230  172544 main.go:141] libmachine: (newest-cni-335949) Getting domain xml...
	I0617 12:22:04.009143  172544 main.go:141] libmachine: (newest-cni-335949) Creating domain...
	I0617 12:22:05.246183  172544 main.go:141] libmachine: (newest-cni-335949) Waiting to get IP...
	I0617 12:22:05.246940  172544 main.go:141] libmachine: (newest-cni-335949) DBG | domain newest-cni-335949 has defined MAC address 52:54:00:d4:e4:74 in network mk-newest-cni-335949
	I0617 12:22:05.247480  172544 main.go:141] libmachine: (newest-cni-335949) DBG | unable to find current IP address of domain newest-cni-335949 in network mk-newest-cni-335949
	I0617 12:22:05.247565  172544 main.go:141] libmachine: (newest-cni-335949) DBG | I0617 12:22:05.247482  172567 retry.go:31] will retry after 252.383399ms: waiting for machine to come up
	I0617 12:22:05.502178  172544 main.go:141] libmachine: (newest-cni-335949) DBG | domain newest-cni-335949 has defined MAC address 52:54:00:d4:e4:74 in network mk-newest-cni-335949
	I0617 12:22:05.502721  172544 main.go:141] libmachine: (newest-cni-335949) DBG | unable to find current IP address of domain newest-cni-335949 in network mk-newest-cni-335949
	I0617 12:22:05.502754  172544 main.go:141] libmachine: (newest-cni-335949) DBG | I0617 12:22:05.502659  172567 retry.go:31] will retry after 285.934266ms: waiting for machine to come up
	I0617 12:22:05.790320  172544 main.go:141] libmachine: (newest-cni-335949) DBG | domain newest-cni-335949 has defined MAC address 52:54:00:d4:e4:74 in network mk-newest-cni-335949
	I0617 12:22:05.790809  172544 main.go:141] libmachine: (newest-cni-335949) DBG | unable to find current IP address of domain newest-cni-335949 in network mk-newest-cni-335949
	I0617 12:22:05.790866  172544 main.go:141] libmachine: (newest-cni-335949) DBG | I0617 12:22:05.790766  172567 retry.go:31] will retry after 421.103827ms: waiting for machine to come up
	I0617 12:22:06.213532  172544 main.go:141] libmachine: (newest-cni-335949) DBG | domain newest-cni-335949 has defined MAC address 52:54:00:d4:e4:74 in network mk-newest-cni-335949
	I0617 12:22:06.214066  172544 main.go:141] libmachine: (newest-cni-335949) DBG | unable to find current IP address of domain newest-cni-335949 in network mk-newest-cni-335949
	I0617 12:22:06.214095  172544 main.go:141] libmachine: (newest-cni-335949) DBG | I0617 12:22:06.214006  172567 retry.go:31] will retry after 560.26622ms: waiting for machine to come up
	I0617 12:22:06.775863  172544 main.go:141] libmachine: (newest-cni-335949) DBG | domain newest-cni-335949 has defined MAC address 52:54:00:d4:e4:74 in network mk-newest-cni-335949
	I0617 12:22:06.776390  172544 main.go:141] libmachine: (newest-cni-335949) DBG | unable to find current IP address of domain newest-cni-335949 in network mk-newest-cni-335949
	I0617 12:22:06.776442  172544 main.go:141] libmachine: (newest-cni-335949) DBG | I0617 12:22:06.776319  172567 retry.go:31] will retry after 549.583291ms: waiting for machine to come up
	I0617 12:22:07.326949  172544 main.go:141] libmachine: (newest-cni-335949) DBG | domain newest-cni-335949 has defined MAC address 52:54:00:d4:e4:74 in network mk-newest-cni-335949
	I0617 12:22:07.327519  172544 main.go:141] libmachine: (newest-cni-335949) DBG | unable to find current IP address of domain newest-cni-335949 in network mk-newest-cni-335949
	I0617 12:22:07.327550  172544 main.go:141] libmachine: (newest-cni-335949) DBG | I0617 12:22:07.327476  172567 retry.go:31] will retry after 799.392354ms: waiting for machine to come up
	I0617 12:22:08.128497  172544 main.go:141] libmachine: (newest-cni-335949) DBG | domain newest-cni-335949 has defined MAC address 52:54:00:d4:e4:74 in network mk-newest-cni-335949
	I0617 12:22:08.129013  172544 main.go:141] libmachine: (newest-cni-335949) DBG | unable to find current IP address of domain newest-cni-335949 in network mk-newest-cni-335949
	I0617 12:22:08.129041  172544 main.go:141] libmachine: (newest-cni-335949) DBG | I0617 12:22:08.128958  172567 retry.go:31] will retry after 855.32874ms: waiting for machine to come up
	I0617 12:22:08.985459  172544 main.go:141] libmachine: (newest-cni-335949) DBG | domain newest-cni-335949 has defined MAC address 52:54:00:d4:e4:74 in network mk-newest-cni-335949
	I0617 12:22:08.985989  172544 main.go:141] libmachine: (newest-cni-335949) DBG | unable to find current IP address of domain newest-cni-335949 in network mk-newest-cni-335949
	I0617 12:22:08.986021  172544 main.go:141] libmachine: (newest-cni-335949) DBG | I0617 12:22:08.985933  172567 retry.go:31] will retry after 1.278714961s: waiting for machine to come up
	I0617 12:22:10.266154  172544 main.go:141] libmachine: (newest-cni-335949) DBG | domain newest-cni-335949 has defined MAC address 52:54:00:d4:e4:74 in network mk-newest-cni-335949
	I0617 12:22:10.266655  172544 main.go:141] libmachine: (newest-cni-335949) DBG | unable to find current IP address of domain newest-cni-335949 in network mk-newest-cni-335949
	I0617 12:22:10.266685  172544 main.go:141] libmachine: (newest-cni-335949) DBG | I0617 12:22:10.266597  172567 retry.go:31] will retry after 1.366744227s: waiting for machine to come up
	I0617 12:22:11.635160  172544 main.go:141] libmachine: (newest-cni-335949) DBG | domain newest-cni-335949 has defined MAC address 52:54:00:d4:e4:74 in network mk-newest-cni-335949
	I0617 12:22:11.635569  172544 main.go:141] libmachine: (newest-cni-335949) DBG | unable to find current IP address of domain newest-cni-335949 in network mk-newest-cni-335949
	I0617 12:22:11.635604  172544 main.go:141] libmachine: (newest-cni-335949) DBG | I0617 12:22:11.635531  172567 retry.go:31] will retry after 2.249242135s: waiting for machine to come up
	I0617 12:22:13.887096  172544 main.go:141] libmachine: (newest-cni-335949) DBG | domain newest-cni-335949 has defined MAC address 52:54:00:d4:e4:74 in network mk-newest-cni-335949
	I0617 12:22:13.887546  172544 main.go:141] libmachine: (newest-cni-335949) DBG | unable to find current IP address of domain newest-cni-335949 in network mk-newest-cni-335949
	I0617 12:22:13.887580  172544 main.go:141] libmachine: (newest-cni-335949) DBG | I0617 12:22:13.887480  172567 retry.go:31] will retry after 2.51260428s: waiting for machine to come up
	I0617 12:22:16.401261  172544 main.go:141] libmachine: (newest-cni-335949) DBG | domain newest-cni-335949 has defined MAC address 52:54:00:d4:e4:74 in network mk-newest-cni-335949
	I0617 12:22:16.401787  172544 main.go:141] libmachine: (newest-cni-335949) DBG | unable to find current IP address of domain newest-cni-335949 in network mk-newest-cni-335949
	I0617 12:22:16.401814  172544 main.go:141] libmachine: (newest-cni-335949) DBG | I0617 12:22:16.401724  172567 retry.go:31] will retry after 3.476169979s: waiting for machine to come up
	I0617 12:22:19.879539  172544 main.go:141] libmachine: (newest-cni-335949) DBG | domain newest-cni-335949 has defined MAC address 52:54:00:d4:e4:74 in network mk-newest-cni-335949
	I0617 12:22:19.880104  172544 main.go:141] libmachine: (newest-cni-335949) DBG | unable to find current IP address of domain newest-cni-335949 in network mk-newest-cni-335949
	I0617 12:22:19.880140  172544 main.go:141] libmachine: (newest-cni-335949) DBG | I0617 12:22:19.880049  172567 retry.go:31] will retry after 3.273502601s: waiting for machine to come up
	I0617 12:22:23.156826  172544 main.go:141] libmachine: (newest-cni-335949) DBG | domain newest-cni-335949 has defined MAC address 52:54:00:d4:e4:74 in network mk-newest-cni-335949
	I0617 12:22:23.157291  172544 main.go:141] libmachine: (newest-cni-335949) DBG | unable to find current IP address of domain newest-cni-335949 in network mk-newest-cni-335949
	I0617 12:22:23.157325  172544 main.go:141] libmachine: (newest-cni-335949) DBG | I0617 12:22:23.157247  172567 retry.go:31] will retry after 5.104990205s: waiting for machine to come up
	I0617 12:22:28.264571  172544 main.go:141] libmachine: (newest-cni-335949) DBG | domain newest-cni-335949 has defined MAC address 52:54:00:d4:e4:74 in network mk-newest-cni-335949
	I0617 12:22:28.265148  172544 main.go:141] libmachine: (newest-cni-335949) Found IP for machine: 192.168.61.120
	I0617 12:22:28.265174  172544 main.go:141] libmachine: (newest-cni-335949) Reserving static IP address...
	I0617 12:22:28.265204  172544 main.go:141] libmachine: (newest-cni-335949) DBG | domain newest-cni-335949 has current primary IP address 192.168.61.120 and MAC address 52:54:00:d4:e4:74 in network mk-newest-cni-335949
	I0617 12:22:28.265504  172544 main.go:141] libmachine: (newest-cni-335949) DBG | unable to find host DHCP lease matching {name: "newest-cni-335949", mac: "52:54:00:d4:e4:74", ip: "192.168.61.120"} in network mk-newest-cni-335949
	I0617 12:22:28.342195  172544 main.go:141] libmachine: (newest-cni-335949) DBG | Getting to WaitForSSH function...
	I0617 12:22:28.342226  172544 main.go:141] libmachine: (newest-cni-335949) Reserved static IP address: 192.168.61.120
	I0617 12:22:28.342239  172544 main.go:141] libmachine: (newest-cni-335949) Waiting for SSH to be available...
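	The repeated "will retry after …: waiting for machine to come up" lines above are a poll for the VM's DHCP lease with a growing delay. A minimal sketch of that retry pattern, stdlib only, where lookupLeaseIP is a hypothetical stand-in for querying the libvirt network's leases:

    // waitforip.go - illustrative retry-with-growing-delay loop; lookupLeaseIP is a
    // hypothetical placeholder, not a real libmachine or libvirt call.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // Hypothetical: the real driver asks libvirt for the network's DHCP leases.
    func lookupLeaseIP(mac string) (string, bool) {
        return "", false
    }

    func waitForIP(mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, ok := lookupLeaseIP(mac); ok {
                return ip, nil
            }
            // Grow the delay and add jitter, roughly matching the log's 252ms -> 5.1s progression.
            wait := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            if delay < 4*time.Second {
                delay *= 2
            }
        }
        return "", errors.New("timed out waiting for an IP address")
    }

    func main() {
        if ip, err := waitForIP("52:54:00:d4:e4:74", 6*time.Minute); err == nil {
            fmt.Println("Found IP for machine:", ip)
        }
    }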
	I0617 12:22:28.344877  172544 main.go:141] libmachine: (newest-cni-335949) DBG | domain newest-cni-335949 has defined MAC address 52:54:00:d4:e4:74 in network mk-newest-cni-335949
	I0617 12:22:28.345428  172544 main.go:141] libmachine: (newest-cni-335949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:e4:74", ip: ""} in network mk-newest-cni-335949: {Iface:virbr3 ExpiryTime:2024-06-17 13:22:18 +0000 UTC Type:0 Mac:52:54:00:d4:e4:74 Iaid: IPaddr:192.168.61.120 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d4:e4:74}
	I0617 12:22:28.345467  172544 main.go:141] libmachine: (newest-cni-335949) DBG | domain newest-cni-335949 has defined IP address 192.168.61.120 and MAC address 52:54:00:d4:e4:74 in network mk-newest-cni-335949
	I0617 12:22:28.345579  172544 main.go:141] libmachine: (newest-cni-335949) DBG | Using SSH client type: external
	I0617 12:22:28.345652  172544 main.go:141] libmachine: (newest-cni-335949) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/newest-cni-335949/id_rsa (-rw-------)
	I0617 12:22:28.345693  172544 main.go:141] libmachine: (newest-cni-335949) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.120 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/newest-cni-335949/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 12:22:28.345709  172544 main.go:141] libmachine: (newest-cni-335949) DBG | About to run SSH command:
	I0617 12:22:28.345722  172544 main.go:141] libmachine: (newest-cni-335949) DBG | exit 0
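	The "Using SSH client type: external" block shows the exact flag list passed to /usr/bin/ssh for the reachability probe ("exit 0"). A sketch of assembling that invocation with os/exec; the flags mirror the log, the surrounding program is illustrative:

    // sshexec.go - rebuilds the external SSH probe logged above with os/exec.
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        key := "/home/jenkins/minikube-integration/19084-112967/.minikube/machines/newest-cni-335949/id_rsa"
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "ControlMaster=no",
            "-o", "ControlPath=none",
            "-o", "LogLevel=quiet",
            "-o", "PasswordAuthentication=no",
            "-o", "ServerAliveInterval=60",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "docker@192.168.61.120",
            "-o", "IdentitiesOnly=yes",
            "-i", key,
            "-p", "22",
            "exit 0", // the probe command from "About to run SSH command: exit 0"
        }
        out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
        if err != nil {
            log.Fatalf("ssh probe failed: %v\n%s", err, out)
        }
        log.Printf("SSH is available")
    }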
	
	
	==> CRI-O <==
	Jun 17 12:22:31 no-preload-152830 crio[737]: time="2024-06-17 12:22:31.049099094Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718626951049010735,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aabebf29-6ab1-4756-996f-0800397d1a9c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:22:31 no-preload-152830 crio[737]: time="2024-06-17 12:22:31.050805478Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ea3d696-e4aa-4178-b9ec-b8e437723d9c name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:22:31 no-preload-152830 crio[737]: time="2024-06-17 12:22:31.050880705Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ea3d696-e4aa-4178-b9ec-b8e437723d9c name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:22:31 no-preload-152830 crio[737]: time="2024-06-17 12:22:31.051140541Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2eb6e871393848fac8fd1b5630ae133dfbd8784261c95335263a2a2e9aeb31ed,PodSandboxId:0e2620a06aafec68ec8cbf6b343abffa70fb9085f375b6885f133113b68cec97,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718626082573824689,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gjt84,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 979c7339-3a4c-4bc8-8586-4d9da42339ae,},Annotations:map[string]string{io.kubernetes.container.hash: 17100608,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09220cf548ec25e3fa38ba0ac745612184325366eca32d4f17de4c2baa2094ee,PodSandboxId:5d8599f2018c313440ec042d0d1ddc63aa32ba7c86cfee77089ff66c713cd16e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718626082513972888,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vz7dg,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 53c5188e-bc44-4aed-a989-ef3e2379c27b,},Annotations:map[string]string{io.kubernetes.container.hash: ec0598bd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bded990e0ce1c6be7f1b1465276f4a8754154adf288c943ec48740d65f95d32,PodSandboxId:701951a57908ba8b3906dfde5778973e38e10282bc7f0d512c66261129dc2ee4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1718626081861791047,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6cc7cdc-43f4-40c4-a202-5674fcdcedd0,},Annotations:map[string]string{io.kubernetes.container.hash: 5fca2510,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d420ac4be70e18bcc188db3f69ef03797656c819429b0bc4fa68a2cf25cba17,PodSandboxId:c97bc08c0fbb373a7790949cab16859861f29efba5664702ae197c1fd54eeed3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1718626081807370163,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6c4hm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9830236-af96-437f-ad07-494b25f1a90e,},Annotations:map[string]string{io.kubernetes.container.hash: 15e64a6c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b82613491050410755d245f7ea0fd61cc70f9f438300c01e6a12f663ad434eee,PodSandboxId:75a277d3438b2bc2eda6aceeb51ff775534afbfc7373455d72f0a6c72d12a581,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718626061439689334,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-152830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ee54b88856008800f3a5a411b09cf4,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5833a84b69a3ed88b016a93eab2b3859871cb27f7331ae2296a7db6fd65e96f7,PodSandboxId:887af8887922b1719e31d347995aef73bdf1e04b1fbf76b5face2c4b630c5bed,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718626061432583941,Label
s:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-152830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7355a3a6d39f3ad62baaaf745eac603,},Annotations:map[string]string{io.kubernetes.container.hash: 99151ff0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf31b741f07971feda2bdee30e1b474c535befaa7310f7e6f31405b62526b2af,PodSandboxId:85d99e7a3ceeb18acc168cc80fcda42788569272ad9d1c7209bbad3774ec5260,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718626061367595982,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-152830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dde261e0cfb643c1c4d3ca5c2bc383c1,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de4bddebe0087f3f022dfeafa27d6746d6447687007d3334d4251031b8f6aabc,PodSandboxId:a4cd7f7d3051c333c9710faa8ea0b62dd4aff09c8e24d86f314398d5f79c06c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718626061280190306,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-152830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baa2096079eb9eb9c1a91e2265966e2,},Annotations:map[string]string{io.kubernetes.container.hash: 507fdc08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4ea3d696-e4aa-4178-b9ec-b8e437723d9c name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:22:31 no-preload-152830 crio[737]: time="2024-06-17 12:22:31.113897458Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7455c4b8-e27a-4798-ab45-dbc92bf39892 name=/runtime.v1.RuntimeService/Version
	Jun 17 12:22:31 no-preload-152830 crio[737]: time="2024-06-17 12:22:31.114004910Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7455c4b8-e27a-4798-ab45-dbc92bf39892 name=/runtime.v1.RuntimeService/Version
	Jun 17 12:22:31 no-preload-152830 crio[737]: time="2024-06-17 12:22:31.116117916Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dce64344-b70d-4033-acae-985d38524ffc name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:22:31 no-preload-152830 crio[737]: time="2024-06-17 12:22:31.116700535Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718626951116665135,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dce64344-b70d-4033-acae-985d38524ffc name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:22:31 no-preload-152830 crio[737]: time="2024-06-17 12:22:31.117379255Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9b98d502-400c-4511-b844-70413de3aa89 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:22:31 no-preload-152830 crio[737]: time="2024-06-17 12:22:31.117460248Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9b98d502-400c-4511-b844-70413de3aa89 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:22:31 no-preload-152830 crio[737]: time="2024-06-17 12:22:31.117927359Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2eb6e871393848fac8fd1b5630ae133dfbd8784261c95335263a2a2e9aeb31ed,PodSandboxId:0e2620a06aafec68ec8cbf6b343abffa70fb9085f375b6885f133113b68cec97,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718626082573824689,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gjt84,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 979c7339-3a4c-4bc8-8586-4d9da42339ae,},Annotations:map[string]string{io.kubernetes.container.hash: 17100608,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09220cf548ec25e3fa38ba0ac745612184325366eca32d4f17de4c2baa2094ee,PodSandboxId:5d8599f2018c313440ec042d0d1ddc63aa32ba7c86cfee77089ff66c713cd16e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718626082513972888,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vz7dg,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 53c5188e-bc44-4aed-a989-ef3e2379c27b,},Annotations:map[string]string{io.kubernetes.container.hash: ec0598bd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bded990e0ce1c6be7f1b1465276f4a8754154adf288c943ec48740d65f95d32,PodSandboxId:701951a57908ba8b3906dfde5778973e38e10282bc7f0d512c66261129dc2ee4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1718626081861791047,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6cc7cdc-43f4-40c4-a202-5674fcdcedd0,},Annotations:map[string]string{io.kubernetes.container.hash: 5fca2510,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d420ac4be70e18bcc188db3f69ef03797656c819429b0bc4fa68a2cf25cba17,PodSandboxId:c97bc08c0fbb373a7790949cab16859861f29efba5664702ae197c1fd54eeed3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1718626081807370163,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6c4hm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9830236-af96-437f-ad07-494b25f1a90e,},Annotations:map[string]string{io.kubernetes.container.hash: 15e64a6c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b82613491050410755d245f7ea0fd61cc70f9f438300c01e6a12f663ad434eee,PodSandboxId:75a277d3438b2bc2eda6aceeb51ff775534afbfc7373455d72f0a6c72d12a581,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718626061439689334,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-152830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ee54b88856008800f3a5a411b09cf4,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5833a84b69a3ed88b016a93eab2b3859871cb27f7331ae2296a7db6fd65e96f7,PodSandboxId:887af8887922b1719e31d347995aef73bdf1e04b1fbf76b5face2c4b630c5bed,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718626061432583941,Label
s:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-152830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7355a3a6d39f3ad62baaaf745eac603,},Annotations:map[string]string{io.kubernetes.container.hash: 99151ff0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf31b741f07971feda2bdee30e1b474c535befaa7310f7e6f31405b62526b2af,PodSandboxId:85d99e7a3ceeb18acc168cc80fcda42788569272ad9d1c7209bbad3774ec5260,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718626061367595982,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-152830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dde261e0cfb643c1c4d3ca5c2bc383c1,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de4bddebe0087f3f022dfeafa27d6746d6447687007d3334d4251031b8f6aabc,PodSandboxId:a4cd7f7d3051c333c9710faa8ea0b62dd4aff09c8e24d86f314398d5f79c06c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718626061280190306,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-152830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baa2096079eb9eb9c1a91e2265966e2,},Annotations:map[string]string{io.kubernetes.container.hash: 507fdc08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9b98d502-400c-4511-b844-70413de3aa89 name=/runtime.v1.RuntimeService/ListContainers
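	The CRI-O debug lines record the standard CRI polling cycle against crio[737]: Version, ImageFsInfo, and an unfiltered ListContainers. A sketch of issuing the same RPCs over the CRI-O socket, assuming google.golang.org/grpc and the k8s.io/cri-api/pkg/apis/runtime/v1 client (the socket path is the conventional /var/run/crio/crio.sock):

    // criprobe.go - illustrative CRI client, not kubelet code, issuing the RPCs seen
    // in the CRI-O debug log: Version, ImageFsInfo, ListContainers (no filters).
    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatalf("dial crio: %v", err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        img := runtimeapi.NewImageServiceClient(conn)

        // /runtime.v1.RuntimeService/Version
        ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
        if err != nil {
            log.Fatalf("Version: %v", err)
        }
        log.Printf("runtime %s %s (CRI %s)", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

        // /runtime.v1.ImageService/ImageFsInfo
        if fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{}); err == nil {
            for _, u := range fs.ImageFilesystems {
                log.Printf("image fs %s used %d bytes", u.FsId.Mountpoint, u.UsedBytes.Value)
            }
        }

        // /runtime.v1.RuntimeService/ListContainers with no filters, like the full list in the log
        if resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{}); err == nil {
            for _, c := range resp.Containers {
                log.Printf("%s %s %s", c.Id, c.Metadata.Name, c.State)
            }
        }
    }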
	Jun 17 12:22:31 no-preload-152830 crio[737]: time="2024-06-17 12:22:31.163675565Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=90e337cd-d41c-4bc5-9336-ce4a4429c4c0 name=/runtime.v1.RuntimeService/Version
	Jun 17 12:22:31 no-preload-152830 crio[737]: time="2024-06-17 12:22:31.163772289Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=90e337cd-d41c-4bc5-9336-ce4a4429c4c0 name=/runtime.v1.RuntimeService/Version
	Jun 17 12:22:31 no-preload-152830 crio[737]: time="2024-06-17 12:22:31.166221053Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0629278a-7b5b-4207-9156-ff759b02e2d1 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:22:31 no-preload-152830 crio[737]: time="2024-06-17 12:22:31.166690550Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718626951166667638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0629278a-7b5b-4207-9156-ff759b02e2d1 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:22:31 no-preload-152830 crio[737]: time="2024-06-17 12:22:31.170453127Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=71a366a2-8698-4ac9-88ab-2c00673e3f04 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:22:31 no-preload-152830 crio[737]: time="2024-06-17 12:22:31.170600139Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=71a366a2-8698-4ac9-88ab-2c00673e3f04 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:22:31 no-preload-152830 crio[737]: time="2024-06-17 12:22:31.171159277Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2eb6e871393848fac8fd1b5630ae133dfbd8784261c95335263a2a2e9aeb31ed,PodSandboxId:0e2620a06aafec68ec8cbf6b343abffa70fb9085f375b6885f133113b68cec97,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718626082573824689,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gjt84,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 979c7339-3a4c-4bc8-8586-4d9da42339ae,},Annotations:map[string]string{io.kubernetes.container.hash: 17100608,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09220cf548ec25e3fa38ba0ac745612184325366eca32d4f17de4c2baa2094ee,PodSandboxId:5d8599f2018c313440ec042d0d1ddc63aa32ba7c86cfee77089ff66c713cd16e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718626082513972888,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vz7dg,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 53c5188e-bc44-4aed-a989-ef3e2379c27b,},Annotations:map[string]string{io.kubernetes.container.hash: ec0598bd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bded990e0ce1c6be7f1b1465276f4a8754154adf288c943ec48740d65f95d32,PodSandboxId:701951a57908ba8b3906dfde5778973e38e10282bc7f0d512c66261129dc2ee4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1718626081861791047,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6cc7cdc-43f4-40c4-a202-5674fcdcedd0,},Annotations:map[string]string{io.kubernetes.container.hash: 5fca2510,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d420ac4be70e18bcc188db3f69ef03797656c819429b0bc4fa68a2cf25cba17,PodSandboxId:c97bc08c0fbb373a7790949cab16859861f29efba5664702ae197c1fd54eeed3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1718626081807370163,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6c4hm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9830236-af96-437f-ad07-494b25f1a90e,},Annotations:map[string]string{io.kubernetes.container.hash: 15e64a6c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b82613491050410755d245f7ea0fd61cc70f9f438300c01e6a12f663ad434eee,PodSandboxId:75a277d3438b2bc2eda6aceeb51ff775534afbfc7373455d72f0a6c72d12a581,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718626061439689334,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-152830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ee54b88856008800f3a5a411b09cf4,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5833a84b69a3ed88b016a93eab2b3859871cb27f7331ae2296a7db6fd65e96f7,PodSandboxId:887af8887922b1719e31d347995aef73bdf1e04b1fbf76b5face2c4b630c5bed,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718626061432583941,Label
s:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-152830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7355a3a6d39f3ad62baaaf745eac603,},Annotations:map[string]string{io.kubernetes.container.hash: 99151ff0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf31b741f07971feda2bdee30e1b474c535befaa7310f7e6f31405b62526b2af,PodSandboxId:85d99e7a3ceeb18acc168cc80fcda42788569272ad9d1c7209bbad3774ec5260,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718626061367595982,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-152830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dde261e0cfb643c1c4d3ca5c2bc383c1,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de4bddebe0087f3f022dfeafa27d6746d6447687007d3334d4251031b8f6aabc,PodSandboxId:a4cd7f7d3051c333c9710faa8ea0b62dd4aff09c8e24d86f314398d5f79c06c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718626061280190306,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-152830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baa2096079eb9eb9c1a91e2265966e2,},Annotations:map[string]string{io.kubernetes.container.hash: 507fdc08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=71a366a2-8698-4ac9-88ab-2c00673e3f04 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:22:31 no-preload-152830 crio[737]: time="2024-06-17 12:22:31.209885763Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f389ac73-9c87-4b95-923c-c02be9a1929b name=/runtime.v1.RuntimeService/Version
	Jun 17 12:22:31 no-preload-152830 crio[737]: time="2024-06-17 12:22:31.209966458Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f389ac73-9c87-4b95-923c-c02be9a1929b name=/runtime.v1.RuntimeService/Version
	Jun 17 12:22:31 no-preload-152830 crio[737]: time="2024-06-17 12:22:31.211361522Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d7cba0fa-a270-4b65-a202-d9f57e0c906d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:22:31 no-preload-152830 crio[737]: time="2024-06-17 12:22:31.211945659Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718626951211921892,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d7cba0fa-a270-4b65-a202-d9f57e0c906d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:22:31 no-preload-152830 crio[737]: time="2024-06-17 12:22:31.212331021Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=46c9763a-9817-411b-a66d-cf196e101c97 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:22:31 no-preload-152830 crio[737]: time="2024-06-17 12:22:31.212398432Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=46c9763a-9817-411b-a66d-cf196e101c97 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:22:31 no-preload-152830 crio[737]: time="2024-06-17 12:22:31.212638163Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2eb6e871393848fac8fd1b5630ae133dfbd8784261c95335263a2a2e9aeb31ed,PodSandboxId:0e2620a06aafec68ec8cbf6b343abffa70fb9085f375b6885f133113b68cec97,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718626082573824689,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gjt84,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 979c7339-3a4c-4bc8-8586-4d9da42339ae,},Annotations:map[string]string{io.kubernetes.container.hash: 17100608,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09220cf548ec25e3fa38ba0ac745612184325366eca32d4f17de4c2baa2094ee,PodSandboxId:5d8599f2018c313440ec042d0d1ddc63aa32ba7c86cfee77089ff66c713cd16e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718626082513972888,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vz7dg,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 53c5188e-bc44-4aed-a989-ef3e2379c27b,},Annotations:map[string]string{io.kubernetes.container.hash: ec0598bd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bded990e0ce1c6be7f1b1465276f4a8754154adf288c943ec48740d65f95d32,PodSandboxId:701951a57908ba8b3906dfde5778973e38e10282bc7f0d512c66261129dc2ee4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1718626081861791047,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6cc7cdc-43f4-40c4-a202-5674fcdcedd0,},Annotations:map[string]string{io.kubernetes.container.hash: 5fca2510,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d420ac4be70e18bcc188db3f69ef03797656c819429b0bc4fa68a2cf25cba17,PodSandboxId:c97bc08c0fbb373a7790949cab16859861f29efba5664702ae197c1fd54eeed3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1718626081807370163,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6c4hm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9830236-af96-437f-ad07-494b25f1a90e,},Annotations:map[string]string{io.kubernetes.container.hash: 15e64a6c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b82613491050410755d245f7ea0fd61cc70f9f438300c01e6a12f663ad434eee,PodSandboxId:75a277d3438b2bc2eda6aceeb51ff775534afbfc7373455d72f0a6c72d12a581,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718626061439689334,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-152830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ee54b88856008800f3a5a411b09cf4,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5833a84b69a3ed88b016a93eab2b3859871cb27f7331ae2296a7db6fd65e96f7,PodSandboxId:887af8887922b1719e31d347995aef73bdf1e04b1fbf76b5face2c4b630c5bed,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718626061432583941,Label
s:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-152830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7355a3a6d39f3ad62baaaf745eac603,},Annotations:map[string]string{io.kubernetes.container.hash: 99151ff0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf31b741f07971feda2bdee30e1b474c535befaa7310f7e6f31405b62526b2af,PodSandboxId:85d99e7a3ceeb18acc168cc80fcda42788569272ad9d1c7209bbad3774ec5260,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718626061367595982,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-152830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dde261e0cfb643c1c4d3ca5c2bc383c1,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de4bddebe0087f3f022dfeafa27d6746d6447687007d3334d4251031b8f6aabc,PodSandboxId:a4cd7f7d3051c333c9710faa8ea0b62dd4aff09c8e24d86f314398d5f79c06c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718626061280190306,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-152830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baa2096079eb9eb9c1a91e2265966e2,},Annotations:map[string]string{io.kubernetes.container.hash: 507fdc08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=46c9763a-9817-411b-a66d-cf196e101c97 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2eb6e87139384       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   0e2620a06aafe       coredns-7db6d8ff4d-gjt84
	09220cf548ec2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   5d8599f2018c3       coredns-7db6d8ff4d-vz7dg
	9bded990e0ce1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   701951a57908b       storage-provisioner
	4d420ac4be70e       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   14 minutes ago      Running             kube-proxy                0                   c97bc08c0fbb3       kube-proxy-6c4hm
	b826134910504       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   14 minutes ago      Running             kube-controller-manager   2                   75a277d3438b2       kube-controller-manager-no-preload-152830
	5833a84b69a3e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   14 minutes ago      Running             etcd                      2                   887af8887922b       etcd-no-preload-152830
	bf31b741f0797       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   14 minutes ago      Running             kube-scheduler            2                   85d99e7a3ceeb       kube-scheduler-no-preload-152830
	de4bddebe0087       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   14 minutes ago      Running             kube-apiserver            2                   a4cd7f7d3051c       kube-apiserver-no-preload-152830
	
	
	==> coredns [09220cf548ec25e3fa38ba0ac745612184325366eca32d4f17de4c2baa2094ee] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [2eb6e871393848fac8fd1b5630ae133dfbd8784261c95335263a2a2e9aeb31ed] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-152830
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-152830
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6
	                    minikube.k8s.io/name=no-preload-152830
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_17T12_07_47_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jun 2024 12:07:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-152830
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jun 2024 12:22:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jun 2024 12:18:18 +0000   Mon, 17 Jun 2024 12:07:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jun 2024 12:18:18 +0000   Mon, 17 Jun 2024 12:07:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jun 2024 12:18:18 +0000   Mon, 17 Jun 2024 12:07:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jun 2024 12:18:18 +0000   Mon, 17 Jun 2024 12:07:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.173
	  Hostname:    no-preload-152830
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d73a39d81ccb4dd998aa6fdf08c4cb97
	  System UUID:                d73a39d8-1ccb-4dd9-98aa-6fdf08c4cb97
	  Boot ID:                    6c7e6252-8e65-4558-aaad-d3923e6b9c9c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-gjt84                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-vz7dg                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-152830                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-152830             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-152830    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-6c4hm                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-152830             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-569cc877fc-zllzk              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node no-preload-152830 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node no-preload-152830 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node no-preload-152830 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node no-preload-152830 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node no-preload-152830 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node no-preload-152830 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node no-preload-152830 event: Registered Node no-preload-152830 in Controller
	
	
	==> dmesg <==
	[  +0.052783] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044949] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.828945] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.586167] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.669274] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.355585] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.060627] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070294] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.174710] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.133184] systemd-fstab-generator[690]: Ignoring "noauto" option for root device
	[  +0.291940] systemd-fstab-generator[720]: Ignoring "noauto" option for root device
	[ +16.459418] systemd-fstab-generator[1244]: Ignoring "noauto" option for root device
	[  +0.065629] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.831156] systemd-fstab-generator[1367]: Ignoring "noauto" option for root device
	[  +4.644779] kauditd_printk_skb: 100 callbacks suppressed
	[Jun17 12:03] kauditd_printk_skb: 89 callbacks suppressed
	[Jun17 12:07] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.150396] systemd-fstab-generator[4053]: Ignoring "noauto" option for root device
	[  +4.647290] kauditd_printk_skb: 53 callbacks suppressed
	[  +1.412647] systemd-fstab-generator[4375]: Ignoring "noauto" option for root device
	[ +13.980166] systemd-fstab-generator[4575]: Ignoring "noauto" option for root device
	[  +0.113914] kauditd_printk_skb: 14 callbacks suppressed
	[Jun17 12:09] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [5833a84b69a3ed88b016a93eab2b3859871cb27f7331ae2296a7db6fd65e96f7] <==
	{"level":"info","ts":"2024-06-17T12:07:41.750936Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"db356cbc19811e0e","initial-advertise-peer-urls":["https://192.168.39.173:2380"],"listen-peer-urls":["https://192.168.39.173:2380"],"advertise-client-urls":["https://192.168.39.173:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.173:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-17T12:07:41.755742Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-17T12:07:41.75803Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.173:2380"}
	{"level":"info","ts":"2024-06-17T12:07:41.765954Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.173:2380"}
	{"level":"info","ts":"2024-06-17T12:07:41.898605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db356cbc19811e0e is starting a new election at term 1"}
	{"level":"info","ts":"2024-06-17T12:07:41.89875Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db356cbc19811e0e became pre-candidate at term 1"}
	{"level":"info","ts":"2024-06-17T12:07:41.898799Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db356cbc19811e0e received MsgPreVoteResp from db356cbc19811e0e at term 1"}
	{"level":"info","ts":"2024-06-17T12:07:41.89883Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db356cbc19811e0e became candidate at term 2"}
	{"level":"info","ts":"2024-06-17T12:07:41.898855Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db356cbc19811e0e received MsgVoteResp from db356cbc19811e0e at term 2"}
	{"level":"info","ts":"2024-06-17T12:07:41.898882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db356cbc19811e0e became leader at term 2"}
	{"level":"info","ts":"2024-06-17T12:07:41.898907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: db356cbc19811e0e elected leader db356cbc19811e0e at term 2"}
	{"level":"info","ts":"2024-06-17T12:07:41.903792Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"db356cbc19811e0e","local-member-attributes":"{Name:no-preload-152830 ClientURLs:[https://192.168.39.173:2379]}","request-path":"/0/members/db356cbc19811e0e/attributes","cluster-id":"a25ac6d8ed10a2a9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-17T12:07:41.903878Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-17T12:07:41.904275Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-17T12:07:41.904592Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-17T12:07:41.911906Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a25ac6d8ed10a2a9","local-member-id":"db356cbc19811e0e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-17T12:07:41.919133Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-17T12:07:41.920231Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-17T12:07:41.912149Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.173:2379"}
	{"level":"info","ts":"2024-06-17T12:07:41.933574Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-17T12:07:41.947408Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-17T12:07:41.950568Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-17T12:17:42.172082Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":715}
	{"level":"info","ts":"2024-06-17T12:17:42.184585Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":715,"took":"11.81294ms","hash":1062064089,"current-db-size-bytes":2195456,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2195456,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-06-17T12:17:42.184652Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1062064089,"revision":715,"compact-revision":-1}
	
	
	==> kernel <==
	 12:22:31 up 20 min,  0 users,  load average: 0.24, 0.20, 0.18
	Linux no-preload-152830 5.10.207 #1 SMP Tue Jun 11 00:16:05 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [de4bddebe0087f3f022dfeafa27d6746d6447687007d3334d4251031b8f6aabc] <==
	I0617 12:15:44.693903       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0617 12:17:43.700140       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:17:43.700259       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0617 12:17:44.701721       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:17:44.701971       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0617 12:17:44.702011       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0617 12:17:44.702483       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:17:44.702627       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0617 12:17:44.703592       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0617 12:18:44.702356       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:18:44.702665       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0617 12:18:44.702700       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0617 12:18:44.703909       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:18:44.703937       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0617 12:18:44.703945       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0617 12:20:44.703917       1 handler_proxy.go:93] no RequestInfo found in the context
	W0617 12:20:44.704039       1 handler_proxy.go:93] no RequestInfo found in the context
	E0617 12:20:44.704064       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0617 12:20:44.704076       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0617 12:20:44.704146       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0617 12:20:44.705113       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [b82613491050410755d245f7ea0fd61cc70f9f438300c01e6a12f663ad434eee] <==
	I0617 12:17:00.700770       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:17:30.217715       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:17:30.709473       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:18:00.223979       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:18:00.729674       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:18:30.229170       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:18:30.737589       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:19:00.234878       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:19:00.747016       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0617 12:19:02.542690       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="229.294µs"
	I0617 12:19:17.538089       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="120.076µs"
	E0617 12:19:30.240279       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:19:30.755493       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:20:00.246361       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:20:00.772291       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:20:30.251903       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:20:30.779932       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:21:00.256777       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:21:00.787022       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:21:30.263237       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:21:30.795152       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:22:00.269965       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:22:00.818808       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0617 12:22:30.275255       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0617 12:22:30.829153       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [4d420ac4be70e18bcc188db3f69ef03797656c819429b0bc4fa68a2cf25cba17] <==
	I0617 12:08:02.089979       1 server_linux.go:69] "Using iptables proxy"
	I0617 12:08:02.124894       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.173"]
	I0617 12:08:02.239659       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0617 12:08:02.239709       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0617 12:08:02.239725       1 server_linux.go:165] "Using iptables Proxier"
	I0617 12:08:02.246452       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0617 12:08:02.246951       1 server.go:872] "Version info" version="v1.30.1"
	I0617 12:08:02.246968       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0617 12:08:02.248449       1 config.go:192] "Starting service config controller"
	I0617 12:08:02.248470       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0617 12:08:02.248589       1 config.go:101] "Starting endpoint slice config controller"
	I0617 12:08:02.248601       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0617 12:08:02.249321       1 config.go:319] "Starting node config controller"
	I0617 12:08:02.249327       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0617 12:08:02.349335       1 shared_informer.go:320] Caches are synced for service config
	I0617 12:08:02.349433       1 shared_informer.go:320] Caches are synced for node config
	I0617 12:08:02.349458       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [bf31b741f07971feda2bdee30e1b474c535befaa7310f7e6f31405b62526b2af] <==
	W0617 12:07:44.532314       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0617 12:07:44.532364       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0617 12:07:44.565677       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0617 12:07:44.565721       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0617 12:07:44.620468       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0617 12:07:44.620495       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0617 12:07:44.658021       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0617 12:07:44.658132       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0617 12:07:44.689495       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0617 12:07:44.689649       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0617 12:07:44.743239       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0617 12:07:44.743311       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0617 12:07:44.745941       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0617 12:07:44.745962       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0617 12:07:44.799121       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0617 12:07:44.799254       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0617 12:07:44.802706       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0617 12:07:44.802898       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0617 12:07:44.844998       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0617 12:07:44.845169       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0617 12:07:44.952326       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0617 12:07:44.953114       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0617 12:07:45.046099       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0617 12:07:45.046250       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0617 12:07:47.114628       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 17 12:19:46 no-preload-152830 kubelet[4382]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 12:19:46 no-preload-152830 kubelet[4382]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 17 12:19:47 no-preload-152830 kubelet[4382]: E0617 12:19:47.519474    4382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zllzk" podUID="e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1"
	Jun 17 12:20:00 no-preload-152830 kubelet[4382]: E0617 12:20:00.522785    4382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zllzk" podUID="e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1"
	Jun 17 12:20:12 no-preload-152830 kubelet[4382]: E0617 12:20:12.522214    4382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zllzk" podUID="e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1"
	Jun 17 12:20:27 no-preload-152830 kubelet[4382]: E0617 12:20:27.519324    4382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zllzk" podUID="e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1"
	Jun 17 12:20:42 no-preload-152830 kubelet[4382]: E0617 12:20:42.520673    4382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zllzk" podUID="e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1"
	Jun 17 12:20:46 no-preload-152830 kubelet[4382]: E0617 12:20:46.541415    4382 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 17 12:20:46 no-preload-152830 kubelet[4382]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 17 12:20:46 no-preload-152830 kubelet[4382]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 17 12:20:46 no-preload-152830 kubelet[4382]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 12:20:46 no-preload-152830 kubelet[4382]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 17 12:20:57 no-preload-152830 kubelet[4382]: E0617 12:20:57.518592    4382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zllzk" podUID="e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1"
	Jun 17 12:21:11 no-preload-152830 kubelet[4382]: E0617 12:21:11.519214    4382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zllzk" podUID="e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1"
	Jun 17 12:21:26 no-preload-152830 kubelet[4382]: E0617 12:21:26.521392    4382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zllzk" podUID="e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1"
	Jun 17 12:21:38 no-preload-152830 kubelet[4382]: E0617 12:21:38.523062    4382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zllzk" podUID="e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1"
	Jun 17 12:21:46 no-preload-152830 kubelet[4382]: E0617 12:21:46.541472    4382 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 17 12:21:46 no-preload-152830 kubelet[4382]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 17 12:21:46 no-preload-152830 kubelet[4382]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 17 12:21:46 no-preload-152830 kubelet[4382]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 17 12:21:46 no-preload-152830 kubelet[4382]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 17 12:21:51 no-preload-152830 kubelet[4382]: E0617 12:21:51.519213    4382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zllzk" podUID="e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1"
	Jun 17 12:22:02 no-preload-152830 kubelet[4382]: E0617 12:22:02.520797    4382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zllzk" podUID="e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1"
	Jun 17 12:22:15 no-preload-152830 kubelet[4382]: E0617 12:22:15.519414    4382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zllzk" podUID="e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1"
	Jun 17 12:22:26 no-preload-152830 kubelet[4382]: E0617 12:22:26.519467    4382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zllzk" podUID="e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1"
	
	
	==> storage-provisioner [9bded990e0ce1c6be7f1b1465276f4a8754154adf288c943ec48740d65f95d32] <==
	I0617 12:08:02.054480       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0617 12:08:02.071455       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0617 12:08:02.071556       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0617 12:08:02.091682       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0617 12:08:02.091861       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-152830_6e663f14-4907-4466-bca1-b193c05941a1!
	I0617 12:08:02.092839       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8e075b74-d8e9-4bee-bf4a-cef017cda12a", APIVersion:"v1", ResourceVersion:"430", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-152830_6e663f14-4907-4466-bca1-b193c05941a1 became leader
	I0617 12:08:02.193941       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-152830_6e663f14-4907-4466-bca1-b193c05941a1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-152830 -n no-preload-152830
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-152830 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-zllzk
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-152830 describe pod metrics-server-569cc877fc-zllzk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-152830 describe pod metrics-server-569cc877fc-zllzk: exit status 1 (77.195155ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-zllzk" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-152830 describe pod metrics-server-569cc877fc-zllzk: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (322.93s)
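A hedged manual re-check of this post-mortem, outside the captured test output: the describe above most likely returns NotFound because it omits a namespace (so it queries default), while the non-running pod reported at helpers_test.go:272 lives in kube-system, as the node description earlier shows. Assuming the no-preload-152830 context is still in the local kubeconfig, and that the owning deployment is named metrics-server (inferred from the replicaset name metrics-server-569cc877fc, not stated in the log), the same checks by hand would look roughly like:

  # list non-Running pods cluster-wide, mirroring the helpers_test.go:261 query
  kubectl --context no-preload-152830 get po -A --field-selector=status.phase!=Running
  # describe the stuck pod in its actual namespace
  kubectl --context no-preload-152830 -n kube-system describe pod metrics-server-569cc877fc-zllzk
  # confirm the image reference the kubelet keeps failing to pull (fake.domain/... per the kubelet log above)
  kubectl --context no-preload-152830 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'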

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (179.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.164:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.164:8443: connect: connection refused
[the warning above repeated verbatim on each subsequent poll attempt while the API server stayed unreachable]
E0617 12:19:54.220819  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
[the same "connection refused" warning repeated on every poll attempt]
E0617 12:21:51.169817  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-003661 -n old-k8s-version-003661
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-003661 -n old-k8s-version-003661: exit status 2 (231.499358ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-003661" apiserver is not running, skipping kubectl commands (state="Stopped")
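For context, the --format={{.APIServer}} argument used in the status checks above is a Go text/template rendered against minikube's status structure, which is why a single word such as "Stopped" comes back on stdout. A minimal sketch of that templating mechanism, using a hypothetical Status struct with only the two fields exercised in this report (Host and APIServer), not the real minikube type:

package main

import (
	"os"
	"text/template"
)

// Status is a hypothetical stand-in for the struct minikube renders with
// --format; only the two field names queried in this report are included.
type Status struct {
	Host      string
	APIServer string
}

func main() {
	// Same template string as the failing check: --format={{.APIServer}}
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))

	// Values mirroring the post-mortem output: host Running, apiserver Stopped.
	s := Status{Host: "Running", APIServer: "Stopped"}

	if err := tmpl.Execute(os.Stdout, s); err != nil { // prints "Stopped"
		panic(err)
	}
}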
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-003661 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-003661 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.493µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-003661 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
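The connection-refused warnings earlier in this test are produced by a poll that lists pods filtered by the k8s-app=kubernetes-dashboard label against the profile's API server. A minimal client-go sketch of that kind of check follows; the kubeconfig path and retry count are placeholders, and this is not the harness code itself:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig; the harness uses the context created for the profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// List pods by label and surface the transport error when the API server
	// is unreachable, which is what the repeated WARNING lines show.
	for i := 0; i < 5; i++ {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			fmt.Printf("WARNING: pod list returned: %v\n", err) // e.g. connection refused
			time.Sleep(10 * time.Second)
			continue
		}
		fmt.Printf("found %d dashboard pods\n", len(pods.Items))
		return
	}
}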
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-003661 -n old-k8s-version-003661
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-003661 -n old-k8s-version-003661: exit status 2 (222.818688ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-003661 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-003661 logs -n 25: (1.658738444s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-514753                              | cert-expiration-514753       | jenkins | v1.33.1 | 17 Jun 24 11:52 UTC | 17 Jun 24 11:52 UTC |
	| start   | -p embed-certs-136195                                  | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:52 UTC | 17 Jun 24 11:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-152830             | no-preload-152830            | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC | 17 Jun 24 11:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-152830                                   | no-preload-152830            | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-136195            | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC | 17 Jun 24 11:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-136195                                  | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:54 UTC | 17 Jun 24 11:55 UTC |
	| start   | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-717156                           | kubernetes-upgrade-717156    | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:55 UTC |
	| delete  | -p                                                     | disable-driver-mounts-960277 | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:55 UTC |
	|         | disable-driver-mounts-960277                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:55 UTC | 17 Jun 24 11:56 UTC |
	|         | default-k8s-diff-port-991309                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-152830                  | no-preload-152830            | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-152830                                   | no-preload-152830            | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC | 17 Jun 24 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-136195                 | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-003661        | old-k8s-version-003661       | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-136195                                  | embed-certs-136195           | jenkins | v1.33.1 | 17 Jun 24 11:56 UTC | 17 Jun 24 12:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-991309  | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:57 UTC | 17 Jun 24 11:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:57 UTC |                     |
	|         | default-k8s-diff-port-991309                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-003661                              | old-k8s-version-003661       | jenkins | v1.33.1 | 17 Jun 24 11:58 UTC | 17 Jun 24 11:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-003661             | old-k8s-version-003661       | jenkins | v1.33.1 | 17 Jun 24 11:58 UTC | 17 Jun 24 11:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-003661                              | old-k8s-version-003661       | jenkins | v1.33.1 | 17 Jun 24 11:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-991309       | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-991309 | jenkins | v1.33.1 | 17 Jun 24 11:59 UTC | 17 Jun 24 12:06 UTC |
	|         | default-k8s-diff-port-991309                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/17 11:59:37
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0617 11:59:37.428028  166103 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:59:37.428266  166103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:59:37.428274  166103 out.go:304] Setting ErrFile to fd 2...
	I0617 11:59:37.428279  166103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:59:37.428472  166103 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:59:37.429026  166103 out.go:298] Setting JSON to false
	I0617 11:59:37.429968  166103 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6124,"bootTime":1718619453,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0617 11:59:37.430026  166103 start.go:139] virtualization: kvm guest
	I0617 11:59:37.432171  166103 out.go:177] * [default-k8s-diff-port-991309] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0617 11:59:37.433521  166103 out.go:177]   - MINIKUBE_LOCATION=19084
	I0617 11:59:37.433548  166103 notify.go:220] Checking for updates...
	I0617 11:59:37.434850  166103 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 11:59:37.436099  166103 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 11:59:37.437362  166103 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 11:59:37.438535  166103 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0617 11:59:37.439644  166103 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 11:59:37.441113  166103 config.go:182] Loaded profile config "default-k8s-diff-port-991309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:59:37.441563  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:59:37.441645  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:59:37.456875  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45565
	I0617 11:59:37.457306  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:59:37.457839  166103 main.go:141] libmachine: Using API Version  1
	I0617 11:59:37.457861  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:59:37.458188  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:59:37.458381  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 11:59:37.458626  166103 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 11:59:37.458927  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:59:37.458971  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:59:37.474024  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45165
	I0617 11:59:37.474411  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:59:37.474873  166103 main.go:141] libmachine: Using API Version  1
	I0617 11:59:37.474899  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:59:37.475199  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:59:37.475383  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 11:59:37.507955  166103 out.go:177] * Using the kvm2 driver based on existing profile
	I0617 11:59:37.509134  166103 start.go:297] selected driver: kvm2
	I0617 11:59:37.509148  166103 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-991309 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-991309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:59:37.509249  166103 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 11:59:37.509927  166103 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:59:37.510004  166103 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19084-112967/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0617 11:59:37.525340  166103 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0617 11:59:37.525701  166103 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 11:59:37.525761  166103 cni.go:84] Creating CNI manager for ""
	I0617 11:59:37.525779  166103 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 11:59:37.525812  166103 start.go:340] cluster config:
	{Name:default-k8s-diff-port-991309 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-991309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:59:37.525910  166103 iso.go:125] acquiring lock: {Name:mk4a199ad46ed9ee04de7b54caf7cc64218fe80c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:59:37.527756  166103 out.go:177] * Starting "default-k8s-diff-port-991309" primary control-plane node in "default-k8s-diff-port-991309" cluster
	I0617 11:59:36.391800  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 11:59:37.529104  166103 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 11:59:37.529159  166103 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0617 11:59:37.529171  166103 cache.go:56] Caching tarball of preloaded images
	I0617 11:59:37.529246  166103 preload.go:173] Found /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0617 11:59:37.529256  166103 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0617 11:59:37.529368  166103 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/config.json ...
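	The cluster config dumped above is persisted to the profile's config.json named in the "Saving config to ..." line. When triaging a failed profile by hand, the same fields can be read back out of that file; a small sketch, assuming the JSON keys simply mirror the field names visible in the dump (only a few of them are decoded here):

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// profileConfig decodes only the handful of keys inspected below; the names
	// are assumed to match the field names shown in the config dump above.
	type profileConfig struct {
		Name             string
		Driver           string
		KubernetesConfig struct {
			KubernetesVersion string
			ContainerRuntime  string
		}
		Nodes []struct {
			IP   string
			Port int
		}
	}

	func main() {
		// Path taken from the log line above; adjust for your own minikube home.
		data, err := os.ReadFile("/home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/config.json")
		if err != nil {
			panic(err)
		}
		var cfg profileConfig
		if err := json.Unmarshal(data, &cfg); err != nil {
			panic(err)
		}
		for _, n := range cfg.Nodes {
			fmt.Printf("%s: %s/%s on %s:%d\n",
				cfg.Name, cfg.KubernetesConfig.KubernetesVersion,
				cfg.KubernetesConfig.ContainerRuntime, n.IP, n.Port)
		}
	}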
	I0617 11:59:37.529565  166103 start.go:360] acquireMachinesLock for default-k8s-diff-port-991309: {Name:mk519b8956d160a9d2b042f25b899a5ee0efa72e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 11:59:42.471684  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 11:59:45.543735  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 11:59:51.623725  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 11:59:54.695811  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:00.775775  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:03.847736  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:09.927768  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:12.999728  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:19.079809  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:22.151737  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:28.231763  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:31.303775  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:37.383783  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:40.455809  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:46.535757  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:49.607769  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:55.687772  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:00:58.759722  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:01:04.839736  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:01:07.911780  164809 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.173:22: connect: no route to host
	I0617 12:01:10.916735  165060 start.go:364] duration metric: took 4m27.471308215s to acquireMachinesLock for "embed-certs-136195"
	I0617 12:01:10.916814  165060 start.go:96] Skipping create...Using existing machine configuration
	I0617 12:01:10.916827  165060 fix.go:54] fixHost starting: 
	I0617 12:01:10.917166  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:10.917203  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:10.932217  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43235
	I0617 12:01:10.932742  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:10.933241  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:10.933261  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:10.933561  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:10.933766  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:10.933939  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetState
	I0617 12:01:10.935452  165060 fix.go:112] recreateIfNeeded on embed-certs-136195: state=Stopped err=<nil>
	I0617 12:01:10.935660  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	W0617 12:01:10.935831  165060 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 12:01:10.937510  165060 out.go:177] * Restarting existing kvm2 VM for "embed-certs-136195" ...
	I0617 12:01:10.938708  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Start
	I0617 12:01:10.938873  165060 main.go:141] libmachine: (embed-certs-136195) Ensuring networks are active...
	I0617 12:01:10.939602  165060 main.go:141] libmachine: (embed-certs-136195) Ensuring network default is active
	I0617 12:01:10.939896  165060 main.go:141] libmachine: (embed-certs-136195) Ensuring network mk-embed-certs-136195 is active
	I0617 12:01:10.940260  165060 main.go:141] libmachine: (embed-certs-136195) Getting domain xml...
	I0617 12:01:10.940881  165060 main.go:141] libmachine: (embed-certs-136195) Creating domain...
	I0617 12:01:12.136267  165060 main.go:141] libmachine: (embed-certs-136195) Waiting to get IP...
	I0617 12:01:12.137303  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:12.137692  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:12.137777  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:12.137684  166451 retry.go:31] will retry after 261.567272ms: waiting for machine to come up
	I0617 12:01:12.401390  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:12.401845  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:12.401873  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:12.401816  166451 retry.go:31] will retry after 332.256849ms: waiting for machine to come up
	I0617 12:01:12.735421  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:12.735842  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:12.735872  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:12.735783  166451 retry.go:31] will retry after 457.313241ms: waiting for machine to come up
	I0617 12:01:13.194621  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:13.195073  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:13.195091  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:13.195036  166451 retry.go:31] will retry after 539.191177ms: waiting for machine to come up
	I0617 12:01:10.914315  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 12:01:10.914353  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetMachineName
	I0617 12:01:10.914690  164809 buildroot.go:166] provisioning hostname "no-preload-152830"
	I0617 12:01:10.914716  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetMachineName
	I0617 12:01:10.914905  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:01:10.916557  164809 machine.go:97] duration metric: took 4m37.418351206s to provisionDockerMachine
	I0617 12:01:10.916625  164809 fix.go:56] duration metric: took 4m37.438694299s for fixHost
	I0617 12:01:10.916634  164809 start.go:83] releasing machines lock for "no-preload-152830", held for 4m37.438726092s
	W0617 12:01:10.916653  164809 start.go:713] error starting host: provision: host is not running
	W0617 12:01:10.916750  164809 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0617 12:01:10.916763  164809 start.go:728] Will try again in 5 seconds ...
	I0617 12:01:13.735708  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:13.736155  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:13.736184  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:13.736096  166451 retry.go:31] will retry after 754.965394ms: waiting for machine to come up
	I0617 12:01:14.493211  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:14.493598  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:14.493628  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:14.493544  166451 retry.go:31] will retry after 786.125188ms: waiting for machine to come up
	I0617 12:01:15.281505  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:15.281975  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:15.282008  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:15.281939  166451 retry.go:31] will retry after 1.091514617s: waiting for machine to come up
	I0617 12:01:16.375391  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:16.375904  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:16.375935  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:16.375820  166451 retry.go:31] will retry after 1.34601641s: waiting for machine to come up
	I0617 12:01:17.724108  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:17.724453  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:17.724477  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:17.724418  166451 retry.go:31] will retry after 1.337616605s: waiting for machine to come up
	I0617 12:01:15.918256  164809 start.go:360] acquireMachinesLock for no-preload-152830: {Name:mk519b8956d160a9d2b042f25b899a5ee0efa72e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 12:01:19.063677  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:19.064210  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:19.064243  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:19.064144  166451 retry.go:31] will retry after 1.914267639s: waiting for machine to come up
	I0617 12:01:20.979644  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:20.980124  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:20.980150  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:20.980072  166451 retry.go:31] will retry after 2.343856865s: waiting for machine to come up
	I0617 12:01:23.326506  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:23.326878  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:23.326922  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:23.326861  166451 retry.go:31] will retry after 2.450231017s: waiting for machine to come up
	I0617 12:01:25.780501  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:25.780886  165060 main.go:141] libmachine: (embed-certs-136195) DBG | unable to find current IP address of domain embed-certs-136195 in network mk-embed-certs-136195
	I0617 12:01:25.780913  165060 main.go:141] libmachine: (embed-certs-136195) DBG | I0617 12:01:25.780825  166451 retry.go:31] will retry after 3.591107926s: waiting for machine to come up
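	The "will retry after ..." lines above come from a wait-for-IP loop that backs off with growing, jittered delays until the libvirt DHCP lease for the domain appears. A generic sketch of that pattern is below; it is not minikube's retry package, just the shape the log suggests, and lookupIP is a hypothetical stand-in for the DHCP-lease query:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is a hypothetical stand-in for querying the libvirt DHCP leases
	// for the domain's MAC address; it fails until the guest has booted.
	func lookupIP() (string, error) {
		return "", errors.New("unable to find current IP address of domain")
	}

	// waitForIP retries with a growing, jittered delay, mirroring the
	// "will retry after ..." progression in the log above.
	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			delay = delay * 3 / 2 // grow the base delay each attempt
		}
		return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
	}

	func main() {
		if ip, err := waitForIP(5 * time.Second); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("got IP:", ip)
		}
	}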
	I0617 12:01:30.728529  165698 start.go:364] duration metric: took 3m12.647041864s to acquireMachinesLock for "old-k8s-version-003661"
	I0617 12:01:30.728602  165698 start.go:96] Skipping create...Using existing machine configuration
	I0617 12:01:30.728613  165698 fix.go:54] fixHost starting: 
	I0617 12:01:30.729036  165698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:30.729090  165698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:30.746528  165698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35355
	I0617 12:01:30.746982  165698 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:30.747493  165698 main.go:141] libmachine: Using API Version  1
	I0617 12:01:30.747516  165698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:30.747847  165698 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:30.748060  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:30.748186  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetState
	I0617 12:01:30.750035  165698 fix.go:112] recreateIfNeeded on old-k8s-version-003661: state=Stopped err=<nil>
	I0617 12:01:30.750072  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	W0617 12:01:30.750206  165698 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 12:01:30.752196  165698 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-003661" ...
	I0617 12:01:29.375875  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.376372  165060 main.go:141] libmachine: (embed-certs-136195) Found IP for machine: 192.168.72.199
	I0617 12:01:29.376407  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has current primary IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.376430  165060 main.go:141] libmachine: (embed-certs-136195) Reserving static IP address...
	I0617 12:01:29.376754  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "embed-certs-136195", mac: "52:54:00:f2:27:84", ip: "192.168.72.199"} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.376788  165060 main.go:141] libmachine: (embed-certs-136195) Reserved static IP address: 192.168.72.199
	I0617 12:01:29.376800  165060 main.go:141] libmachine: (embed-certs-136195) DBG | skip adding static IP to network mk-embed-certs-136195 - found existing host DHCP lease matching {name: "embed-certs-136195", mac: "52:54:00:f2:27:84", ip: "192.168.72.199"}
	I0617 12:01:29.376811  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Getting to WaitForSSH function...
	I0617 12:01:29.376820  165060 main.go:141] libmachine: (embed-certs-136195) Waiting for SSH to be available...
	I0617 12:01:29.378811  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.379121  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.379151  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.379289  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Using SSH client type: external
	I0617 12:01:29.379321  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa (-rw-------)
	I0617 12:01:29.379354  165060 main.go:141] libmachine: (embed-certs-136195) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.199 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 12:01:29.379368  165060 main.go:141] libmachine: (embed-certs-136195) DBG | About to run SSH command:
	I0617 12:01:29.379381  165060 main.go:141] libmachine: (embed-certs-136195) DBG | exit 0
	I0617 12:01:29.503819  165060 main.go:141] libmachine: (embed-certs-136195) DBG | SSH cmd err, output: <nil>: 
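	The "Using SSH client type: external" block above shows the driver shelling out to /usr/bin/ssh with a fixed set of -o options and running "exit 0" purely to learn when sshd in the guest is accepting connections. A small sketch of that probe with os/exec follows; the option list and key path are copied from the DBG line, the loop and timing are illustrative only:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshReady runs "exit 0" through the system ssh binary with options like
	// those logged above; a zero exit status means sshd is up and the key works.
	func sshReady(user, host, keyPath string) bool {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			fmt.Sprintf("%s@%s", user, host),
			"exit 0",
		}
		return exec.Command("/usr/bin/ssh", args...).Run() == nil
	}

	func main() {
		// Values taken from the log; the key belongs to the embed-certs-136195 profile.
		for !sshReady("docker", "192.168.72.199",
			"/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa") {
			time.Sleep(2 * time.Second)
		}
		fmt.Println("SSH is available")
	}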
	I0617 12:01:29.504207  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetConfigRaw
	I0617 12:01:29.504827  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetIP
	I0617 12:01:29.507277  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.507601  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.507635  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.507878  165060 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/config.json ...
	I0617 12:01:29.508102  165060 machine.go:94] provisionDockerMachine start ...
	I0617 12:01:29.508125  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:29.508333  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:29.510390  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.510636  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.510656  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.510761  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:29.510924  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.511082  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.511242  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:29.511404  165060 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:29.511665  165060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0617 12:01:29.511680  165060 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 12:01:29.611728  165060 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0617 12:01:29.611759  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetMachineName
	I0617 12:01:29.611996  165060 buildroot.go:166] provisioning hostname "embed-certs-136195"
	I0617 12:01:29.612025  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetMachineName
	I0617 12:01:29.612194  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:29.614719  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.615085  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.615110  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.615251  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:29.615425  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.615565  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.615685  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:29.615881  165060 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:29.616066  165060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0617 12:01:29.616084  165060 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-136195 && echo "embed-certs-136195" | sudo tee /etc/hostname
	I0617 12:01:29.729321  165060 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-136195
	
	I0617 12:01:29.729347  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:29.731968  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.732314  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.732352  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.732582  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:29.732820  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.733001  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:29.733157  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:29.733312  165060 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:29.733471  165060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0617 12:01:29.733487  165060 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-136195' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-136195/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-136195' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 12:01:29.840083  165060 main.go:141] libmachine: SSH cmd err, output: <nil>: 
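For reference, the hostname step that just completed amounts to two commands run over SSH on the guest (both appear verbatim in the log above): set the transient and persistent hostname, then make sure /etc/hosts resolves it. Consolidated into one sketch, assuming a sudo-capable login such as minikube's "docker" SSH user:
	# set the hostname (as logged above)
	sudo hostname embed-certs-136195 && echo "embed-certs-136195" | sudo tee /etc/hostname
	# make sure /etc/hosts has a matching 127.0.1.1 entry
	if ! grep -xq '.*\sembed-certs-136195' /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-136195/g' /etc/hosts
	  else
	    echo '127.0.1.1 embed-certs-136195' | sudo tee -a /etc/hosts
	  fi
	fi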
	I0617 12:01:29.840110  165060 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 12:01:29.840145  165060 buildroot.go:174] setting up certificates
	I0617 12:01:29.840180  165060 provision.go:84] configureAuth start
	I0617 12:01:29.840199  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetMachineName
	I0617 12:01:29.840488  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetIP
	I0617 12:01:29.843096  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.843446  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.843487  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.843687  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:29.845627  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.845914  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:29.845940  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:29.846021  165060 provision.go:143] copyHostCerts
	I0617 12:01:29.846096  165060 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 12:01:29.846106  165060 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 12:01:29.846171  165060 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 12:01:29.846267  165060 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 12:01:29.846275  165060 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 12:01:29.846298  165060 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 12:01:29.846359  165060 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 12:01:29.846366  165060 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 12:01:29.846387  165060 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 12:01:29.846456  165060 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.embed-certs-136195 san=[127.0.0.1 192.168.72.199 embed-certs-136195 localhost minikube]
	I0617 12:01:30.076596  165060 provision.go:177] copyRemoteCerts
	I0617 12:01:30.076657  165060 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 12:01:30.076686  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.079269  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.079565  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.079588  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.079785  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.080016  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.080189  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.080316  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:30.161615  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 12:01:30.188790  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0617 12:01:30.215171  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0617 12:01:30.241310  165060 provision.go:87] duration metric: took 401.115469ms to configureAuth
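configureAuth regenerated the server certificate with the SANs listed above and pushed ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A minimal way to double-check the result over SSH; the openssl calls below are an illustration added here, not commands from the log:
	# inspect the pushed server certificate and its SANs (illustrative)
	sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName
	# confirm it verifies against the pushed CA
	sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem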
	I0617 12:01:30.241332  165060 buildroot.go:189] setting minikube options for container-runtime
	I0617 12:01:30.241529  165060 config.go:182] Loaded profile config "embed-certs-136195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:01:30.241602  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.244123  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.244427  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.244459  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.244584  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.244793  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.244999  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.245174  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.245340  165060 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:30.245497  165060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0617 12:01:30.245512  165060 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 12:01:30.498156  165060 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 12:01:30.498189  165060 machine.go:97] duration metric: took 990.071076ms to provisionDockerMachine
	I0617 12:01:30.498201  165060 start.go:293] postStartSetup for "embed-certs-136195" (driver="kvm2")
	I0617 12:01:30.498214  165060 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 12:01:30.498238  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:30.498580  165060 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 12:01:30.498605  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.501527  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.501912  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.501941  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.502054  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.502257  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.502423  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.502578  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:30.583151  165060 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 12:01:30.587698  165060 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 12:01:30.587722  165060 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 12:01:30.587819  165060 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 12:01:30.587940  165060 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 12:01:30.588078  165060 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 12:01:30.598234  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:01:30.622580  165060 start.go:296] duration metric: took 124.363651ms for postStartSetup
	I0617 12:01:30.622621  165060 fix.go:56] duration metric: took 19.705796191s for fixHost
	I0617 12:01:30.622645  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.625226  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.625637  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.625684  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.625821  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.626040  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.626229  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.626418  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.626613  165060 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:30.626839  165060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0617 12:01:30.626862  165060 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0617 12:01:30.728365  165060 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718625690.704643527
	
	I0617 12:01:30.728389  165060 fix.go:216] guest clock: 1718625690.704643527
	I0617 12:01:30.728396  165060 fix.go:229] Guest: 2024-06-17 12:01:30.704643527 +0000 UTC Remote: 2024-06-17 12:01:30.622625631 +0000 UTC m=+287.310804086 (delta=82.017896ms)
	I0617 12:01:30.728416  165060 fix.go:200] guest clock delta is within tolerance: 82.017896ms
	I0617 12:01:30.728421  165060 start.go:83] releasing machines lock for "embed-certs-136195", held for 19.811634749s
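The clock check above runs `date +%s.%N` on the guest and compares it with the host clock, accepting the ~82ms delta. A rough manual version of the same comparison; the ssh wrapper and the awk arithmetic are assumptions for illustration, only the date probe and the key path/user come from the log:
	# host side: measure guest-vs-host clock skew (illustrative)
	host_now=$(date +%s.%N)
	guest_now=$(ssh -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa \
	  docker@192.168.72.199 'date +%s.%N')
	awk -v h="$host_now" -v g="$guest_now" 'BEGIN { printf "guest - host delta: %.6fs\n", g - h }'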
	I0617 12:01:30.728445  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:30.728763  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetIP
	I0617 12:01:30.731414  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.731784  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.731816  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.731937  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:30.732504  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:30.732704  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:30.732761  165060 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 12:01:30.732826  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.732964  165060 ssh_runner.go:195] Run: cat /version.json
	I0617 12:01:30.732991  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:30.735854  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.736049  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.736278  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.736310  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:30.736334  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.736397  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:30.736579  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.736653  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:30.736777  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.736959  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.736972  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:30.737131  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:30.737188  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:30.737356  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:30.844295  165060 ssh_runner.go:195] Run: systemctl --version
	I0617 12:01:30.851958  165060 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 12:01:31.000226  165060 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 12:01:31.008322  165060 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 12:01:31.008397  165060 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 12:01:31.029520  165060 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 12:01:31.029547  165060 start.go:494] detecting cgroup driver to use...
	I0617 12:01:31.029617  165060 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 12:01:31.045505  165060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 12:01:31.059851  165060 docker.go:217] disabling cri-docker service (if available) ...
	I0617 12:01:31.059920  165060 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 12:01:31.075011  165060 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 12:01:31.089705  165060 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 12:01:31.204300  165060 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 12:01:31.342204  165060 docker.go:233] disabling docker service ...
	I0617 12:01:31.342290  165060 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 12:01:31.356945  165060 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 12:01:31.369786  165060 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 12:01:31.505817  165060 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 12:01:31.631347  165060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
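Before configuring CRI-O, minikube parks any competing container runtimes it finds on the guest: podman/bridge CNI configs are renamed out of the way, containerd is stopped, and cri-docker plus docker are stopped, disabled and masked. The individual commands are all in the log above; the sketch below just groups them (quoting of the find expression adjusted so it can be pasted into a shell directly):
	# rename other bridge/podman CNI configs so only minikube's CNI is used
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
	# stop and mask runtimes that would conflict with CRI-O
	sudo systemctl stop -f containerd
	sudo systemctl stop -f cri-docker.socket cri-docker.service
	sudo systemctl disable cri-docker.socket && sudo systemctl mask cri-docker.service
	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket && sudo systemctl mask docker.service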
	I0617 12:01:31.646048  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 12:01:31.664854  165060 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0617 12:01:31.664923  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.677595  165060 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 12:01:31.677678  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.690164  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.701482  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.712488  165060 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 12:01:31.723994  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.736805  165060 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.755001  165060 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:31.767226  165060 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 12:01:31.777894  165060 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 12:01:31.777954  165060 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 12:01:31.792644  165060 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 12:01:31.803267  165060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:01:31.920107  165060 ssh_runner.go:195] Run: sudo systemctl restart crio
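The run above is the CRI-O configuration pass: point crictl at the CRI-O socket, patch /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon_cgroup = "pod", unprivileged low ports via default_sysctls), load br_netfilter, enable IP forwarding, then restart the runtime. The main edits condensed into one pass (commands taken from the log; only the CONF shorthand variable is added):
	# point crictl at CRI-O
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# patch the drop-in config CRI-O reads
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	sudo grep -q '^ *default_sysctls' "$CONF" || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
	# kernel prerequisites, then restart the runtime
	sudo modprobe br_netfilter
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio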
	I0617 12:01:32.067833  165060 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 12:01:32.067904  165060 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 12:01:32.072818  165060 start.go:562] Will wait 60s for crictl version
	I0617 12:01:32.072881  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:01:32.076782  165060 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 12:01:32.116635  165060 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 12:01:32.116709  165060 ssh_runner.go:195] Run: crio --version
	I0617 12:01:32.148094  165060 ssh_runner.go:195] Run: crio --version
	I0617 12:01:32.176924  165060 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0617 12:01:30.753437  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .Start
	I0617 12:01:30.753608  165698 main.go:141] libmachine: (old-k8s-version-003661) Ensuring networks are active...
	I0617 12:01:30.754272  165698 main.go:141] libmachine: (old-k8s-version-003661) Ensuring network default is active
	I0617 12:01:30.754600  165698 main.go:141] libmachine: (old-k8s-version-003661) Ensuring network mk-old-k8s-version-003661 is active
	I0617 12:01:30.754967  165698 main.go:141] libmachine: (old-k8s-version-003661) Getting domain xml...
	I0617 12:01:30.755739  165698 main.go:141] libmachine: (old-k8s-version-003661) Creating domain...
	I0617 12:01:32.029080  165698 main.go:141] libmachine: (old-k8s-version-003661) Waiting to get IP...
	I0617 12:01:32.029902  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:32.030401  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:32.030477  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:32.030384  166594 retry.go:31] will retry after 191.846663ms: waiting for machine to come up
	I0617 12:01:32.223912  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:32.224300  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:32.224328  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:32.224276  166594 retry.go:31] will retry after 341.806498ms: waiting for machine to come up
	I0617 12:01:32.568066  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:32.568648  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:32.568682  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:32.568575  166594 retry.go:31] will retry after 359.779948ms: waiting for machine to come up
	I0617 12:01:32.930210  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:32.930652  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:32.930675  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:32.930604  166594 retry.go:31] will retry after 548.549499ms: waiting for machine to come up
	I0617 12:01:32.178076  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetIP
	I0617 12:01:32.181127  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:32.181524  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:32.181553  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:32.181778  165060 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0617 12:01:32.186998  165060 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
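The one-liner above rewrites /etc/hosts so host.minikube.internal resolves to the host-side gateway (192.168.72.1). The same edit spelled out for readability:
	# same logic as the logged one-liner, step by step
	grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/h.$$
	printf '192.168.72.1\thost.minikube.internal\n' >> /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts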
	I0617 12:01:32.203033  165060 kubeadm.go:877] updating cluster {Name:embed-certs-136195 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:embed-certs-136195 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.199 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 12:01:32.203142  165060 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 12:01:32.203183  165060 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:01:32.245712  165060 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0617 12:01:32.245796  165060 ssh_runner.go:195] Run: which lz4
	I0617 12:01:32.250113  165060 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0617 12:01:32.254486  165060 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0617 12:01:32.254511  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0617 12:01:33.480493  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:33.480965  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:33.481004  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:33.480931  166594 retry.go:31] will retry after 636.044066ms: waiting for machine to come up
	I0617 12:01:34.118880  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:34.119361  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:34.119394  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:34.119299  166594 retry.go:31] will retry after 637.085777ms: waiting for machine to come up
	I0617 12:01:34.757614  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:34.758097  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:34.758126  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:34.758051  166594 retry.go:31] will retry after 921.652093ms: waiting for machine to come up
	I0617 12:01:35.681846  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:35.682324  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:35.682351  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:35.682269  166594 retry.go:31] will retry after 1.1106801s: waiting for machine to come up
	I0617 12:01:36.794411  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:36.794845  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:36.794869  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:36.794793  166594 retry.go:31] will retry after 1.323395845s: waiting for machine to come up
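In parallel, the old-k8s-version-003661 VM has just been (re)defined and minikube is polling until the domain picks up a DHCP lease. If you want to watch the same thing by hand, one option (an illustration, not something the log runs) is to ask libvirt for the leases on that network and look for the MAC from the log:
	# illustrative: watch for the domain's MAC to receive a lease on the minikube network
	watch -n 2 "virsh --connect qemu:///system net-dhcp-leases mk-old-k8s-version-003661 | grep -i 52:54:00:76:66:a0"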
	I0617 12:01:33.776867  165060 crio.go:462] duration metric: took 1.526763522s to copy over tarball
	I0617 12:01:33.776955  165060 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0617 12:01:35.994216  165060 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.217222149s)
	I0617 12:01:35.994246  165060 crio.go:469] duration metric: took 2.217348025s to extract the tarball
	I0617 12:01:35.994255  165060 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0617 12:01:36.034978  165060 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:01:36.087255  165060 crio.go:514] all images are preloaded for cri-o runtime.
	I0617 12:01:36.087281  165060 cache_images.go:84] Images are preloaded, skipping loading
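Because the freshly provisioned guest had no images, minikube copied the ~395 MB preload tarball over and unpacked it into /var, after which the second `crictl images` run confirms everything needed for v1.30.1 is present. The extraction step isolated from the log (only the final grep is an added illustration):
	# unpack the preload that was scp'd to the guest, then remove it (as logged)
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm -f /preloaded.tar.lz4
	# illustrative spot-check that the control-plane images are now in CRI-O's store
	sudo crictl images | grep kube-apiserver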
	I0617 12:01:36.087291  165060 kubeadm.go:928] updating node { 192.168.72.199 8443 v1.30.1 crio true true} ...
	I0617 12:01:36.087447  165060 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-136195 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.199
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:embed-certs-136195 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 12:01:36.087551  165060 ssh_runner.go:195] Run: crio config
	I0617 12:01:36.130409  165060 cni.go:84] Creating CNI manager for ""
	I0617 12:01:36.130433  165060 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:01:36.130449  165060 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 12:01:36.130479  165060 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.199 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-136195 NodeName:embed-certs-136195 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.199"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.199 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0617 12:01:36.130633  165060 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.199
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-136195"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.199
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.199"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0617 12:01:36.130724  165060 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0617 12:01:36.141027  165060 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 12:01:36.141110  165060 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 12:01:36.150748  165060 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0617 12:01:36.167282  165060 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 12:01:36.183594  165060 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0617 12:01:36.202494  165060 ssh_runner.go:195] Run: grep 192.168.72.199	control-plane.minikube.internal$ /etc/hosts
	I0617 12:01:36.206515  165060 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.199	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:01:36.218598  165060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:01:36.344280  165060 ssh_runner.go:195] Run: sudo systemctl start kubelet
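At this point the kubelet drop-in, unit file and the rendered kubeadm.yaml.new have been written to the guest and the kubelet has been restarted. Two quick, purely illustrative checks (not commands from the log), assuming kubeadm v1.30.1 at the path minikube pins and that its config validate subcommand is available:
	# validate the rendered kubeadm config with the pinned binary
	sudo /var/lib/minikube/binaries/v1.30.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	# confirm the kubelet unit picked up the new ExecStart and is running
	systemctl cat kubelet | grep -- --hostname-override
	sudo journalctl -u kubelet --no-pager -n 20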
	I0617 12:01:36.361127  165060 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195 for IP: 192.168.72.199
	I0617 12:01:36.361152  165060 certs.go:194] generating shared ca certs ...
	I0617 12:01:36.361172  165060 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:01:36.361370  165060 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 12:01:36.361425  165060 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 12:01:36.361438  165060 certs.go:256] generating profile certs ...
	I0617 12:01:36.361557  165060 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/client.key
	I0617 12:01:36.361648  165060 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/apiserver.key.f7068429
	I0617 12:01:36.361696  165060 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/proxy-client.key
	I0617 12:01:36.361863  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 12:01:36.361913  165060 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 12:01:36.361925  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 12:01:36.361951  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 12:01:36.361984  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 12:01:36.362005  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 12:01:36.362041  165060 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:01:36.362770  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 12:01:36.397257  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 12:01:36.422523  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 12:01:36.451342  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 12:01:36.485234  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0617 12:01:36.514351  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 12:01:36.544125  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 12:01:36.567574  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/embed-certs-136195/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0617 12:01:36.590417  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 12:01:36.613174  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 12:01:36.636187  165060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 12:01:36.659365  165060 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 12:01:36.675981  165060 ssh_runner.go:195] Run: openssl version
	I0617 12:01:36.681694  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 12:01:36.692324  165060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 12:01:36.696871  165060 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 12:01:36.696938  165060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 12:01:36.702794  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
	I0617 12:01:36.713372  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 12:01:36.724054  165060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 12:01:36.728505  165060 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 12:01:36.728566  165060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 12:01:36.734082  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 12:01:36.744542  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 12:01:36.755445  165060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:36.759880  165060 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:36.759922  165060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:36.765367  165060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 12:01:36.776234  165060 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 12:01:36.780822  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 12:01:36.786895  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 12:01:36.793358  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 12:01:36.800187  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 12:01:36.806591  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 12:01:36.812681  165060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
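The openssl runs above do two things: hash each CA pushed under /usr/share/ca-certificates and symlink it into /etc/ssl/certs by that hash, and verify each control-plane certificate is still valid for at least 24 hours (`-checkend 86400`). Written out for one certificate of each kind (paths from the log; the echoed message is added):
	# trust-store link: the subject hash names the /etc/ssl/certs symlink
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	# expiry check: non-zero exit if the cert expires within 86400 seconds
	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "apiserver-kubelet-client.crt still valid for at least 24h"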
	I0617 12:01:36.818814  165060 kubeadm.go:391] StartCluster: {Name:embed-certs-136195 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.1 ClusterName:embed-certs-136195 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.199 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:01:36.818903  165060 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 12:01:36.818945  165060 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:01:36.861839  165060 cri.go:89] found id: ""
	I0617 12:01:36.861920  165060 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0617 12:01:36.873500  165060 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0617 12:01:36.873529  165060 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0617 12:01:36.873551  165060 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0617 12:01:36.873602  165060 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0617 12:01:36.884767  165060 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0617 12:01:36.886013  165060 kubeconfig.go:125] found "embed-certs-136195" server: "https://192.168.72.199:8443"
	I0617 12:01:36.888144  165060 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0617 12:01:36.899204  165060 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.199
	I0617 12:01:36.899248  165060 kubeadm.go:1154] stopping kube-system containers ...
	I0617 12:01:36.899263  165060 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0617 12:01:36.899325  165060 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:01:36.941699  165060 cri.go:89] found id: ""
	I0617 12:01:36.941782  165060 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0617 12:01:36.960397  165060 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:01:36.971254  165060 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:01:36.971276  165060 kubeadm.go:156] found existing configuration files:
	
	I0617 12:01:36.971333  165060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:01:36.981367  165060 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:01:36.981448  165060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:01:36.991878  165060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:01:37.001741  165060 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:01:37.001816  165060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:01:37.012170  165060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:01:37.021914  165060 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:01:37.021979  165060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:01:37.031866  165060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:01:37.041657  165060 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:01:37.041706  165060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 12:01:37.051440  165060 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:01:37.062543  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:37.175190  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:37.872053  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:38.085732  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:38.146895  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:38.208633  165060 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:01:38.208898  165060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:01:38.119805  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:38.297858  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:38.297905  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:38.120293  166594 retry.go:31] will retry after 1.769592858s: waiting for machine to come up
	I0617 12:01:39.892495  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:39.893035  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:39.893065  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:39.892948  166594 retry.go:31] will retry after 1.954570801s: waiting for machine to come up
	I0617 12:01:41.849587  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:41.850111  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:41.850140  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:41.850067  166594 retry.go:31] will retry after 3.44879626s: waiting for machine to come up
	I0617 12:01:38.708936  165060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:01:39.209014  165060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:01:39.709765  165060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:01:39.728309  165060 api_server.go:72] duration metric: took 1.519672652s to wait for apiserver process to appear ...
	I0617 12:01:39.728342  165060 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:01:39.728369  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:01:42.756054  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:01:42.756089  165060 api_server.go:103] status: https://192.168.72.199:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:01:42.756105  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:01:42.797646  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:01:42.797689  165060 api_server.go:103] status: https://192.168.72.199:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:01:43.229201  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:01:43.233440  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:01:43.233467  165060 api_server.go:103] status: https://192.168.72.199:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:01:43.728490  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:01:43.741000  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:01:43.741037  165060 api_server.go:103] status: https://192.168.72.199:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:01:44.228634  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:01:44.232839  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 200:
	ok
	I0617 12:01:44.238582  165060 api_server.go:141] control plane version: v1.30.1
	I0617 12:01:44.238606  165060 api_server.go:131] duration metric: took 4.510256755s to wait for apiserver health ...
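	The 403 → 500 → 200 progression above is the apiserver coming back up after the restart: anonymous GETs to /healthz are rejected with 403 while the bootstrap RBAC roles are still being created (the 500 responses explicitly list "[-]poststarthook/rbac/bootstrap-roles failed"), then /healthz returns 500 until the remaining post-start hooks finish, and finally 200 with body "ok". The following is an illustrative Go sketch of that polling pattern only, not minikube's actual api_server.go; the URL, the timeout, and the InsecureSkipVerify shortcut are assumptions for the example.

```go
// Illustrative sketch, not part of the test output or the minikube source:
// poll an apiserver /healthz endpoint until it returns 200 "ok",
// treating 403 and 500 responses as "not ready yet", as seen in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The cluster uses a self-signed CA; a real client would load it
		// instead of skipping verification (assumption for brevity).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil // apiserver reports healthy
			}
			// 403 (anonymous access not yet allowed) and 500 (post-start
			// hooks still failing) both mean: keep polling.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.199:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```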
	I0617 12:01:44.238615  165060 cni.go:84] Creating CNI manager for ""
	I0617 12:01:44.238622  165060 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:01:44.240569  165060 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0617 12:01:44.241963  165060 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0617 12:01:44.253143  165060 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0617 12:01:44.286772  165060 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:01:44.295697  165060 system_pods.go:59] 8 kube-system pods found
	I0617 12:01:44.295736  165060 system_pods.go:61] "coredns-7db6d8ff4d-9bbjg" [1ba0eee5-436e-4c83-b5ce-3c907d66b641] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0617 12:01:44.295744  165060 system_pods.go:61] "etcd-embed-certs-136195" [6dc81a80-c56b-4517-af82-c450cf9578f5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0617 12:01:44.295757  165060 system_pods.go:61] "kube-apiserver-embed-certs-136195" [bd61a715-2471-4dca-aa48-a157531ebd6b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0617 12:01:44.295763  165060 system_pods.go:61] "kube-controller-manager-embed-certs-136195" [194db4b0-75c2-4905-8e4d-813185497b51] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0617 12:01:44.295768  165060 system_pods.go:61] "kube-proxy-25d5n" [52b6d09a-899f-40c4-b1f3-7842ae755165] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0617 12:01:44.295774  165060 system_pods.go:61] "kube-scheduler-embed-certs-136195" [b04d3798-f465-4f82-9ec7-777ea62d5b94] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0617 12:01:44.295782  165060 system_pods.go:61] "metrics-server-569cc877fc-dmhfs" [31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:01:44.295788  165060 system_pods.go:61] "storage-provisioner" [4b04a38a-5006-4496-a24d-0940029193de] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0617 12:01:44.295797  165060 system_pods.go:74] duration metric: took 9.004741ms to wait for pod list to return data ...
	I0617 12:01:44.295811  165060 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:01:44.298934  165060 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:01:44.298968  165060 node_conditions.go:123] node cpu capacity is 2
	I0617 12:01:44.298989  165060 node_conditions.go:105] duration metric: took 3.172465ms to run NodePressure ...
	I0617 12:01:44.299027  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:44.565943  165060 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0617 12:01:44.570796  165060 kubeadm.go:733] kubelet initialised
	I0617 12:01:44.570825  165060 kubeadm.go:734] duration metric: took 4.851024ms waiting for restarted kubelet to initialise ...
	I0617 12:01:44.570836  165060 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:01:44.575565  165060 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:44.582180  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.582209  165060 pod_ready.go:81] duration metric: took 6.620747ms for pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:44.582221  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.582231  165060 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:44.586828  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "etcd-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.586850  165060 pod_ready.go:81] duration metric: took 4.61059ms for pod "etcd-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:44.586859  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "etcd-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.586866  165060 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:44.591162  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.591189  165060 pod_ready.go:81] duration metric: took 4.316651ms for pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:44.591197  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.591204  165060 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:44.690269  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.690301  165060 pod_ready.go:81] duration metric: took 99.088803ms for pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:44.690310  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:44.690317  165060 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-25d5n" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:45.089616  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "kube-proxy-25d5n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.089640  165060 pod_ready.go:81] duration metric: took 399.31511ms for pod "kube-proxy-25d5n" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:45.089649  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "kube-proxy-25d5n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.089656  165060 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:45.491031  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.491058  165060 pod_ready.go:81] duration metric: took 401.395966ms for pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:45.491068  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.491074  165060 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:45.890606  165060 pod_ready.go:97] node "embed-certs-136195" hosting pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.890633  165060 pod_ready.go:81] duration metric: took 399.550946ms for pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace to be "Ready" ...
	E0617 12:01:45.890644  165060 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-136195" hosting pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:45.890650  165060 pod_ready.go:38] duration metric: took 1.319802914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
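	Each "waiting up to 4m0s for pod ..." entry above is skipped immediately because the node itself still reports Ready=False; the per-pod Ready conditions are only meaningful once the hosting node is Ready. The following is a minimal client-go sketch of that gate for reference, not minikube's pod_ready.go; the function names are illustrative, and only the kubeconfig path and object names are taken from the log above.

```go
// Illustrative sketch, not part of the test output or the minikube source:
// check node readiness first, then a pod's Ready condition, mirroring the
// "node not Ready, skipping pod wait" behaviour recorded in the log above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19084-112967/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	if ok, _ := nodeReady(ctx, cs, "embed-certs-136195"); !ok {
		fmt.Println("node not Ready yet: skip waiting on individual pods, as in the log")
		return
	}
	ok, err := podReady(ctx, cs, "kube-system", "etcd-embed-certs-136195")
	fmt.Println(ok, err)
}
```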
	I0617 12:01:45.890669  165060 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0617 12:01:45.903900  165060 ops.go:34] apiserver oom_adj: -16
	I0617 12:01:45.903936  165060 kubeadm.go:591] duration metric: took 9.03037731s to restartPrimaryControlPlane
	I0617 12:01:45.903950  165060 kubeadm.go:393] duration metric: took 9.085142288s to StartCluster
	I0617 12:01:45.903974  165060 settings.go:142] acquiring lock: {Name:mkf6da6d5dcdf32cef469c2b75da17d11fa1e39e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:01:45.904063  165060 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 12:01:45.905636  165060 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/kubeconfig: {Name:mkf81bd1831c0194f784e5c176b265c5061bea5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:01:45.905908  165060 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.199 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 12:01:45.907817  165060 out.go:177] * Verifying Kubernetes components...
	I0617 12:01:45.905981  165060 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0617 12:01:45.907852  165060 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-136195"
	I0617 12:01:45.907880  165060 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-136195"
	W0617 12:01:45.907890  165060 addons.go:243] addon storage-provisioner should already be in state true
	I0617 12:01:45.907903  165060 addons.go:69] Setting default-storageclass=true in profile "embed-certs-136195"
	I0617 12:01:45.906085  165060 config.go:182] Loaded profile config "embed-certs-136195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:01:45.909296  165060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:01:45.907923  165060 host.go:66] Checking if "embed-certs-136195" exists ...
	I0617 12:01:45.907924  165060 addons.go:69] Setting metrics-server=true in profile "embed-certs-136195"
	I0617 12:01:45.909472  165060 addons.go:234] Setting addon metrics-server=true in "embed-certs-136195"
	W0617 12:01:45.909481  165060 addons.go:243] addon metrics-server should already be in state true
	I0617 12:01:45.909506  165060 host.go:66] Checking if "embed-certs-136195" exists ...
	I0617 12:01:45.907954  165060 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-136195"
	I0617 12:01:45.909776  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.909822  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.909836  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.909861  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.909841  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.909928  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.925250  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36545
	I0617 12:01:45.925500  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38767
	I0617 12:01:45.925708  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.925929  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.926262  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.926282  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.926420  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.926445  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.926637  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.926728  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.927142  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.927171  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.927206  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.927236  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.929198  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33863
	I0617 12:01:45.929658  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.930137  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.930159  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.930465  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.930661  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetState
	I0617 12:01:45.934085  165060 addons.go:234] Setting addon default-storageclass=true in "embed-certs-136195"
	W0617 12:01:45.934107  165060 addons.go:243] addon default-storageclass should already be in state true
	I0617 12:01:45.934139  165060 host.go:66] Checking if "embed-certs-136195" exists ...
	I0617 12:01:45.934534  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.934579  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.944472  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44051
	I0617 12:01:45.945034  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.945712  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.945741  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.946105  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.946343  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetState
	I0617 12:01:45.946673  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43225
	I0617 12:01:45.947007  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.947706  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.947725  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.948027  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.948228  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetState
	I0617 12:01:45.948359  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:45.950451  165060 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0617 12:01:45.951705  165060 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0617 12:01:45.951719  165060 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0617 12:01:45.951735  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:45.949626  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:45.951588  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43695
	I0617 12:01:45.953222  165060 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:01:45.954471  165060 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:01:45.952290  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.954494  165060 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0617 12:01:45.954514  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:45.955079  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.955098  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.955123  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.955478  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.955718  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:45.955757  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.955924  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:45.956099  165060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:45.956106  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:45.956147  165060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:45.956374  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:45.956507  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:45.957756  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.958184  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:45.958206  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.958335  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:45.958505  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:45.958680  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:45.958825  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:45.977247  165060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39751
	I0617 12:01:45.977663  165060 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:45.978179  165060 main.go:141] libmachine: Using API Version  1
	I0617 12:01:45.978203  165060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:45.978524  165060 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:45.978711  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetState
	I0617 12:01:45.980425  165060 main.go:141] libmachine: (embed-certs-136195) Calling .DriverName
	I0617 12:01:45.980601  165060 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0617 12:01:45.980616  165060 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0617 12:01:45.980630  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHHostname
	I0617 12:01:45.983633  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.984088  165060 main.go:141] libmachine: (embed-certs-136195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:27:84", ip: ""} in network mk-embed-certs-136195: {Iface:virbr4 ExpiryTime:2024-06-17 13:01:21 +0000 UTC Type:0 Mac:52:54:00:f2:27:84 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:embed-certs-136195 Clientid:01:52:54:00:f2:27:84}
	I0617 12:01:45.984105  165060 main.go:141] libmachine: (embed-certs-136195) DBG | domain embed-certs-136195 has defined IP address 192.168.72.199 and MAC address 52:54:00:f2:27:84 in network mk-embed-certs-136195
	I0617 12:01:45.984258  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHPort
	I0617 12:01:45.984377  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHKeyPath
	I0617 12:01:45.984505  165060 main.go:141] libmachine: (embed-certs-136195) Calling .GetSSHUsername
	I0617 12:01:45.984661  165060 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/embed-certs-136195/id_rsa Username:docker}
	I0617 12:01:46.093292  165060 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:01:46.112779  165060 node_ready.go:35] waiting up to 6m0s for node "embed-certs-136195" to be "Ready" ...
	I0617 12:01:46.182239  165060 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0617 12:01:46.248534  165060 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:01:46.286637  165060 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0617 12:01:46.286662  165060 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0617 12:01:46.313951  165060 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0617 12:01:46.313981  165060 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0617 12:01:46.337155  165060 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:01:46.337186  165060 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0617 12:01:46.389025  165060 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:01:46.548086  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:46.548106  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:46.548442  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:46.548461  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:46.548471  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:46.548481  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:46.548485  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:46.548727  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:46.548744  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:46.548764  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:46.554199  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:46.554218  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:46.554454  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:46.554469  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:46.554480  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:47.142290  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:47.142321  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:47.142629  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:47.142658  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:47.142671  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:47.142676  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:47.142692  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:47.142943  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:47.142971  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:47.142985  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:47.216339  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:47.216366  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:47.216658  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:47.216679  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:47.216690  165060 main.go:141] libmachine: Making call to close driver server
	I0617 12:01:47.216700  165060 main.go:141] libmachine: (embed-certs-136195) Calling .Close
	I0617 12:01:47.216709  165060 main.go:141] libmachine: (embed-certs-136195) DBG | Closing plugin on server side
	I0617 12:01:47.216931  165060 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:01:47.216967  165060 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:01:47.216982  165060 addons.go:475] Verifying addon metrics-server=true in "embed-certs-136195"
	I0617 12:01:47.219627  165060 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0617 12:01:45.300413  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:45.300848  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | unable to find current IP address of domain old-k8s-version-003661 in network mk-old-k8s-version-003661
	I0617 12:01:45.300878  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | I0617 12:01:45.300794  166594 retry.go:31] will retry after 3.892148485s: waiting for machine to come up
	I0617 12:01:47.220905  165060 addons.go:510] duration metric: took 1.314925386s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0617 12:01:48.116197  165060 node_ready.go:53] node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:50.500448  166103 start.go:364] duration metric: took 2m12.970832528s to acquireMachinesLock for "default-k8s-diff-port-991309"
	I0617 12:01:50.500511  166103 start.go:96] Skipping create...Using existing machine configuration
	I0617 12:01:50.500534  166103 fix.go:54] fixHost starting: 
	I0617 12:01:50.500980  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:01:50.501018  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:01:50.517593  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43641
	I0617 12:01:50.518035  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:01:50.518600  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:01:50.518635  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:01:50.519051  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:01:50.519296  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:01:50.519502  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetState
	I0617 12:01:50.521095  166103 fix.go:112] recreateIfNeeded on default-k8s-diff-port-991309: state=Stopped err=<nil>
	I0617 12:01:50.521123  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	W0617 12:01:50.521307  166103 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 12:01:50.522795  166103 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-991309" ...
	I0617 12:01:49.197189  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.197671  165698 main.go:141] libmachine: (old-k8s-version-003661) Found IP for machine: 192.168.61.164
	I0617 12:01:49.197697  165698 main.go:141] libmachine: (old-k8s-version-003661) Reserving static IP address...
	I0617 12:01:49.197714  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has current primary IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.198147  165698 main.go:141] libmachine: (old-k8s-version-003661) Reserved static IP address: 192.168.61.164
	I0617 12:01:49.198175  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "old-k8s-version-003661", mac: "52:54:00:76:66:a0", ip: "192.168.61.164"} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.198185  165698 main.go:141] libmachine: (old-k8s-version-003661) Waiting for SSH to be available...
	I0617 12:01:49.198217  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | skip adding static IP to network mk-old-k8s-version-003661 - found existing host DHCP lease matching {name: "old-k8s-version-003661", mac: "52:54:00:76:66:a0", ip: "192.168.61.164"}
	I0617 12:01:49.198227  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | Getting to WaitForSSH function...
	I0617 12:01:49.200478  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.200907  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.200935  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.201088  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | Using SSH client type: external
	I0617 12:01:49.201116  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa (-rw-------)
	I0617 12:01:49.201154  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 12:01:49.201169  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | About to run SSH command:
	I0617 12:01:49.201183  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | exit 0
	I0617 12:01:49.323763  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | SSH cmd err, output: <nil>: 
	I0617 12:01:49.324127  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetConfigRaw
	I0617 12:01:49.324835  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetIP
	I0617 12:01:49.327217  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.327628  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.327660  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.327891  165698 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/config.json ...
	I0617 12:01:49.328097  165698 machine.go:94] provisionDockerMachine start ...
	I0617 12:01:49.328120  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:49.328365  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:49.330587  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.330992  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.331033  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.331160  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:49.331324  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.331490  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.331637  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:49.331824  165698 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:49.332037  165698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 12:01:49.332049  165698 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 12:01:49.432170  165698 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0617 12:01:49.432201  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetMachineName
	I0617 12:01:49.432498  165698 buildroot.go:166] provisioning hostname "old-k8s-version-003661"
	I0617 12:01:49.432524  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetMachineName
	I0617 12:01:49.432730  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:49.435845  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.436276  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.436317  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.436507  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:49.436708  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.436909  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.437074  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:49.437289  165698 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:49.437496  165698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 12:01:49.437510  165698 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-003661 && echo "old-k8s-version-003661" | sudo tee /etc/hostname
	I0617 12:01:49.550158  165698 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-003661
	
	I0617 12:01:49.550187  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:49.553141  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.553509  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.553539  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.553737  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:49.553943  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.554141  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.554298  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:49.554520  165698 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:49.554759  165698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 12:01:49.554787  165698 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-003661' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-003661/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-003661' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 12:01:49.661049  165698 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 12:01:49.661079  165698 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 12:01:49.661106  165698 buildroot.go:174] setting up certificates
	I0617 12:01:49.661115  165698 provision.go:84] configureAuth start
	I0617 12:01:49.661124  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetMachineName
	I0617 12:01:49.661452  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetIP
	I0617 12:01:49.664166  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.664561  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.664591  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.664723  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:49.666845  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.667114  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.667158  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.667287  165698 provision.go:143] copyHostCerts
	I0617 12:01:49.667377  165698 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 12:01:49.667387  165698 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 12:01:49.667440  165698 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 12:01:49.667561  165698 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 12:01:49.667571  165698 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 12:01:49.667594  165698 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 12:01:49.667649  165698 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 12:01:49.667656  165698 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 12:01:49.667674  165698 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 12:01:49.667722  165698 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-003661 san=[127.0.0.1 192.168.61.164 localhost minikube old-k8s-version-003661]
	I0617 12:01:49.853671  165698 provision.go:177] copyRemoteCerts
	I0617 12:01:49.853736  165698 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 12:01:49.853767  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:49.856171  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.856540  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:49.856577  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:49.856737  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:49.857071  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:49.857220  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:49.857360  165698 sshutil.go:53] new ssh client: &{IP:192.168.61.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa Username:docker}
	I0617 12:01:49.938626  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 12:01:49.964401  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0617 12:01:49.988397  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0617 12:01:50.013356  165698 provision.go:87] duration metric: took 352.227211ms to configureAuth
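The configureAuth block above copies the host CA material and mints a server certificate whose SANs cover 127.0.0.1, 192.168.61.164, localhost, minikube and the machine name. As a rough, self-contained Go sketch of issuing that kind of CA-signed server certificate (illustrative only, not minikube's actual helper; the in-memory CA, 2048-bit keys and validity periods below are assumptions):

	// cert_sketch.go: illustrative CA-signed server certificate with the SANs seen in the log.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Stand-in CA generated in memory; the real flow loads ca.pem / ca-key.pem from disk.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate with DNS and IP SANs matching the san=[...] list in the log.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-003661"}},
			DNSNames:     []string{"localhost", "minikube", "old-k8s-version-003661"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.164")},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}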
	I0617 12:01:50.013382  165698 buildroot.go:189] setting minikube options for container-runtime
	I0617 12:01:50.013581  165698 config.go:182] Loaded profile config "old-k8s-version-003661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0617 12:01:50.013689  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:50.016168  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.016514  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.016548  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.016657  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:50.016847  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.017025  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.017152  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:50.017300  165698 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:50.017483  165698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 12:01:50.017505  165698 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 12:01:50.280037  165698 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 12:01:50.280065  165698 machine.go:97] duration metric: took 951.954687ms to provisionDockerMachine
	I0617 12:01:50.280076  165698 start.go:293] postStartSetup for "old-k8s-version-003661" (driver="kvm2")
	I0617 12:01:50.280086  165698 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 12:01:50.280102  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:50.280467  165698 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 12:01:50.280506  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:50.283318  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.283657  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.283684  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.283874  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:50.284106  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.284279  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:50.284402  165698 sshutil.go:53] new ssh client: &{IP:192.168.61.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa Username:docker}
	I0617 12:01:50.362452  165698 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 12:01:50.366699  165698 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 12:01:50.366726  165698 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 12:01:50.366788  165698 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 12:01:50.366878  165698 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 12:01:50.367004  165698 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 12:01:50.376706  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:01:50.399521  165698 start.go:296] duration metric: took 119.43167ms for postStartSetup
	I0617 12:01:50.399558  165698 fix.go:56] duration metric: took 19.670946478s for fixHost
	I0617 12:01:50.399578  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:50.402079  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.402465  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.402500  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.402649  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:50.402835  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.402994  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.403138  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:50.403321  165698 main.go:141] libmachine: Using SSH client type: native
	I0617 12:01:50.403529  165698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.164 22 <nil> <nil>}
	I0617 12:01:50.403541  165698 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0617 12:01:50.500267  165698 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718625710.471154465
	
	I0617 12:01:50.500294  165698 fix.go:216] guest clock: 1718625710.471154465
	I0617 12:01:50.500304  165698 fix.go:229] Guest: 2024-06-17 12:01:50.471154465 +0000 UTC Remote: 2024-06-17 12:01:50.399561534 +0000 UTC m=+212.458541959 (delta=71.592931ms)
	I0617 12:01:50.500350  165698 fix.go:200] guest clock delta is within tolerance: 71.592931ms
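The fix.go lines above compare the clock read from the guest (1718625710.471154465) against the host-side timestamp and accept the skew because the delta (~71.6ms) is within tolerance. A minimal Go sketch of that comparison, reusing the two timestamps from this log and an assumed 2-second tolerance (minikube's actual threshold is not shown here):

	// clock_delta_sketch.go: compare guest vs. host timestamps from the log above.
	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		guest := time.Unix(0, 1718625710471154465)                      // guest epoch in nanoseconds
		remote := time.Date(2024, 6, 17, 12, 1, 50, 399561534, time.UTC) // host-side reading

		delta := guest.Sub(remote)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed tolerance for illustration
		fmt.Printf("delta=%v, within tolerance: %v\n", delta, delta <= tolerance)
	}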
	I0617 12:01:50.500355  165698 start.go:83] releasing machines lock for "old-k8s-version-003661", held for 19.771784344s
	I0617 12:01:50.500380  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:50.500648  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetIP
	I0617 12:01:50.503346  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.503749  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.503776  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.503974  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:50.504536  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:50.504676  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .DriverName
	I0617 12:01:50.504750  165698 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 12:01:50.504801  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:50.504861  165698 ssh_runner.go:195] Run: cat /version.json
	I0617 12:01:50.504890  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHHostname
	I0617 12:01:50.507577  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.507736  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.508013  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.508041  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.508176  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:50.508200  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:50.508205  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:50.508335  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHPort
	I0617 12:01:50.508419  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.508499  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHKeyPath
	I0617 12:01:50.508580  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:50.508691  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetSSHUsername
	I0617 12:01:50.508717  165698 sshutil.go:53] new ssh client: &{IP:192.168.61.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa Username:docker}
	I0617 12:01:50.508830  165698 sshutil.go:53] new ssh client: &{IP:192.168.61.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/old-k8s-version-003661/id_rsa Username:docker}
	I0617 12:01:50.585030  165698 ssh_runner.go:195] Run: systemctl --version
	I0617 12:01:50.612492  165698 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 12:01:50.765842  165698 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 12:01:50.773214  165698 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 12:01:50.773288  165698 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 12:01:50.793397  165698 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 12:01:50.793424  165698 start.go:494] detecting cgroup driver to use...
	I0617 12:01:50.793499  165698 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 12:01:50.811531  165698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 12:01:50.826223  165698 docker.go:217] disabling cri-docker service (if available) ...
	I0617 12:01:50.826289  165698 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 12:01:50.840517  165698 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 12:01:50.854788  165698 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 12:01:50.970328  165698 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 12:01:51.125815  165698 docker.go:233] disabling docker service ...
	I0617 12:01:51.125893  165698 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 12:01:51.146368  165698 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 12:01:51.161459  165698 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 12:01:51.346032  165698 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 12:01:51.503395  165698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 12:01:51.521021  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 12:01:51.543851  165698 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0617 12:01:51.543905  165698 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:51.556230  165698 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 12:01:51.556309  165698 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:51.573061  165698 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:51.588663  165698 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:01:51.601086  165698 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 12:01:51.617347  165698 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 12:01:51.634502  165698 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 12:01:51.634635  165698 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 12:01:51.652813  165698 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 12:01:51.665145  165698 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:01:51.826713  165698 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0617 12:01:51.981094  165698 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 12:01:51.981186  165698 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
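After restarting CRI-O, the log notes a 60s wait for /var/run/crio/crio.sock and probes it with stat. A small Go sketch of that wait-for-socket pattern (the 500ms poll interval is an assumption, not minikube's actual cadence):

	// wait_socket_sketch.go: poll for a path until it exists or a deadline passes.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket returns nil once path exists, or an error after timeout.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("socket is ready")
	}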
	I0617 12:01:51.986026  165698 start.go:562] Will wait 60s for crictl version
	I0617 12:01:51.986091  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:51.990253  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 12:01:52.032543  165698 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 12:01:52.032631  165698 ssh_runner.go:195] Run: crio --version
	I0617 12:01:52.063904  165698 ssh_runner.go:195] Run: crio --version
	I0617 12:01:52.097158  165698 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0617 12:01:50.524130  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Start
	I0617 12:01:50.524321  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Ensuring networks are active...
	I0617 12:01:50.524939  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Ensuring network default is active
	I0617 12:01:50.525300  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Ensuring network mk-default-k8s-diff-port-991309 is active
	I0617 12:01:50.527342  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Getting domain xml...
	I0617 12:01:50.528126  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Creating domain...
	I0617 12:01:51.864887  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting to get IP...
	I0617 12:01:51.865835  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:51.866246  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:51.866328  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:51.866228  166802 retry.go:31] will retry after 200.163407ms: waiting for machine to come up
	I0617 12:01:52.067708  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.068164  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.068193  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:52.068119  166802 retry.go:31] will retry after 364.503903ms: waiting for machine to come up
	I0617 12:01:52.098675  165698 main.go:141] libmachine: (old-k8s-version-003661) Calling .GetIP
	I0617 12:01:52.102187  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:52.102572  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:66:a0", ip: ""} in network mk-old-k8s-version-003661: {Iface:virbr3 ExpiryTime:2024-06-17 13:01:41 +0000 UTC Type:0 Mac:52:54:00:76:66:a0 Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:old-k8s-version-003661 Clientid:01:52:54:00:76:66:a0}
	I0617 12:01:52.102603  165698 main.go:141] libmachine: (old-k8s-version-003661) DBG | domain old-k8s-version-003661 has defined IP address 192.168.61.164 and MAC address 52:54:00:76:66:a0 in network mk-old-k8s-version-003661
	I0617 12:01:52.102823  165698 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0617 12:01:52.107573  165698 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:01:52.121312  165698 kubeadm.go:877] updating cluster {Name:old-k8s-version-003661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-003661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.164 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 12:01:52.121448  165698 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0617 12:01:52.121515  165698 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:01:52.181796  165698 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0617 12:01:52.181891  165698 ssh_runner.go:195] Run: which lz4
	I0617 12:01:52.186827  165698 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0617 12:01:52.191806  165698 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0617 12:01:52.191875  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0617 12:01:50.116573  165060 node_ready.go:53] node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:52.122162  165060 node_ready.go:53] node "embed-certs-136195" has status "Ready":"False"
	I0617 12:01:53.117556  165060 node_ready.go:49] node "embed-certs-136195" has status "Ready":"True"
	I0617 12:01:53.117589  165060 node_ready.go:38] duration metric: took 7.004769746s for node "embed-certs-136195" to be "Ready" ...
	I0617 12:01:53.117598  165060 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:01:53.125606  165060 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:53.131618  165060 pod_ready.go:92] pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:53.131643  165060 pod_ready.go:81] duration metric: took 6.000929ms for pod "coredns-7db6d8ff4d-9bbjg" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:53.131654  165060 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:52.434791  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.435584  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.435740  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:52.435665  166802 retry.go:31] will retry after 486.514518ms: waiting for machine to come up
	I0617 12:01:52.924190  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.924819  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:52.924845  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:52.924681  166802 retry.go:31] will retry after 520.971301ms: waiting for machine to come up
	I0617 12:01:53.447437  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:53.447965  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:53.447995  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:53.447919  166802 retry.go:31] will retry after 622.761044ms: waiting for machine to come up
	I0617 12:01:54.072700  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:54.073170  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:54.073202  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:54.073112  166802 retry.go:31] will retry after 671.940079ms: waiting for machine to come up
	I0617 12:01:54.746830  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:54.747342  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:54.747372  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:54.747310  166802 retry.go:31] will retry after 734.856022ms: waiting for machine to come up
	I0617 12:01:55.484571  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:55.485127  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:55.485157  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:55.485066  166802 retry.go:31] will retry after 1.198669701s: waiting for machine to come up
	I0617 12:01:56.685201  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:56.685468  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:56.685493  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:56.685440  166802 retry.go:31] will retry after 1.562509853s: waiting for machine to come up
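While old-k8s-version-003661 is being provisioned, the default-k8s-diff-port-991309 machine is still booting, so the driver polls for its DHCP lease with delays that grow and carry jitter (200ms, 364ms, 486ms, ... up to a couple of seconds). A generic Go sketch of that retry-with-growing, jittered backoff pattern (base delay, 1.5x growth and attempt count are assumptions for illustration, not retry.go's actual parameters):

	// retry_sketch.go: retry a flaky operation with jittered, growing backoff.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry calls fn up to attempts times, sleeping a growing, jittered delay between tries.
	func retry(attempts int, base time.Duration, fn func() error) error {
		delay := base
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 2))
			time.Sleep(delay + jitter)
			delay = delay * 3 / 2 // grow roughly 1.5x per attempt
		}
		return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
	}

	func main() {
		tries := 0
		err := retry(10, 200*time.Millisecond, func() error {
			tries++
			if tries < 4 {
				return errors.New("machine has no IP yet") // simulated "waiting for machine to come up"
			}
			return nil
		})
		fmt.Println("err:", err, "tries:", tries)
	}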
	I0617 12:01:54.026903  165698 crio.go:462] duration metric: took 1.840117639s to copy over tarball
	I0617 12:01:54.027003  165698 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0617 12:01:57.049870  165698 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.022814584s)
	I0617 12:01:57.049904  165698 crio.go:469] duration metric: took 3.022967677s to extract the tarball
	I0617 12:01:57.049914  165698 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0617 12:01:57.094589  165698 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:01:57.133299  165698 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0617 12:01:57.133331  165698 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0617 12:01:57.133431  165698 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:01:57.133451  165698 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0617 12:01:57.133456  165698 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0617 12:01:57.133477  165698 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0617 12:01:57.133431  165698 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0617 12:01:57.133530  165698 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0617 12:01:57.133431  165698 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 12:01:57.133626  165698 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0617 12:01:57.135979  165698 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 12:01:57.135990  165698 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0617 12:01:57.135994  165698 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0617 12:01:57.135979  165698 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0617 12:01:57.135985  165698 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:01:57.135979  165698 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0617 12:01:57.136041  165698 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0617 12:01:57.136041  165698 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0617 12:01:57.289271  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0617 12:01:57.299061  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 12:01:57.322581  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0617 12:01:57.336462  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0617 12:01:57.337619  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0617 12:01:57.350335  165698 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0617 12:01:57.350395  165698 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0617 12:01:57.350448  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.357972  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0617 12:01:57.391517  165698 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0617 12:01:57.391563  165698 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 12:01:57.391640  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.419438  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0617 12:01:57.442111  165698 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0617 12:01:57.442154  165698 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0617 12:01:57.442200  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.450145  165698 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:01:57.485873  165698 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0617 12:01:57.485922  165698 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0617 12:01:57.485942  165698 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0617 12:01:57.485957  165698 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0617 12:01:57.485996  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.486003  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.486053  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0617 12:01:57.490584  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0617 12:01:57.490669  165698 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0617 12:01:57.490714  165698 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0617 12:01:57.490755  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.551564  165698 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0617 12:01:57.551597  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0617 12:01:57.551619  165698 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0617 12:01:57.551662  165698 ssh_runner.go:195] Run: which crictl
	I0617 12:01:57.660683  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0617 12:01:57.660732  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0617 12:01:57.660799  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0617 12:01:57.660856  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0617 12:01:57.660734  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0617 12:01:57.660903  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0617 12:01:57.660930  165698 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0617 12:01:57.753965  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0617 12:01:57.753981  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0617 12:01:57.754069  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0617 12:01:57.754069  165698 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0617 12:01:57.754146  165698 cache_images.go:92] duration metric: took 620.797178ms to LoadCachedImages
	W0617 12:01:57.754271  165698 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0617 12:01:57.754292  165698 kubeadm.go:928] updating node { 192.168.61.164 8443 v1.20.0 crio true true} ...
	I0617 12:01:57.754415  165698 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-003661 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-003661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 12:01:57.754489  165698 ssh_runner.go:195] Run: crio config
	I0617 12:01:57.807120  165698 cni.go:84] Creating CNI manager for ""
	I0617 12:01:57.807144  165698 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:01:57.807158  165698 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 12:01:57.807182  165698 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.164 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-003661 NodeName:old-k8s-version-003661 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0617 12:01:57.807370  165698 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.164
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-003661"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.164
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.164"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0617 12:01:57.807437  165698 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0617 12:01:57.817865  165698 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 12:01:57.817940  165698 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 12:01:57.829796  165698 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0617 12:01:57.847758  165698 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 12:01:57.866182  165698 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0617 12:01:57.884500  165698 ssh_runner.go:195] Run: grep 192.168.61.164	control-plane.minikube.internal$ /etc/hosts
	I0617 12:01:57.888852  165698 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.164	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:01:57.902176  165698 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:01:55.138418  165060 pod_ready.go:102] pod "etcd-embed-certs-136195" in "kube-system" namespace has status "Ready":"False"
	I0617 12:01:55.641014  165060 pod_ready.go:92] pod "etcd-embed-certs-136195" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:55.641047  165060 pod_ready.go:81] duration metric: took 2.509383461s for pod "etcd-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:55.641061  165060 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.151759  165060 pod_ready.go:92] pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:56.151788  165060 pod_ready.go:81] duration metric: took 510.718192ms for pod "kube-apiserver-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.152027  165060 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.157234  165060 pod_ready.go:92] pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:56.157260  165060 pod_ready.go:81] duration metric: took 5.220069ms for pod "kube-controller-manager-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.157273  165060 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-25d5n" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.161767  165060 pod_ready.go:92] pod "kube-proxy-25d5n" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:56.161787  165060 pod_ready.go:81] duration metric: took 4.50732ms for pod "kube-proxy-25d5n" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.161796  165060 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.717763  165060 pod_ready.go:92] pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace has status "Ready":"True"
	I0617 12:01:56.717865  165060 pod_ready.go:81] duration metric: took 556.058292ms for pod "kube-scheduler-embed-certs-136195" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:56.717892  165060 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace to be "Ready" ...
	I0617 12:01:58.249594  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:01:58.250033  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:01:58.250069  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:01:58.250019  166802 retry.go:31] will retry after 2.154567648s: waiting for machine to come up
	I0617 12:02:00.406269  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:00.406668  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:02:00.406702  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:02:00.406615  166802 retry.go:31] will retry after 2.065044206s: waiting for machine to come up
	I0617 12:01:58.049361  165698 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:01:58.067893  165698 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661 for IP: 192.168.61.164
	I0617 12:01:58.067924  165698 certs.go:194] generating shared ca certs ...
	I0617 12:01:58.067945  165698 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:01:58.068162  165698 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 12:01:58.068221  165698 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 12:01:58.068236  165698 certs.go:256] generating profile certs ...
	I0617 12:01:58.068352  165698 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/client.key
	I0617 12:01:58.068438  165698 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/apiserver.key.6c1f259c
	I0617 12:01:58.068493  165698 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/proxy-client.key
	I0617 12:01:58.068647  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 12:01:58.068690  165698 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 12:01:58.068704  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 12:01:58.068743  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 12:01:58.068790  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 12:01:58.068824  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 12:01:58.068877  165698 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:01:58.069548  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 12:01:58.109048  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 12:01:58.134825  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 12:01:58.159910  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 12:01:58.191108  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0617 12:01:58.217407  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 12:01:58.242626  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 12:01:58.267261  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0617 12:01:58.291562  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 12:01:58.321848  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 12:01:58.352361  165698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 12:01:58.379343  165698 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 12:01:58.399146  165698 ssh_runner.go:195] Run: openssl version
	I0617 12:01:58.405081  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 12:01:58.415471  165698 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:58.420046  165698 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:58.420099  165698 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:01:58.425886  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 12:01:58.436575  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 12:01:58.447166  165698 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 12:01:58.451523  165698 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 12:01:58.451582  165698 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 12:01:58.457670  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
	I0617 12:01:58.468667  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 12:01:58.479095  165698 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 12:01:58.483744  165698 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 12:01:58.483796  165698 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 12:01:58.489520  165698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 12:01:58.500298  165698 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 12:01:58.504859  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 12:01:58.510619  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 12:01:58.516819  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 12:01:58.522837  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 12:01:58.528736  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 12:01:58.534585  165698 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
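	The block above re-validates the node's PKI before attempting a restart: each CA under /usr/share/ca-certificates is linked into /etc/ssl/certs under its OpenSSL subject-hash name, and every control-plane certificate is checked for expiry within the next 24 hours (-checkend 86400). A minimal sketch of the same checks, assuming the node paths shown in the log:

	# Link a CA into the system trust dir under its subject-hash name (as done for minikubeCA above).
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"

	# Warn if any control-plane certificate expires within 24h (86400 seconds).
	for crt in apiserver-kubelet-client apiserver-etcd-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
	  openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${crt}.crt" \
	    || echo "WARNING: ${crt}.crt expires within 24h"
	done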
	I0617 12:01:58.540464  165698 kubeadm.go:391] StartCluster: {Name:old-k8s-version-003661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-003661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.164 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:01:58.540549  165698 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 12:01:58.540624  165698 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:01:58.583638  165698 cri.go:89] found id: ""
	I0617 12:01:58.583724  165698 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0617 12:01:58.594266  165698 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0617 12:01:58.594290  165698 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0617 12:01:58.594295  165698 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0617 12:01:58.594354  165698 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0617 12:01:58.604415  165698 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0617 12:01:58.605367  165698 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-003661" does not appear in /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 12:01:58.605949  165698 kubeconfig.go:62] /home/jenkins/minikube-integration/19084-112967/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-003661" cluster setting kubeconfig missing "old-k8s-version-003661" context setting]
	I0617 12:01:58.606833  165698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/kubeconfig: {Name:mkf81bd1831c0194f784e5c176b265c5061bea5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:01:58.662621  165698 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0617 12:01:58.673813  165698 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.164
	I0617 12:01:58.673848  165698 kubeadm.go:1154] stopping kube-system containers ...
	I0617 12:01:58.673863  165698 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0617 12:01:58.673907  165698 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:01:58.712607  165698 cri.go:89] found id: ""
	I0617 12:01:58.712703  165698 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0617 12:01:58.731676  165698 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:01:58.741645  165698 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:01:58.741666  165698 kubeadm.go:156] found existing configuration files:
	
	I0617 12:01:58.741709  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:01:58.750871  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:01:58.750931  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:01:58.760545  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:01:58.769701  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:01:58.769776  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:01:58.779348  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:01:58.788507  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:01:58.788566  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:01:58.799220  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:01:58.808403  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:01:58.808468  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 12:01:58.818169  165698 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:01:58.828079  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:58.962164  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:59.679319  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:01:59.903216  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:00.026243  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
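	With no reusable kubeconfigs on disk, the restart path regenerates everything through individual `kubeadm init phase` calls using the version-pinned binaries minikube stages on the node, then polls for the apiserver process (the pgrep retries below). A sketch of the same sequence, with binary path and config file as logged above; the until-loop is an assumed stand-in for minikube's own retry logic:

	# Run each init phase in order; the unquoted $phase intentionally word-splits into "certs all" etc.
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	done
	# Then wait for the apiserver process to appear, as the retries below do.
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 0.5; done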
	I0617 12:02:00.126201  165698 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:02:00.126314  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:00.627227  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:01.126836  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:01.626524  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:02.126619  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:02.626434  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:01:58.727229  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:01.226021  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
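	Process 165060 is blocked here on the metrics-server pod, which keeps reporting Ready=False throughout this window. A one-off equivalent of that readiness poll from the host would look like the following, assuming the kubectl context is named after the embed-certs-136195 profile:

	kubectl --context embed-certs-136195 -n kube-system \
	  wait --for=condition=Ready pod/metrics-server-569cc877fc-dmhfs --timeout=6m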
	I0617 12:02:02.473035  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:02.473477  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:02:02.473505  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:02:02.473458  166802 retry.go:31] will retry after 3.132988331s: waiting for machine to come up
	I0617 12:02:05.607981  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:05.608354  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | unable to find current IP address of domain default-k8s-diff-port-991309 in network mk-default-k8s-diff-port-991309
	I0617 12:02:05.608391  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | I0617 12:02:05.608310  166802 retry.go:31] will retry after 3.312972752s: waiting for machine to come up
	I0617 12:02:03.126687  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:03.626469  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:04.126347  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:04.626548  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:05.127142  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:05.626937  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:06.126479  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:06.626466  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:07.126806  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:07.626814  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:03.724216  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:06.224335  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:08.224842  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:10.217135  164809 start.go:364] duration metric: took 54.298812889s to acquireMachinesLock for "no-preload-152830"
	I0617 12:02:10.217192  164809 start.go:96] Skipping create...Using existing machine configuration
	I0617 12:02:10.217204  164809 fix.go:54] fixHost starting: 
	I0617 12:02:10.217633  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:10.217673  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:10.238636  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44149
	I0617 12:02:10.239091  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:10.239596  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:02:10.239622  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:10.239997  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:10.240214  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:10.240397  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetState
	I0617 12:02:10.242141  164809 fix.go:112] recreateIfNeeded on no-preload-152830: state=Stopped err=<nil>
	I0617 12:02:10.242162  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	W0617 12:02:10.242324  164809 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 12:02:10.244888  164809 out.go:177] * Restarting existing kvm2 VM for "no-preload-152830" ...
	I0617 12:02:08.922547  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:08.922966  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Found IP for machine: 192.168.50.125
	I0617 12:02:08.922996  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Reserving static IP address...
	I0617 12:02:08.923013  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has current primary IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:08.923437  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-991309", mac: "52:54:00:4e:6e:f5", ip: "192.168.50.125"} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:08.923484  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Reserved static IP address: 192.168.50.125
	I0617 12:02:08.923514  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | skip adding static IP to network mk-default-k8s-diff-port-991309 - found existing host DHCP lease matching {name: "default-k8s-diff-port-991309", mac: "52:54:00:4e:6e:f5", ip: "192.168.50.125"}
	I0617 12:02:08.923533  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Getting to WaitForSSH function...
	I0617 12:02:08.923550  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Waiting for SSH to be available...
	I0617 12:02:08.925667  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:08.926017  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:08.926050  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:08.926203  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Using SSH client type: external
	I0617 12:02:08.926228  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa (-rw-------)
	I0617 12:02:08.926269  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 12:02:08.926290  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | About to run SSH command:
	I0617 12:02:08.926316  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | exit 0
	I0617 12:02:09.051973  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | SSH cmd err, output: <nil>: 
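	To detect SSH availability, the kvm2 driver shells out to the system ssh client with the options logged above and runs `exit 0`; a zero exit status means the guest's sshd is up and the key is accepted. The same probe by hand, using the key path, user, and address from the log:

	ssh -F /dev/null \
	    -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	    -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
	    -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	    -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -o IdentitiesOnly=yes \
	    -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa \
	    -p 22 docker@192.168.50.125 'exit 0' && echo "SSH is available"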
	I0617 12:02:09.052329  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetConfigRaw
	I0617 12:02:09.052946  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetIP
	I0617 12:02:09.055156  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.055509  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.055541  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.055748  166103 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/config.json ...
	I0617 12:02:09.055940  166103 machine.go:94] provisionDockerMachine start ...
	I0617 12:02:09.055960  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:09.056162  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.058451  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.058826  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.058860  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.058961  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.059155  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.059289  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.059440  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.059583  166103 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:09.059796  166103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0617 12:02:09.059813  166103 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 12:02:09.163974  166103 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0617 12:02:09.164020  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetMachineName
	I0617 12:02:09.164281  166103 buildroot.go:166] provisioning hostname "default-k8s-diff-port-991309"
	I0617 12:02:09.164312  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetMachineName
	I0617 12:02:09.164499  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.167194  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.167606  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.167632  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.167856  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.168097  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.168285  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.168414  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.168571  166103 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:09.168795  166103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0617 12:02:09.168811  166103 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-991309 && echo "default-k8s-diff-port-991309" | sudo tee /etc/hostname
	I0617 12:02:09.290435  166103 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-991309
	
	I0617 12:02:09.290470  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.293538  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.293879  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.293902  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.294132  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.294361  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.294574  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.294753  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.294943  166103 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:09.295188  166103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0617 12:02:09.295209  166103 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-991309' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-991309/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-991309' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 12:02:09.408702  166103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 12:02:09.408736  166103 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 12:02:09.408777  166103 buildroot.go:174] setting up certificates
	I0617 12:02:09.408789  166103 provision.go:84] configureAuth start
	I0617 12:02:09.408798  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetMachineName
	I0617 12:02:09.409122  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetIP
	I0617 12:02:09.411936  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.412304  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.412335  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.412522  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.414598  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.414914  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.414942  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.415054  166103 provision.go:143] copyHostCerts
	I0617 12:02:09.415121  166103 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 12:02:09.415132  166103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 12:02:09.415182  166103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 12:02:09.415264  166103 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 12:02:09.415271  166103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 12:02:09.415290  166103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 12:02:09.415344  166103 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 12:02:09.415353  166103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 12:02:09.415378  166103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 12:02:09.415439  166103 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-991309 san=[127.0.0.1 192.168.50.125 default-k8s-diff-port-991309 localhost minikube]
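	provision.go generates the Docker-machine server certificate in Go, signed by the local CA with the org and SANs listed above. Purely as an illustration of what that cert contains (this is not what the test actually runs), an equivalent pair of openssl commands would be:

	# Hypothetical openssl equivalent of the in-process server cert generation (illustration only).
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.default-k8s-diff-port-991309"
	openssl x509 -req -in server.csr \
	  -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 \
	  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.50.125,DNS:default-k8s-diff-port-991309,DNS:localhost,DNS:minikube")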
	I0617 12:02:09.534010  166103 provision.go:177] copyRemoteCerts
	I0617 12:02:09.534082  166103 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 12:02:09.534121  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.536707  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.537143  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.537176  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.537352  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.537516  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.537687  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.537840  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:09.622292  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0617 12:02:09.652653  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0617 12:02:09.676801  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 12:02:09.700701  166103 provision.go:87] duration metric: took 291.898478ms to configureAuth
	I0617 12:02:09.700734  166103 buildroot.go:189] setting minikube options for container-runtime
	I0617 12:02:09.700931  166103 config.go:182] Loaded profile config "default-k8s-diff-port-991309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:02:09.701023  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.703710  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.704138  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.704171  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.704330  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.704537  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.704730  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.704895  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.705058  166103 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:09.705243  166103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0617 12:02:09.705262  166103 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 12:02:09.974077  166103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 12:02:09.974109  166103 machine.go:97] duration metric: took 918.156221ms to provisionDockerMachine
	I0617 12:02:09.974120  166103 start.go:293] postStartSetup for "default-k8s-diff-port-991309" (driver="kvm2")
	I0617 12:02:09.974131  166103 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 12:02:09.974155  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:09.974502  166103 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 12:02:09.974544  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:09.977677  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.978073  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:09.978097  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:09.978225  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:09.978407  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:09.978583  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:09.978734  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:10.067068  166103 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 12:02:10.071843  166103 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 12:02:10.071870  166103 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 12:02:10.071934  166103 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 12:02:10.072024  166103 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 12:02:10.072128  166103 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 12:02:10.082041  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:02:10.107855  166103 start.go:296] duration metric: took 133.717924ms for postStartSetup
	I0617 12:02:10.107903  166103 fix.go:56] duration metric: took 19.607369349s for fixHost
	I0617 12:02:10.107932  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:10.110742  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.111135  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:10.111169  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.111294  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:10.111527  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:10.111674  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:10.111861  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:10.111980  166103 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:10.112205  166103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0617 12:02:10.112220  166103 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0617 12:02:10.216945  166103 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718625730.186446687
	
	I0617 12:02:10.216973  166103 fix.go:216] guest clock: 1718625730.186446687
	I0617 12:02:10.216983  166103 fix.go:229] Guest: 2024-06-17 12:02:10.186446687 +0000 UTC Remote: 2024-06-17 12:02:10.107909348 +0000 UTC m=+152.716337101 (delta=78.537339ms)
	I0617 12:02:10.217033  166103 fix.go:200] guest clock delta is within tolerance: 78.537339ms
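	The mangled `date +%!s(MISSING).%!N(MISSING)` above appears to be Go's fmt escaping of the literal command `date +%s.%N`: fix.go reads the guest's wall clock over SSH, compares it to the host clock, and treats a small delta (here ~78ms) as within tolerance. A rough manual equivalent, with hypothetical $SSH_KEY and $GUEST_IP placeholders:

	host_now=$(date +%s.%N)
	guest_now=$(ssh -i "$SSH_KEY" "docker@$GUEST_IP" 'date +%s.%N')
	# Delta in seconds; the log above reports ~0.078s, well inside tolerance.
	echo "guest clock delta: $(echo "$guest_now - $host_now" | bc -l)s"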
	I0617 12:02:10.217039  166103 start.go:83] releasing machines lock for "default-k8s-diff-port-991309", held for 19.716554323s
	I0617 12:02:10.217073  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:10.217363  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetIP
	I0617 12:02:10.220429  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.220897  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:10.220927  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.221083  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:10.221655  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:10.221870  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:10.221965  166103 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 12:02:10.222026  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:10.222094  166103 ssh_runner.go:195] Run: cat /version.json
	I0617 12:02:10.222122  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:10.225337  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.225673  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.225710  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:10.225730  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.226015  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:10.226172  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:10.226202  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:10.226242  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:10.226363  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:10.226447  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:10.226508  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:10.226591  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:10.226687  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:10.226840  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:10.334316  166103 ssh_runner.go:195] Run: systemctl --version
	I0617 12:02:10.340584  166103 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 12:02:10.489359  166103 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 12:02:10.497198  166103 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 12:02:10.497267  166103 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 12:02:10.517001  166103 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 12:02:10.517032  166103 start.go:494] detecting cgroup driver to use...
	I0617 12:02:10.517110  166103 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 12:02:10.536520  166103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 12:02:10.550478  166103 docker.go:217] disabling cri-docker service (if available) ...
	I0617 12:02:10.550542  166103 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 12:02:10.564437  166103 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 12:02:10.578554  166103 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 12:02:10.710346  166103 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 12:02:10.891637  166103 docker.go:233] disabling docker service ...
	I0617 12:02:10.891694  166103 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 12:02:10.908300  166103 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 12:02:10.921663  166103 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 12:02:11.062715  166103 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 12:02:11.201061  166103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 12:02:11.216120  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 12:02:11.237213  166103 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0617 12:02:11.237286  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.248171  166103 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 12:02:11.248238  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.259159  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.270217  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.280841  166103 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 12:02:11.291717  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.302084  166103 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:11.319559  166103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
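
The sed/grep edits above (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) leave /etc/crio/crio.conf.d/02-crio.conf with settings roughly equivalent to the fragment below. This is a sketch reconstructed from the commands in this log, with section headers placed according to CRI-O's documented layout; it is not a dump taken from the node.

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
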
	I0617 12:02:11.331992  166103 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 12:02:11.342435  166103 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 12:02:11.342494  166103 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 12:02:11.357436  166103 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 12:02:11.367406  166103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:02:11.493416  166103 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0617 12:02:11.629980  166103 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 12:02:11.630055  166103 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 12:02:11.636456  166103 start.go:562] Will wait 60s for crictl version
	I0617 12:02:11.636540  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:02:11.642817  166103 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 12:02:11.681563  166103 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 12:02:11.681655  166103 ssh_runner.go:195] Run: crio --version
	I0617 12:02:11.712576  166103 ssh_runner.go:195] Run: crio --version
	I0617 12:02:11.753826  166103 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0617 12:02:11.755256  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetIP
	I0617 12:02:11.758628  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:11.759006  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:11.759041  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:11.759252  166103 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0617 12:02:11.763743  166103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:02:11.780286  166103 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-991309 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.1 ClusterName:default-k8s-diff-port-991309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 12:02:11.780455  166103 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 12:02:11.780528  166103 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:02:11.819396  166103 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0617 12:02:11.819481  166103 ssh_runner.go:195] Run: which lz4
	I0617 12:02:11.824047  166103 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0617 12:02:11.828770  166103 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0617 12:02:11.828807  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0617 12:02:08.127233  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:08.626498  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:09.126712  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:09.627284  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:10.126446  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:10.627249  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:11.126428  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:11.626638  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:12.127091  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:12.627361  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:10.226209  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:12.227824  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:10.246388  164809 main.go:141] libmachine: (no-preload-152830) Calling .Start
	I0617 12:02:10.246608  164809 main.go:141] libmachine: (no-preload-152830) Ensuring networks are active...
	I0617 12:02:10.247397  164809 main.go:141] libmachine: (no-preload-152830) Ensuring network default is active
	I0617 12:02:10.247789  164809 main.go:141] libmachine: (no-preload-152830) Ensuring network mk-no-preload-152830 is active
	I0617 12:02:10.248192  164809 main.go:141] libmachine: (no-preload-152830) Getting domain xml...
	I0617 12:02:10.248869  164809 main.go:141] libmachine: (no-preload-152830) Creating domain...
	I0617 12:02:11.500721  164809 main.go:141] libmachine: (no-preload-152830) Waiting to get IP...
	I0617 12:02:11.501614  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:11.502169  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:11.502254  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:11.502131  166976 retry.go:31] will retry after 281.343691ms: waiting for machine to come up
	I0617 12:02:11.785597  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:11.786047  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:11.786082  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:11.785983  166976 retry.go:31] will retry after 303.221815ms: waiting for machine to come up
	I0617 12:02:12.090367  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:12.090919  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:12.090945  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:12.090826  166976 retry.go:31] will retry after 422.250116ms: waiting for machine to come up
	I0617 12:02:12.514456  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:12.515026  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:12.515055  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:12.515001  166976 retry.go:31] will retry after 513.394077ms: waiting for machine to come up
	I0617 12:02:13.029811  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:13.030495  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:13.030522  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:13.030449  166976 retry.go:31] will retry after 596.775921ms: waiting for machine to come up
	I0617 12:02:13.387031  166103 crio.go:462] duration metric: took 1.563017054s to copy over tarball
	I0617 12:02:13.387108  166103 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0617 12:02:15.664139  166103 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.276994761s)
	I0617 12:02:15.664177  166103 crio.go:469] duration metric: took 2.277117031s to extract the tarball
	I0617 12:02:15.664188  166103 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0617 12:02:15.703690  166103 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:02:15.757605  166103 crio.go:514] all images are preloaded for cri-o runtime.
	I0617 12:02:15.757634  166103 cache_images.go:84] Images are preloaded, skipping loading
	I0617 12:02:15.757644  166103 kubeadm.go:928] updating node { 192.168.50.125 8444 v1.30.1 crio true true} ...
	I0617 12:02:15.757784  166103 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-991309 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-991309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 12:02:15.757874  166103 ssh_runner.go:195] Run: crio config
	I0617 12:02:15.808350  166103 cni.go:84] Creating CNI manager for ""
	I0617 12:02:15.808380  166103 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:02:15.808397  166103 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 12:02:15.808434  166103 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.125 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-991309 NodeName:default-k8s-diff-port-991309 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0617 12:02:15.808633  166103 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.125
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-991309"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0617 12:02:15.808709  166103 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0617 12:02:15.818891  166103 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 12:02:15.818964  166103 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 12:02:15.828584  166103 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0617 12:02:15.846044  166103 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 12:02:15.862572  166103 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0617 12:02:15.880042  166103 ssh_runner.go:195] Run: grep 192.168.50.125	control-plane.minikube.internal$ /etc/hosts
	I0617 12:02:15.884470  166103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:02:15.897031  166103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:02:16.013826  166103 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:02:16.030366  166103 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309 for IP: 192.168.50.125
	I0617 12:02:16.030391  166103 certs.go:194] generating shared ca certs ...
	I0617 12:02:16.030408  166103 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:02:16.030590  166103 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 12:02:16.030650  166103 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 12:02:16.030668  166103 certs.go:256] generating profile certs ...
	I0617 12:02:16.030793  166103 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/client.key
	I0617 12:02:16.030876  166103 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/apiserver.key.02769a34
	I0617 12:02:16.030919  166103 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/proxy-client.key
	I0617 12:02:16.031024  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 12:02:16.031051  166103 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 12:02:16.031060  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 12:02:16.031080  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 12:02:16.031103  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 12:02:16.031122  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 12:02:16.031179  166103 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:02:16.031991  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 12:02:16.066789  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 12:02:16.094522  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 12:02:16.119693  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 12:02:16.155810  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0617 12:02:16.186788  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 12:02:16.221221  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 12:02:16.248948  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0617 12:02:16.273404  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 12:02:16.296958  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 12:02:16.320047  166103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 12:02:16.349598  166103 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 12:02:16.367499  166103 ssh_runner.go:195] Run: openssl version
	I0617 12:02:16.373596  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 12:02:16.384778  166103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 12:02:16.389521  166103 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 12:02:16.389574  166103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 12:02:16.395523  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
	I0617 12:02:16.406357  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 12:02:16.417139  166103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 12:02:16.421629  166103 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 12:02:16.421679  166103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 12:02:16.427323  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 12:02:16.438649  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 12:02:16.450042  166103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:16.454587  166103 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:16.454636  166103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:16.460677  166103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 12:02:16.472886  166103 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 12:02:16.477630  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 12:02:16.483844  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 12:02:16.490123  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 12:02:16.497606  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 12:02:16.504066  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 12:02:16.510597  166103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0617 12:02:16.518270  166103 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-991309 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.1 ClusterName:default-k8s-diff-port-991309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:02:16.518371  166103 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 12:02:16.518439  166103 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:02:16.569103  166103 cri.go:89] found id: ""
	I0617 12:02:16.569179  166103 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0617 12:02:16.580328  166103 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0617 12:02:16.580353  166103 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0617 12:02:16.580360  166103 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0617 12:02:16.580409  166103 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0617 12:02:16.591277  166103 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0617 12:02:16.592450  166103 kubeconfig.go:125] found "default-k8s-diff-port-991309" server: "https://192.168.50.125:8444"
	I0617 12:02:16.594770  166103 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0617 12:02:16.605669  166103 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.125
	I0617 12:02:16.605728  166103 kubeadm.go:1154] stopping kube-system containers ...
	I0617 12:02:16.605745  166103 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0617 12:02:16.605810  166103 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:02:16.654529  166103 cri.go:89] found id: ""
	I0617 12:02:16.654620  166103 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0617 12:02:16.672923  166103 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:02:16.683485  166103 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:02:16.683514  166103 kubeadm.go:156] found existing configuration files:
	
	I0617 12:02:16.683576  166103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0617 12:02:16.693533  166103 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:02:16.693614  166103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:02:16.703670  166103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0617 12:02:16.716352  166103 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:02:16.716413  166103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:02:16.729336  166103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0617 12:02:16.739183  166103 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:02:16.739249  166103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:02:16.748978  166103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0617 12:02:16.758195  166103 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:02:16.758262  166103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 12:02:16.767945  166103 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:02:16.777773  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:16.919605  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:13.126836  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:13.626460  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:14.127261  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:14.627161  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:15.126580  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:15.627082  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:16.127163  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:16.626524  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:17.126469  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:17.626488  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:14.728717  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:17.225452  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:13.629097  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:13.629723  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:13.629826  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:13.629705  166976 retry.go:31] will retry after 588.18471ms: waiting for machine to come up
	I0617 12:02:14.219111  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:14.219672  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:14.219704  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:14.219611  166976 retry.go:31] will retry after 889.359727ms: waiting for machine to come up
	I0617 12:02:15.110916  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:15.111528  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:15.111559  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:15.111473  166976 retry.go:31] will retry after 1.139454059s: waiting for machine to come up
	I0617 12:02:16.252051  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:16.252601  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:16.252636  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:16.252534  166976 retry.go:31] will retry after 1.189357648s: waiting for machine to come up
	I0617 12:02:17.443845  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:17.444370  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:17.444403  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:17.444310  166976 retry.go:31] will retry after 1.614769478s: waiting for machine to come up
	I0617 12:02:18.068811  166103 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.149162388s)
	I0617 12:02:18.068870  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:18.301209  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:18.362153  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:18.454577  166103 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:02:18.454674  166103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:18.954929  166103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:19.454795  166103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:19.505453  166103 api_server.go:72] duration metric: took 1.050874914s to wait for apiserver process to appear ...
	I0617 12:02:19.505490  166103 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:02:19.505518  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:19.506056  166103 api_server.go:269] stopped: https://192.168.50.125:8444/healthz: Get "https://192.168.50.125:8444/healthz": dial tcp 192.168.50.125:8444: connect: connection refused
	I0617 12:02:20.005681  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:22.216162  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:02:22.216214  166103 api_server.go:103] status: https://192.168.50.125:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:02:22.216234  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:22.239561  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:02:22.239635  166103 api_server.go:103] status: https://192.168.50.125:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:02:18.126897  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:18.627145  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:19.126724  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:19.626498  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:20.126389  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:20.627190  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:21.126480  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:21.627210  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:22.127273  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:22.626691  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:19.227344  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:21.725689  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:19.061035  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:19.061555  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:19.061588  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:19.061520  166976 retry.go:31] will retry after 2.385838312s: waiting for machine to come up
	I0617 12:02:21.448745  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:21.449239  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:21.449266  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:21.449208  166976 retry.go:31] will retry after 3.308788046s: waiting for machine to come up
	I0617 12:02:22.505636  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:22.509888  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:02:22.509916  166103 api_server.go:103] status: https://192.168.50.125:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:02:23.006285  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:23.011948  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:02:23.011983  166103 api_server.go:103] status: https://192.168.50.125:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:02:23.505640  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:02:23.510358  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 200:
	ok
	I0617 12:02:23.516663  166103 api_server.go:141] control plane version: v1.30.1
	I0617 12:02:23.516686  166103 api_server.go:131] duration metric: took 4.011188976s to wait for apiserver health ...
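
The wait that just completed mirrors a simple poll loop: GET /healthz, treat 403 (RBAC not yet bootstrapped) and 500 (poststarthooks still running) as not-ready, and stop once a plain 200 "ok" comes back. The standalone Go sketch below reproduces that behaviour for manual debugging; it is illustrative only (not minikube's own code), and it skips TLS verification because the apiserver presents a cluster-local certificate.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls an apiserver /healthz endpoint until it returns
    // HTTP 200 or the deadline expires. Non-200 answers (403 before RBAC
    // bootstrap, 500 while poststarthooks run) are treated as "not ready yet".
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// InsecureSkipVerify only because the apiserver serves a
    		// cluster-local certificate; do not do this in production code.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
    	// Endpoint taken from this log; adjust for another cluster.
    	if err := waitForHealthz("https://192.168.50.125:8444/healthz", 2*time.Minute); err != nil {
    		panic(err)
    	}
    }
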
	I0617 12:02:23.516694  166103 cni.go:84] Creating CNI manager for ""
	I0617 12:02:23.516700  166103 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:02:23.518498  166103 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0617 12:02:23.519722  166103 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0617 12:02:23.530145  166103 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0617 12:02:23.552805  166103 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:02:23.564825  166103 system_pods.go:59] 8 kube-system pods found
	I0617 12:02:23.564853  166103 system_pods.go:61] "coredns-7db6d8ff4d-mnw24" [1e6c4ff3-f0dc-43da-abd8-baaed7dca40c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0617 12:02:23.564863  166103 system_pods.go:61] "etcd-default-k8s-diff-port-991309" [820a4f27-cf83-4edb-a2ea-edba6673d851] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0617 12:02:23.564871  166103 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-991309" [26e6c19d-6f70-4924-83f5-563c8508c9e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0617 12:02:23.564877  166103 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-991309" [01e7c468-98a6-48f3-a158-59e97fa8279c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0617 12:02:23.564885  166103 system_pods.go:61] "kube-proxy-jn5kp" [d6935148-7ee8-4655-8327-9f1ee4c933de] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0617 12:02:23.564894  166103 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-991309" [53ecd22c-05cf-48a5-b7e5-925392085f7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0617 12:02:23.564899  166103 system_pods.go:61] "metrics-server-569cc877fc-n2svp" [5b637d97-3183-4324-98cf-dd69a2968578] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:02:23.564908  166103 system_pods.go:61] "storage-provisioner" [92b20aec-29c2-4256-86be-7f58f66585dd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0617 12:02:23.564913  166103 system_pods.go:74] duration metric: took 12.089276ms to wait for pod list to return data ...
	I0617 12:02:23.564919  166103 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:02:23.573455  166103 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:02:23.573480  166103 node_conditions.go:123] node cpu capacity is 2
	I0617 12:02:23.573492  166103 node_conditions.go:105] duration metric: took 8.568721ms to run NodePressure ...
	I0617 12:02:23.573509  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:23.918292  166103 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0617 12:02:23.922992  166103 kubeadm.go:733] kubelet initialised
	I0617 12:02:23.923019  166103 kubeadm.go:734] duration metric: took 4.69627ms waiting for restarted kubelet to initialise ...
	I0617 12:02:23.923027  166103 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:02:23.927615  166103 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:23.932203  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.932225  166103 pod_ready.go:81] duration metric: took 4.590359ms for pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:23.932233  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.932239  166103 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:23.936802  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.936825  166103 pod_ready.go:81] duration metric: took 4.579036ms for pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:23.936835  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.936840  166103 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:23.942877  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.942903  166103 pod_ready.go:81] duration metric: took 6.055748ms for pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:23.942927  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.942935  166103 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:23.955830  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.955851  166103 pod_ready.go:81] duration metric: took 12.903911ms for pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:23.955861  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.955869  166103 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jn5kp" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:24.356654  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "kube-proxy-jn5kp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:24.356682  166103 pod_ready.go:81] duration metric: took 400.805294ms for pod "kube-proxy-jn5kp" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:24.356692  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "kube-proxy-jn5kp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:24.356699  166103 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:24.765108  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:24.765133  166103 pod_ready.go:81] duration metric: took 408.42568ms for pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:24.765145  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:24.765152  166103 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:25.156898  166103 pod_ready.go:97] node "default-k8s-diff-port-991309" hosting pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:25.156927  166103 pod_ready.go:81] duration metric: took 391.769275ms for pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:25.156939  166103 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991309" hosting pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:25.156946  166103 pod_ready.go:38] duration metric: took 1.233911476s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:02:25.156968  166103 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0617 12:02:25.170925  166103 ops.go:34] apiserver oom_adj: -16
	I0617 12:02:25.170963  166103 kubeadm.go:591] duration metric: took 8.590593327s to restartPrimaryControlPlane
	I0617 12:02:25.170976  166103 kubeadm.go:393] duration metric: took 8.652716269s to StartCluster
	I0617 12:02:25.170998  166103 settings.go:142] acquiring lock: {Name:mkf6da6d5dcdf32cef469c2b75da17d11fa1e39e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:02:25.171111  166103 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 12:02:25.173919  166103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/kubeconfig: {Name:mkf81bd1831c0194f784e5c176b265c5061bea5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:02:25.174286  166103 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.125 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 12:02:25.176186  166103 out.go:177] * Verifying Kubernetes components...
	I0617 12:02:25.174347  166103 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0617 12:02:25.174528  166103 config.go:182] Loaded profile config "default-k8s-diff-port-991309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:02:25.177622  166103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:02:25.177632  166103 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-991309"
	I0617 12:02:25.177670  166103 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-991309"
	W0617 12:02:25.177684  166103 addons.go:243] addon metrics-server should already be in state true
	I0617 12:02:25.177721  166103 host.go:66] Checking if "default-k8s-diff-port-991309" exists ...
	I0617 12:02:25.177622  166103 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-991309"
	I0617 12:02:25.177789  166103 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-991309"
	W0617 12:02:25.177806  166103 addons.go:243] addon storage-provisioner should already be in state true
	I0617 12:02:25.177837  166103 host.go:66] Checking if "default-k8s-diff-port-991309" exists ...
	I0617 12:02:25.177628  166103 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-991309"
	I0617 12:02:25.177875  166103 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-991309"
	I0617 12:02:25.178173  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.178202  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.178251  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.178282  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.178299  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.178318  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.198817  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32781
	I0617 12:02:25.199064  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36763
	I0617 12:02:25.199513  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39825
	I0617 12:02:25.199902  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.199919  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.200633  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.201080  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.201110  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.201270  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.201286  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.201415  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.201427  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.201482  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.201786  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.201845  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.202268  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.202637  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.202663  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetState
	I0617 12:02:25.202989  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.203038  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.206439  166103 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-991309"
	W0617 12:02:25.206462  166103 addons.go:243] addon default-storageclass should already be in state true
	I0617 12:02:25.206492  166103 host.go:66] Checking if "default-k8s-diff-port-991309" exists ...
	I0617 12:02:25.206875  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.206921  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.218501  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37189
	I0617 12:02:25.218532  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34089
	I0617 12:02:25.218912  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.218986  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.219410  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.219429  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.219545  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.219561  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.219917  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.219920  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.220110  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetState
	I0617 12:02:25.220111  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetState
	I0617 12:02:25.221839  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:25.223920  166103 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0617 12:02:25.225213  166103 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0617 12:02:25.225232  166103 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0617 12:02:25.225260  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:25.224029  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:25.228780  166103 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:25.227545  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46073
	I0617 12:02:25.230084  166103 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:02:25.230100  166103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0617 12:02:25.230113  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:25.228465  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.229054  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:25.230179  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:25.229303  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.230215  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.230371  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:25.230542  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:25.230674  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:25.230723  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.230737  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.231150  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.231772  166103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:02:25.231802  166103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:02:25.234036  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.234476  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:25.234494  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.234755  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:25.234919  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:25.235079  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:25.235235  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:25.248352  166103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46349
	I0617 12:02:25.248851  166103 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:02:25.249306  166103 main.go:141] libmachine: Using API Version  1
	I0617 12:02:25.249330  166103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:02:25.249681  166103 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:02:25.249873  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetState
	I0617 12:02:25.251282  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .DriverName
	I0617 12:02:25.251512  166103 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0617 12:02:25.251529  166103 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0617 12:02:25.251551  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHHostname
	I0617 12:02:25.253963  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.254458  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:f5", ip: ""} in network mk-default-k8s-diff-port-991309: {Iface:virbr1 ExpiryTime:2024-06-17 13:02:02 +0000 UTC Type:0 Mac:52:54:00:4e:6e:f5 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:default-k8s-diff-port-991309 Clientid:01:52:54:00:4e:6e:f5}
	I0617 12:02:25.254484  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | domain default-k8s-diff-port-991309 has defined IP address 192.168.50.125 and MAC address 52:54:00:4e:6e:f5 in network mk-default-k8s-diff-port-991309
	I0617 12:02:25.254628  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHPort
	I0617 12:02:25.254941  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHKeyPath
	I0617 12:02:25.255229  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .GetSSHUsername
	I0617 12:02:25.255385  166103 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/default-k8s-diff-port-991309/id_rsa Username:docker}
	I0617 12:02:25.391207  166103 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:02:25.411906  166103 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-991309" to be "Ready" ...
	I0617 12:02:25.476025  166103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0617 12:02:25.566470  166103 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0617 12:02:25.566500  166103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0617 12:02:25.593744  166103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:02:25.620336  166103 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0617 12:02:25.620371  166103 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0617 12:02:25.700009  166103 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:02:25.700048  166103 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0617 12:02:25.769841  166103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:02:25.782207  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:25.782240  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:25.782576  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Closing plugin on server side
	I0617 12:02:25.782597  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:25.782610  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:25.782623  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:25.782632  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:25.782888  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:25.782916  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:25.789639  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:25.789662  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:25.789921  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:25.789941  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:26.600819  166103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.007014283s)
	I0617 12:02:26.600883  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:26.600898  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:26.600902  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:26.600917  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:26.601253  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:26.601295  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:26.601305  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:26.601325  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:26.601342  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:26.601353  166103 main.go:141] libmachine: Making call to close driver server
	I0617 12:02:26.601366  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:26.601370  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) Calling .Close
	I0617 12:02:26.601571  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Closing plugin on server side
	I0617 12:02:26.601590  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:26.601600  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:26.601615  166103 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-991309"
	I0617 12:02:26.601626  166103 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:02:26.601635  166103 main.go:141] libmachine: (default-k8s-diff-port-991309) DBG | Closing plugin on server side
	I0617 12:02:26.601638  166103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:02:26.604200  166103 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0617 12:02:26.605477  166103 addons.go:510] duration metric: took 1.431148263s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0617 12:02:27.415122  166103 node_ready.go:53] node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:23.126888  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:23.627274  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:24.127019  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:24.627337  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:25.126642  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:25.627064  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:26.126606  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:26.626803  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:27.126825  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:27.626799  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:24.223344  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:26.225129  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:24.760577  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:24.761063  164809 main.go:141] libmachine: (no-preload-152830) DBG | unable to find current IP address of domain no-preload-152830 in network mk-no-preload-152830
	I0617 12:02:24.761095  164809 main.go:141] libmachine: (no-preload-152830) DBG | I0617 12:02:24.760999  166976 retry.go:31] will retry after 3.793168135s: waiting for machine to come up
	I0617 12:02:28.558153  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.558708  164809 main.go:141] libmachine: (no-preload-152830) Found IP for machine: 192.168.39.173
	I0617 12:02:28.558735  164809 main.go:141] libmachine: (no-preload-152830) Reserving static IP address...
	I0617 12:02:28.558751  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has current primary IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.559214  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "no-preload-152830", mac: "52:54:00:c0:1a:fb", ip: "192.168.39.173"} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.559248  164809 main.go:141] libmachine: (no-preload-152830) DBG | skip adding static IP to network mk-no-preload-152830 - found existing host DHCP lease matching {name: "no-preload-152830", mac: "52:54:00:c0:1a:fb", ip: "192.168.39.173"}
	I0617 12:02:28.559263  164809 main.go:141] libmachine: (no-preload-152830) Reserved static IP address: 192.168.39.173
	I0617 12:02:28.559278  164809 main.go:141] libmachine: (no-preload-152830) Waiting for SSH to be available...
	I0617 12:02:28.559295  164809 main.go:141] libmachine: (no-preload-152830) DBG | Getting to WaitForSSH function...
	I0617 12:02:28.562122  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.562453  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.562482  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.562678  164809 main.go:141] libmachine: (no-preload-152830) DBG | Using SSH client type: external
	I0617 12:02:28.562706  164809 main.go:141] libmachine: (no-preload-152830) DBG | Using SSH private key: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa (-rw-------)
	I0617 12:02:28.562739  164809 main.go:141] libmachine: (no-preload-152830) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.173 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0617 12:02:28.562753  164809 main.go:141] libmachine: (no-preload-152830) DBG | About to run SSH command:
	I0617 12:02:28.562770  164809 main.go:141] libmachine: (no-preload-152830) DBG | exit 0
	I0617 12:02:28.687683  164809 main.go:141] libmachine: (no-preload-152830) DBG | SSH cmd err, output: <nil>: 
	I0617 12:02:28.688021  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetConfigRaw
	I0617 12:02:28.688649  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetIP
	I0617 12:02:28.691248  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.691585  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.691609  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.691895  164809 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/config.json ...
	I0617 12:02:28.692109  164809 machine.go:94] provisionDockerMachine start ...
	I0617 12:02:28.692132  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:28.692371  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:28.694371  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.694738  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.694766  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.694942  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:28.695130  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.695309  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.695490  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:28.695695  164809 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:28.695858  164809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0617 12:02:28.695869  164809 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 12:02:28.803687  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0617 12:02:28.803726  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetMachineName
	I0617 12:02:28.803996  164809 buildroot.go:166] provisioning hostname "no-preload-152830"
	I0617 12:02:28.804031  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetMachineName
	I0617 12:02:28.804333  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:28.806959  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.807395  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.807424  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.807547  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:28.807725  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.807895  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.808057  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:28.808216  164809 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:28.808420  164809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0617 12:02:28.808436  164809 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-152830 && echo "no-preload-152830" | sudo tee /etc/hostname
	I0617 12:02:28.931222  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-152830
	
	I0617 12:02:28.931259  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:28.934188  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.934536  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:28.934564  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:28.934822  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:28.935048  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.935218  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:28.935353  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:28.935593  164809 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:28.935814  164809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0617 12:02:28.935837  164809 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-152830' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-152830/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-152830' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 12:02:29.054126  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 12:02:29.054156  164809 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19084-112967/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-112967/.minikube}
	I0617 12:02:29.054173  164809 buildroot.go:174] setting up certificates
	I0617 12:02:29.054184  164809 provision.go:84] configureAuth start
	I0617 12:02:29.054195  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetMachineName
	I0617 12:02:29.054490  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetIP
	I0617 12:02:29.057394  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.057797  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.057830  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.057963  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:29.060191  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.060485  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.060514  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.060633  164809 provision.go:143] copyHostCerts
	I0617 12:02:29.060708  164809 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem, removing ...
	I0617 12:02:29.060722  164809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem
	I0617 12:02:29.060796  164809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/key.pem (1679 bytes)
	I0617 12:02:29.060963  164809 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem, removing ...
	I0617 12:02:29.060978  164809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem
	I0617 12:02:29.061003  164809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/ca.pem (1082 bytes)
	I0617 12:02:29.061065  164809 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem, removing ...
	I0617 12:02:29.061072  164809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem
	I0617 12:02:29.061090  164809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-112967/.minikube/cert.pem (1123 bytes)
	I0617 12:02:29.061139  164809 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem org=jenkins.no-preload-152830 san=[127.0.0.1 192.168.39.173 localhost minikube no-preload-152830]
	I0617 12:02:29.321179  164809 provision.go:177] copyRemoteCerts
	I0617 12:02:29.321232  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 12:02:29.321256  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:29.324217  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.324612  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.324642  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.324836  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:29.325043  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.325227  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:29.325386  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:02:29.410247  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0617 12:02:29.435763  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0617 12:02:29.462900  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0617 12:02:29.491078  164809 provision.go:87] duration metric: took 436.876068ms to configureAuth
	I0617 12:02:29.491120  164809 buildroot.go:189] setting minikube options for container-runtime
	I0617 12:02:29.491377  164809 config.go:182] Loaded profile config "no-preload-152830": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:02:29.491522  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:29.494581  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.495019  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.495052  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.495245  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:29.495555  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.495766  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.495897  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:29.496068  164809 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:29.496275  164809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0617 12:02:29.496296  164809 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0617 12:02:29.774692  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0617 12:02:29.774730  164809 machine.go:97] duration metric: took 1.082604724s to provisionDockerMachine
	I0617 12:02:29.774748  164809 start.go:293] postStartSetup for "no-preload-152830" (driver="kvm2")
	I0617 12:02:29.774765  164809 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 12:02:29.774785  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:29.775181  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 12:02:29.775220  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:29.778574  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.778959  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.778988  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.779154  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:29.779351  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.779575  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:29.779750  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:02:29.866959  164809 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 12:02:29.871319  164809 info.go:137] Remote host: Buildroot 2023.02.9
	I0617 12:02:29.871348  164809 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/addons for local assets ...
	I0617 12:02:29.871425  164809 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-112967/.minikube/files for local assets ...
	I0617 12:02:29.871535  164809 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem -> 1201742.pem in /etc/ssl/certs
	I0617 12:02:29.871648  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 12:02:29.881995  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:02:29.907614  164809 start.go:296] duration metric: took 132.84708ms for postStartSetup
	I0617 12:02:29.907669  164809 fix.go:56] duration metric: took 19.690465972s for fixHost
	I0617 12:02:29.907695  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:29.910226  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.910617  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:29.910644  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:29.910811  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:29.911162  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.911377  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:29.911571  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:29.911772  164809 main.go:141] libmachine: Using SSH client type: native
	I0617 12:02:29.911961  164809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0617 12:02:29.911972  164809 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0617 12:02:30.021051  164809 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718625749.993041026
	
	I0617 12:02:30.021079  164809 fix.go:216] guest clock: 1718625749.993041026
	I0617 12:02:30.021088  164809 fix.go:229] Guest: 2024-06-17 12:02:29.993041026 +0000 UTC Remote: 2024-06-17 12:02:29.907674102 +0000 UTC m=+356.579226401 (delta=85.366924ms)
	I0617 12:02:30.021113  164809 fix.go:200] guest clock delta is within tolerance: 85.366924ms
	I0617 12:02:30.021120  164809 start.go:83] releasing machines lock for "no-preload-152830", held for 19.803953246s
	I0617 12:02:30.021148  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:30.021403  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetIP
	I0617 12:02:30.024093  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.024600  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:30.024633  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.024830  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:30.025380  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:30.025552  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:02:30.025623  164809 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 12:02:30.025668  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:30.025767  164809 ssh_runner.go:195] Run: cat /version.json
	I0617 12:02:30.025798  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:02:30.028656  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.028826  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.029037  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:30.029068  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.029294  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:30.029336  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:30.029366  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:30.029528  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:30.029536  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:02:30.029764  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:02:30.029776  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:30.029957  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:02:30.029984  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:02:30.030161  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:02:30.135901  164809 ssh_runner.go:195] Run: systemctl --version
	I0617 12:02:30.142668  164809 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0617 12:02:30.296485  164809 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 12:02:30.302789  164809 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 12:02:30.302856  164809 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 12:02:30.319775  164809 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 12:02:30.319793  164809 start.go:494] detecting cgroup driver to use...
	I0617 12:02:30.319894  164809 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 12:02:30.335498  164809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 12:02:30.349389  164809 docker.go:217] disabling cri-docker service (if available) ...
	I0617 12:02:30.349427  164809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 12:02:30.363086  164809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 12:02:30.377383  164809 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 12:02:30.499956  164809 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 12:02:30.644098  164809 docker.go:233] disabling docker service ...
	I0617 12:02:30.644178  164809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 12:02:30.661490  164809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 12:02:30.675856  164809 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 12:02:30.819937  164809 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 12:02:30.932926  164809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 12:02:30.947638  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 12:02:30.966574  164809 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0617 12:02:30.966648  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:30.978339  164809 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0617 12:02:30.978416  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:30.989950  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:31.000644  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:31.011280  164809 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 12:02:31.022197  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:31.032780  164809 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:31.050053  164809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0617 12:02:31.062065  164809 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 12:02:31.073296  164809 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0617 12:02:31.073368  164809 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0617 12:02:31.087733  164809 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 12:02:31.098019  164809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:02:31.232495  164809 ssh_runner.go:195] Run: sudo systemctl restart crio
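
The steps from 12:02:30.966 to 12:02:31.050 rewrite /etc/crio/crio.conf.d/02-crio.conf with sed: pin the pause image, switch cgroup_manager to cgroupfs, and re-add conmon_cgroup = "pod", after which CRI-O is restarted. A small Go sketch of the same edits done with regexp; the starting values in conf are invented for illustration:

// Sketch (not minikube's code): apply the same config substitutions the log performs with sed.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.8"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Pin the pause image used for sandbox containers.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Use the cgroupfs cgroup driver to match the kubelet configuration.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line, then re-add it right after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$\n`).
		ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}
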
	I0617 12:02:31.371236  164809 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0617 12:02:31.371312  164809 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0617 12:02:31.376442  164809 start.go:562] Will wait 60s for crictl version
	I0617 12:02:31.376522  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.380416  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 12:02:31.426664  164809 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0617 12:02:31.426763  164809 ssh_runner.go:195] Run: crio --version
	I0617 12:02:31.456696  164809 ssh_runner.go:195] Run: crio --version
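
After restarting CRI-O the run waits up to 60s for /var/run/crio/crio.sock and for crictl version to answer. A generic sketch of that wait-with-deadline pattern, assuming crictl is installed locally (the real run issues the command over SSH with sudo):

// Sketch of a "wait up to N seconds for the CRI runtime" helper.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitFor retries check every 500ms until it succeeds or the timeout elapses.
func waitFor(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	err := waitFor(60*time.Second, func() error {
		return exec.Command("crictl", "version").Run()
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("CRI runtime is up")
}
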
	I0617 12:02:31.487696  164809 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0617 12:02:29.416369  166103 node_ready.go:53] node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:31.417357  166103 node_ready.go:53] node "default-k8s-diff-port-991309" has status "Ready":"False"
	I0617 12:02:28.126854  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:28.627278  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:29.126577  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:29.626475  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:30.127193  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:30.627229  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:31.126478  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:31.626336  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:32.126398  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:32.627005  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:28.724801  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:30.726589  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:33.225707  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:31.488972  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetIP
	I0617 12:02:31.491812  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:31.492191  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:02:31.492220  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:02:31.492411  164809 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0617 12:02:31.497100  164809 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:02:31.510949  164809 kubeadm.go:877] updating cluster {Name:no-preload-152830 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:no-preload-152830 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 12:02:31.511079  164809 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0617 12:02:31.511114  164809 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:02:31.546350  164809 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0617 12:02:31.546377  164809 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.1 registry.k8s.io/kube-controller-manager:v1.30.1 registry.k8s.io/kube-scheduler:v1.30.1 registry.k8s.io/kube-proxy:v1.30.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0617 12:02:31.546440  164809 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:31.546452  164809 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0617 12:02:31.546478  164809 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0617 12:02:31.546485  164809 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0617 12:02:31.546513  164809 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0617 12:02:31.546513  164809 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0617 12:02:31.546458  164809 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0617 12:02:31.546569  164809 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0617 12:02:31.548101  164809 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0617 12:02:31.548123  164809 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0617 12:02:31.548123  164809 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0617 12:02:31.548137  164809 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:31.548101  164809 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0617 12:02:31.548104  164809 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0617 12:02:31.548103  164809 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0617 12:02:31.548427  164809 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0617 12:02:31.714107  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0617 12:02:31.714819  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0617 12:02:31.715764  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0617 12:02:31.721844  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.1
	I0617 12:02:31.722172  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1
	I0617 12:02:31.739873  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1
	I0617 12:02:31.746705  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1
	I0617 12:02:31.814194  164809 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0617 12:02:31.814235  164809 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0617 12:02:31.814273  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.849549  164809 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:31.950803  164809 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0617 12:02:31.950858  164809 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0617 12:02:31.950907  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.950934  164809 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.1" does not exist at hash "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c" in container runtime
	I0617 12:02:31.950959  164809 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0617 12:02:31.950992  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.951005  164809 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.1" does not exist at hash "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035" in container runtime
	I0617 12:02:31.951030  164809 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.1" does not exist at hash "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a" in container runtime
	I0617 12:02:31.951053  164809 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.1
	I0617 12:02:31.951090  164809 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.1" does not exist at hash "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd" in container runtime
	I0617 12:02:31.951103  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.951113  164809 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.1
	I0617 12:02:31.951146  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.951053  164809 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.1
	I0617 12:02:31.951179  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.951217  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0617 12:02:31.951266  164809 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0617 12:02:31.951289  164809 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:31.951319  164809 ssh_runner.go:195] Run: which crictl
	I0617 12:02:31.967596  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.1
	I0617 12:02:31.967802  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0617 12:02:32.018505  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:02:32.018542  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1
	I0617 12:02:32.018623  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.1
	I0617 12:02:32.018664  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0617 12:02:32.018738  164809 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.1
	I0617 12:02:32.018755  164809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0617 12:02:32.026154  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1
	I0617 12:02:32.026270  164809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.1
	I0617 12:02:32.046161  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0617 12:02:32.046288  164809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0617 12:02:32.126665  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0617 12:02:32.126755  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0617 12:02:32.126765  164809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0617 12:02:32.126814  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1
	I0617 12:02:32.126829  164809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0617 12:02:32.126867  164809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0617 12:02:32.126898  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0617 12:02:32.126911  164809 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0617 12:02:32.126935  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0617 12:02:32.126965  164809 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1
	I0617 12:02:32.127008  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.1 (exists)
	I0617 12:02:32.127058  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0617 12:02:32.127060  164809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0617 12:02:32.142790  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.1 (exists)
	I0617 12:02:32.142816  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.1 (exists)
	I0617 12:02:32.143132  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0617 12:02:32.915885  166103 node_ready.go:49] node "default-k8s-diff-port-991309" has status "Ready":"True"
	I0617 12:02:32.915912  166103 node_ready.go:38] duration metric: took 7.503979113s for node "default-k8s-diff-port-991309" to be "Ready" ...
	I0617 12:02:32.915924  166103 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:02:32.921198  166103 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:34.927290  166103 pod_ready.go:102] pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:33.126753  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:33.627017  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:34.126558  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:34.626976  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:35.126410  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:35.627309  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:36.126958  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:36.626349  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:37.126815  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:37.627332  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:35.724326  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:37.725145  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:36.125679  164809 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.1: (3.998551072s)
	I0617 12:02:36.125727  164809 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.1 (exists)
	I0617 12:02:36.125773  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.998809852s)
	I0617 12:02:36.125804  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0617 12:02:36.125838  164809 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.1
	I0617 12:02:36.125894  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1
	I0617 12:02:37.885028  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1: (1.759100554s)
	I0617 12:02:37.885054  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 from cache
	I0617 12:02:37.885073  164809 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0617 12:02:37.885122  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0617 12:02:37.429419  166103 pod_ready.go:102] pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:39.933476  166103 pod_ready.go:92] pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:39.933508  166103 pod_ready.go:81] duration metric: took 7.012285571s for pod "coredns-7db6d8ff4d-mnw24" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.933521  166103 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.940139  166103 pod_ready.go:92] pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:39.940162  166103 pod_ready.go:81] duration metric: took 6.633405ms for pod "etcd-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.940175  166103 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.945285  166103 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:39.945305  166103 pod_ready.go:81] duration metric: took 5.12303ms for pod "kube-apiserver-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.945317  166103 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.950992  166103 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:39.951021  166103 pod_ready.go:81] duration metric: took 5.6962ms for pod "kube-controller-manager-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.951034  166103 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jn5kp" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.955874  166103 pod_ready.go:92] pod "kube-proxy-jn5kp" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:39.955894  166103 pod_ready.go:81] duration metric: took 4.852842ms for pod "kube-proxy-jn5kp" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:39.955905  166103 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:40.327000  166103 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace has status "Ready":"True"
	I0617 12:02:40.327035  166103 pod_ready.go:81] duration metric: took 371.121545ms for pod "kube-scheduler-default-k8s-diff-port-991309" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:40.327049  166103 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:42.334620  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:38.126868  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:38.627367  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:39.127148  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:39.626571  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:40.126379  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:40.626747  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:41.126485  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:41.626372  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:42.126904  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:42.627293  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:39.727666  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:42.223700  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:39.992863  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.10770953s)
	I0617 12:02:39.992903  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0617 12:02:39.992934  164809 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0617 12:02:39.992989  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0617 12:02:41.851420  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1: (1.858400961s)
	I0617 12:02:41.851452  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 from cache
	I0617 12:02:41.851508  164809 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0617 12:02:41.851578  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0617 12:02:44.833842  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:46.834443  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:43.127137  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:43.626521  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:44.127017  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:44.626824  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:45.126475  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:45.626535  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:46.127423  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:46.626605  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:47.127029  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:47.627431  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:44.224685  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:46.225071  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:44.211669  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1: (2.360046418s)
	I0617 12:02:44.211702  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 from cache
	I0617 12:02:44.211726  164809 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0617 12:02:44.211795  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0617 12:02:45.162389  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0617 12:02:45.162456  164809 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0617 12:02:45.162542  164809 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0617 12:02:47.414088  164809 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1: (2.251500525s)
	I0617 12:02:47.414130  164809 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19084-112967/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 from cache
	I0617 12:02:47.414164  164809 cache_images.go:123] Successfully loaded all cached images
	I0617 12:02:47.414172  164809 cache_images.go:92] duration metric: took 15.867782566s to LoadCachedImages
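
At this point every image missing from the runtime has been transferred from the local cache and loaded with sudo podman load -i, ending with "Successfully loaded all cached images". A stripped-down sketch of that load loop; the tarball paths are copied from the log, and this is not minikube's cache_images implementation:

// Sketch: load cached image tarballs into the container runtime via podman.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	tarballs := []string{
		"/var/lib/minikube/images/etcd_3.5.12-0",
		"/var/lib/minikube/images/kube-proxy_v1.30.1",
		"/var/lib/minikube/images/coredns_v1.11.1",
	}
	for _, t := range tarballs {
		out, err := exec.Command("sudo", "podman", "load", "-i", t).CombinedOutput()
		if err != nil {
			log.Fatalf("loading %s: %v\n%s", t, err, out)
		}
		fmt.Printf("loaded %s\n", t)
	}
}
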
	I0617 12:02:47.414195  164809 kubeadm.go:928] updating node { 192.168.39.173 8443 v1.30.1 crio true true} ...
	I0617 12:02:47.414359  164809 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-152830 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.173
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:no-preload-152830 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
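
The kubelet unit drop-in printed above is generated from the node's Kubernetes version, name, and IP. A hypothetical rendering of such a drop-in with text/template; the field names are mine and the values are copied from the log:

// Sketch: render a kubelet systemd drop-in from node settings.
package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	data := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.30.1", "no-preload-152830", "192.168.39.173"}
	tmpl := template.Must(template.New("kubelet").Parse(unit))
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
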
	I0617 12:02:47.414451  164809 ssh_runner.go:195] Run: crio config
	I0617 12:02:47.466472  164809 cni.go:84] Creating CNI manager for ""
	I0617 12:02:47.466493  164809 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:02:47.466503  164809 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 12:02:47.466531  164809 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.173 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-152830 NodeName:no-preload-152830 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.173"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.173 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0617 12:02:47.466716  164809 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.173
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-152830"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.173
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.173"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0617 12:02:47.466793  164809 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0617 12:02:47.478163  164809 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 12:02:47.478255  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 12:02:47.488014  164809 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0617 12:02:47.505143  164809 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 12:02:47.522481  164809 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0617 12:02:47.545714  164809 ssh_runner.go:195] Run: grep 192.168.39.173	control-plane.minikube.internal$ /etc/hosts
	I0617 12:02:47.551976  164809 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.173	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:02:47.565374  164809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:02:47.694699  164809 ssh_runner.go:195] Run: sudo systemctl start kubelet
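
Both /etc/hosts updates above (host.minikube.internal earlier, control-plane.minikube.internal here) follow the same pattern: drop any existing line for the name, append the new mapping, and copy the file back with sudo. A native-Go sketch of that transformation applied to an in-memory copy of the file:

// Sketch of the grep -v + echo hosts-file rewrite used in the log.
package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry removes any line already ending in "\t<host>" and appends "ip\thost".
func ensureHostsEntry(hosts, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
	fmt.Print(ensureHostsEntry(hosts, "192.168.39.173", "control-plane.minikube.internal"))
}
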
	I0617 12:02:47.714017  164809 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830 for IP: 192.168.39.173
	I0617 12:02:47.714044  164809 certs.go:194] generating shared ca certs ...
	I0617 12:02:47.714064  164809 certs.go:226] acquiring lock for ca certs: {Name:mkc28eb5421bdfb1631820073ca3e7c4e42a3845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:02:47.714260  164809 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key
	I0617 12:02:47.714321  164809 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key
	I0617 12:02:47.714335  164809 certs.go:256] generating profile certs ...
	I0617 12:02:47.714419  164809 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/client.key
	I0617 12:02:47.714504  164809 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/apiserver.key.d2d5b47b
	I0617 12:02:47.714547  164809 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/proxy-client.key
	I0617 12:02:47.714655  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem (1338 bytes)
	W0617 12:02:47.714684  164809 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174_empty.pem, impossibly tiny 0 bytes
	I0617 12:02:47.714693  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 12:02:47.714719  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/ca.pem (1082 bytes)
	I0617 12:02:47.714745  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/cert.pem (1123 bytes)
	I0617 12:02:47.714780  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/certs/key.pem (1679 bytes)
	I0617 12:02:47.714815  164809 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem (1708 bytes)
	I0617 12:02:47.715578  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 12:02:47.767301  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0617 12:02:47.804542  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 12:02:47.842670  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0617 12:02:47.874533  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0617 12:02:47.909752  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 12:02:47.940097  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 12:02:47.965441  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0617 12:02:47.990862  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 12:02:48.015935  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/certs/120174.pem --> /usr/share/ca-certificates/120174.pem (1338 bytes)
	I0617 12:02:48.041408  164809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/ssl/certs/1201742.pem --> /usr/share/ca-certificates/1201742.pem (1708 bytes)
	I0617 12:02:48.066557  164809 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 12:02:48.084630  164809 ssh_runner.go:195] Run: openssl version
	I0617 12:02:48.091098  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120174.pem && ln -fs /usr/share/ca-certificates/120174.pem /etc/ssl/certs/120174.pem"
	I0617 12:02:48.102447  164809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120174.pem
	I0617 12:02:48.107238  164809 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 10:56 /usr/share/ca-certificates/120174.pem
	I0617 12:02:48.107299  164809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120174.pem
	I0617 12:02:48.113682  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120174.pem /etc/ssl/certs/51391683.0"
	I0617 12:02:48.124472  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1201742.pem && ln -fs /usr/share/ca-certificates/1201742.pem /etc/ssl/certs/1201742.pem"
	I0617 12:02:48.135897  164809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1201742.pem
	I0617 12:02:48.140859  164809 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 10:56 /usr/share/ca-certificates/1201742.pem
	I0617 12:02:48.140915  164809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1201742.pem
	I0617 12:02:48.147113  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1201742.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 12:02:48.158192  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 12:02:48.169483  164809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:48.174241  164809 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:48.174294  164809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:02:48.180093  164809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 12:02:48.191082  164809 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 12:02:48.195770  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 12:02:48.201743  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 12:02:48.207452  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 12:02:48.213492  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 12:02:48.219435  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 12:02:48.226202  164809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
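
The openssl x509 -checkend 86400 calls above ask whether each control-plane certificate expires within the next 24 hours. The same check can be done natively with crypto/x509; the certificate path below is a placeholder:

// Sketch: report whether a PEM certificate expires within the given duration.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when the certificate's NotAfter falls inside the next d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("apiserver.crt", 24*time.Hour) // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}
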
	I0617 12:02:48.232291  164809 kubeadm.go:391] StartCluster: {Name:no-preload-152830 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.1 ClusterName:no-preload-152830 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:02:48.232409  164809 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0617 12:02:48.232448  164809 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:02:48.272909  164809 cri.go:89] found id: ""
	I0617 12:02:48.272972  164809 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0617 12:02:48.284185  164809 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0617 12:02:48.284212  164809 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0617 12:02:48.284221  164809 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0617 12:02:48.284266  164809 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0617 12:02:48.294653  164809 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0617 12:02:48.296091  164809 kubeconfig.go:125] found "no-preload-152830" server: "https://192.168.39.173:8443"
	I0617 12:02:48.298438  164809 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0617 12:02:48.307905  164809 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.173
	I0617 12:02:48.307932  164809 kubeadm.go:1154] stopping kube-system containers ...
	I0617 12:02:48.307945  164809 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0617 12:02:48.307990  164809 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:02:48.356179  164809 cri.go:89] found id: ""
	I0617 12:02:48.356247  164809 ssh_runner.go:195] Run: sudo systemctl stop kubelet
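
"Stopping kube-system containers" first lists container IDs with crictl filtered by the io.kubernetes.pod.namespace=kube-system label (here it finds none, hence found id: ""), then stops the kubelet. A sketch of that listing step, which requires crictl and root and is illustrative only:

// Sketch: list kube-system container IDs via crictl.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		log.Fatal(err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Println("no kube-system containers found")
		return
	}
	fmt.Println("kube-system container ids:", ids)
}
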
	I0617 12:02:49.333637  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:51.333927  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:48.127215  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:48.627013  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:49.126439  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:49.626831  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:50.126521  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:50.627178  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:51.126830  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:51.627091  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:52.127343  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:52.626635  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:48.724828  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:51.225321  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:48.377824  164809 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:02:48.389213  164809 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:02:48.389236  164809 kubeadm.go:156] found existing configuration files:
	
	I0617 12:02:48.389287  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:02:48.398559  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:02:48.398605  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:02:48.408243  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:02:48.417407  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:02:48.417451  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:02:48.427333  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:02:48.436224  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:02:48.436278  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:02:48.445378  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:02:48.454119  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:02:48.454170  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 12:02:48.463097  164809 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:02:48.472479  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:48.584018  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:49.392310  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:49.599840  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:49.662845  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
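Rather than a full `kubeadm init`, the restart regenerates the control plane piece by piece: certificates, kubeconfigs, the kubelet bootstrap, the static-pod manifests, and the local etcd manifest each come from a dedicated init phase run against the staged config. A condensed sketch of the same sequence (paths, binary version v1.30.1, and the PATH override are taken from the log lines above):

    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      # relies on word splitting of $phase into the phase name and its "all"/"local" argument
      sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done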
	I0617 12:02:49.794357  164809 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:02:49.794459  164809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:50.295507  164809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:50.794968  164809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:50.832967  164809 api_server.go:72] duration metric: took 1.038610813s to wait for apiserver process to appear ...
	I0617 12:02:50.832993  164809 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:02:50.833017  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:50.833494  164809 api_server.go:269] stopped: https://192.168.39.173:8443/healthz: Get "https://192.168.39.173:8443/healthz": dial tcp 192.168.39.173:8443: connect: connection refused
	I0617 12:02:51.333910  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:53.534213  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:02:53.534246  164809 api_server.go:103] status: https://192.168.39.173:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:02:53.534265  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:53.579857  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0617 12:02:53.579887  164809 api_server.go:103] status: https://192.168.39.173:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0617 12:02:53.833207  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:53.863430  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:02:53.863485  164809 api_server.go:103] status: https://192.168.39.173:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:02:54.333557  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:54.342474  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0617 12:02:54.342507  164809 api_server.go:103] status: https://192.168.39.173:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0617 12:02:54.834092  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:02:54.839578  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 200:
	ok
	I0617 12:02:54.854075  164809 api_server.go:141] control plane version: v1.30.1
	I0617 12:02:54.854113  164809 api_server.go:131] duration metric: took 4.021112065s to wait for apiserver health ...
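The healthz poll above goes through the usual restart progression: connection refused while the apiserver binds, 403 for the anonymous probe while RBAC bootstraps, 500 while the rbac/ and scheduling/ post-start hooks finish, then a plain 200 "ok". A minimal way to watch the same endpoint by hand; -k skips verification of the self-signed serving cert, and appending ?verbose shows the per-check [+]/[-] breakdown even once the endpoint reports healthy:

    until [ "$(curl -sk https://192.168.39.173:8443/healthz)" = "ok" ]; do
      sleep 0.5
    done
    curl -sk "https://192.168.39.173:8443/healthz?verbose"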
	I0617 12:02:54.854124  164809 cni.go:84] Creating CNI manager for ""
	I0617 12:02:54.854133  164809 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:02:54.856029  164809 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0617 12:02:53.334898  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:55.834490  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:53.126693  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:53.627110  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:54.126653  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:54.626424  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:55.127113  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:55.627373  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:56.126415  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:56.627329  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:57.126797  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:57.627313  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
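The repeated pgrep lines are simply a poll for a kube-apiserver process whose full command line mentions "minikube" (-f matches the full command line, -x requires an exact pattern match, -n picks the newest match). Wrapped in a timeout, the same wait looks like:

    timeout 120 bash -c \
      'until sudo pgrep -xnf "kube-apiserver.*minikube.*" >/dev/null; do sleep 0.5; done'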
	I0617 12:02:53.723948  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:56.225000  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:54.857252  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0617 12:02:54.914636  164809 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
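For the kvm2/crio combination minikube falls back to a plain bridge CNI and writes a single conflist into /etc/cni/net.d. The log only records the destination path and size (496 bytes), so the snippet below is an illustrative bridge + host-local conflist of the same general shape, not the exact file that was written; the subnet and plugin options are assumptions:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }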
	I0617 12:02:54.961745  164809 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:02:54.975140  164809 system_pods.go:59] 8 kube-system pods found
	I0617 12:02:54.975183  164809 system_pods.go:61] "coredns-7db6d8ff4d-7lfns" [83cf7962-1aa7-4de6-9e77-a03dee972ead] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0617 12:02:54.975192  164809 system_pods.go:61] "etcd-no-preload-152830" [27dace2b-9d7d-44e8-8f86-b20ce49c8afa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0617 12:02:54.975202  164809 system_pods.go:61] "kube-apiserver-no-preload-152830" [c102caaf-2289-4171-8b1f-89df4f6edf39] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0617 12:02:54.975213  164809 system_pods.go:61] "kube-controller-manager-no-preload-152830" [534a8f45-7886-4e12-b728-df686c2f8668] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0617 12:02:54.975220  164809 system_pods.go:61] "kube-proxy-bblgc" [70fa474e-cb6a-4e31-b978-78b47e9952a8] Running
	I0617 12:02:54.975228  164809 system_pods.go:61] "kube-scheduler-no-preload-152830" [17d696bd-55b3-4080-a63d-944216adf1d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0617 12:02:54.975240  164809 system_pods.go:61] "metrics-server-569cc877fc-97tqn" [0ce37c88-fd22-4001-96c4-d0f5239c0fd4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:02:54.975253  164809 system_pods.go:61] "storage-provisioner" [61dafb85-965b-4961-b9e1-e3202795caef] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0617 12:02:54.975268  164809 system_pods.go:74] duration metric: took 13.492652ms to wait for pod list to return data ...
	I0617 12:02:54.975279  164809 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:02:54.980820  164809 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:02:54.980842  164809 node_conditions.go:123] node cpu capacity is 2
	I0617 12:02:54.980854  164809 node_conditions.go:105] duration metric: took 5.568037ms to run NodePressure ...
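Before running the addon phase below, the tool lists kube-system pods and sanity-checks node capacity and pressure conditions. The same facts are visible with kubectl against the restored cluster (node name taken from the log):

    kubectl get node no-preload-152830 -o jsonpath='{.status.capacity}'; echo
    kubectl describe node no-preload-152830 | sed -n '/^Conditions:/,/^Addresses:/p'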
	I0617 12:02:54.980873  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 12:02:55.284669  164809 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0617 12:02:55.289433  164809 kubeadm.go:733] kubelet initialised
	I0617 12:02:55.289453  164809 kubeadm.go:734] duration metric: took 4.759785ms waiting for restarted kubelet to initialise ...
	I0617 12:02:55.289461  164809 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:02:55.294149  164809 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7lfns" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:55.298081  164809 pod_ready.go:97] node "no-preload-152830" hosting pod "coredns-7db6d8ff4d-7lfns" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.298100  164809 pod_ready.go:81] duration metric: took 3.929974ms for pod "coredns-7db6d8ff4d-7lfns" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:55.298109  164809 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-152830" hosting pod "coredns-7db6d8ff4d-7lfns" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.298116  164809 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:55.302552  164809 pod_ready.go:97] node "no-preload-152830" hosting pod "etcd-no-preload-152830" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.302572  164809 pod_ready.go:81] duration metric: took 4.444579ms for pod "etcd-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:55.302580  164809 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-152830" hosting pod "etcd-no-preload-152830" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.302585  164809 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:55.306375  164809 pod_ready.go:97] node "no-preload-152830" hosting pod "kube-apiserver-no-preload-152830" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.306394  164809 pod_ready.go:81] duration metric: took 3.804134ms for pod "kube-apiserver-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	E0617 12:02:55.306402  164809 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-152830" hosting pod "kube-apiserver-no-preload-152830" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-152830" has status "Ready":"False"
	I0617 12:02:55.306407  164809 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:02:57.313002  164809 pod_ready.go:102] pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:57.834719  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:00.334129  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:58.126744  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:58.627050  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:59.127300  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:02:59.626694  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:00.127092  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:00.127182  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:00.166116  165698 cri.go:89] found id: ""
	I0617 12:03:00.166145  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.166153  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:00.166159  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:00.166208  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:00.200990  165698 cri.go:89] found id: ""
	I0617 12:03:00.201020  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.201029  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:00.201034  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:00.201086  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:00.236394  165698 cri.go:89] found id: ""
	I0617 12:03:00.236422  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.236430  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:00.236438  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:00.236496  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:00.274257  165698 cri.go:89] found id: ""
	I0617 12:03:00.274285  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.274293  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:00.274299  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:00.274350  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:00.307425  165698 cri.go:89] found id: ""
	I0617 12:03:00.307452  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.307481  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:00.307490  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:00.307557  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:00.343420  165698 cri.go:89] found id: ""
	I0617 12:03:00.343446  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.343472  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:00.343480  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:00.343541  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:00.378301  165698 cri.go:89] found id: ""
	I0617 12:03:00.378325  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.378333  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:00.378338  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:00.378383  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:00.414985  165698 cri.go:89] found id: ""
	I0617 12:03:00.415011  165698 logs.go:276] 0 containers: []
	W0617 12:03:00.415018  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:00.415033  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:00.415090  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:00.468230  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:00.468262  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:00.481970  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:00.481998  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:00.612881  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:00.612911  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:00.612929  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:00.676110  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:00.676145  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
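When none of the expected control-plane containers exist (the empty `found id: ""` results above, from the cluster still coming up on v1.20.0 binaries in this interleaved log), the loop degrades to evidence collection: kubelet and CRI-O journals, dmesg, `kubectl describe nodes`, and a raw container listing. Collecting the same evidence by hand on the node:

    sudo crictl ps -a --quiet --name=kube-apiserver    # empty output => the apiserver container was never created
    sudo journalctl -u kubelet -n 400 --no-pager
    sudo journalctl -u crio -n 400 --no-pager
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo crictl ps -a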
	I0617 12:02:58.725617  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:01.225227  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:02:59.818063  164809 pod_ready.go:102] pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:02.312898  164809 pod_ready.go:102] pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:03.313300  164809 pod_ready.go:92] pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:03:03.313332  164809 pod_ready.go:81] duration metric: took 8.006915719s for pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:03:03.313347  164809 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bblgc" in "kube-system" namespace to be "Ready" ...
	I0617 12:03:03.319094  164809 pod_ready.go:92] pod "kube-proxy-bblgc" in "kube-system" namespace has status "Ready":"True"
	I0617 12:03:03.319116  164809 pod_ready.go:81] duration metric: took 5.762584ms for pod "kube-proxy-bblgc" in "kube-system" namespace to be "Ready" ...
	I0617 12:03:03.319137  164809 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-152830" in "kube-system" namespace to be "Ready" ...
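The pod_ready helpers poll each system-critical pod for its Ready condition, and bail out early with the "skipping!" errors shown earlier while the node itself still reports Ready=False. Roughly the same checks with kubectl, using pod and node names taken from the log:

    kubectl -n kube-system wait --for=condition=Ready pod/kube-proxy-bblgc --timeout=4m
    kubectl -n kube-system get pod metrics-server-569cc877fc-97tqn \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'; echo
    kubectl get node no-preload-152830 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'; echo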
	I0617 12:03:02.833031  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:04.834158  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:07.334894  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:03.216960  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:03.231208  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:03.231277  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:03.267056  165698 cri.go:89] found id: ""
	I0617 12:03:03.267088  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.267096  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:03.267103  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:03.267152  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:03.302797  165698 cri.go:89] found id: ""
	I0617 12:03:03.302832  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.302844  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:03.302852  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:03.302905  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:03.343401  165698 cri.go:89] found id: ""
	I0617 12:03:03.343435  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.343445  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:03.343465  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:03.343530  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:03.380841  165698 cri.go:89] found id: ""
	I0617 12:03:03.380871  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.380883  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:03.380890  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:03.380951  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:03.420098  165698 cri.go:89] found id: ""
	I0617 12:03:03.420130  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.420142  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:03.420150  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:03.420213  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:03.458476  165698 cri.go:89] found id: ""
	I0617 12:03:03.458506  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.458515  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:03.458521  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:03.458586  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:03.497127  165698 cri.go:89] found id: ""
	I0617 12:03:03.497156  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.497164  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:03.497170  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:03.497217  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:03.538759  165698 cri.go:89] found id: ""
	I0617 12:03:03.538794  165698 logs.go:276] 0 containers: []
	W0617 12:03:03.538806  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:03.538825  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:03.538841  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:03.584701  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:03.584743  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:03.636981  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:03.637030  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:03.670032  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:03.670077  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:03.757012  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:03.757038  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:03.757056  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:06.327680  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:06.341998  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:06.342068  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:06.383353  165698 cri.go:89] found id: ""
	I0617 12:03:06.383385  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.383394  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:06.383400  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:06.383448  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:06.418806  165698 cri.go:89] found id: ""
	I0617 12:03:06.418850  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.418862  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:06.418870  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:06.418945  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:06.458151  165698 cri.go:89] found id: ""
	I0617 12:03:06.458192  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.458204  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:06.458219  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:06.458289  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:06.496607  165698 cri.go:89] found id: ""
	I0617 12:03:06.496637  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.496645  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:06.496651  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:06.496703  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:06.534900  165698 cri.go:89] found id: ""
	I0617 12:03:06.534938  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.534951  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:06.534959  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:06.535017  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:06.572388  165698 cri.go:89] found id: ""
	I0617 12:03:06.572413  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.572422  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:06.572428  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:06.572496  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:06.608072  165698 cri.go:89] found id: ""
	I0617 12:03:06.608104  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.608115  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:06.608121  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:06.608175  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:06.647727  165698 cri.go:89] found id: ""
	I0617 12:03:06.647760  165698 logs.go:276] 0 containers: []
	W0617 12:03:06.647772  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:06.647784  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:06.647800  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:06.720887  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:06.720919  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:06.761128  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:06.761153  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:06.815524  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:06.815557  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:06.830275  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:06.830304  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:06.907861  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:03.725650  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:06.225601  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:05.327062  164809 pod_ready.go:102] pod "kube-scheduler-no-preload-152830" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:07.325033  164809 pod_ready.go:92] pod "kube-scheduler-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:03:07.325061  164809 pod_ready.go:81] duration metric: took 4.005914462s for pod "kube-scheduler-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:03:07.325072  164809 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace to be "Ready" ...
	I0617 12:03:09.835374  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:12.334481  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:09.408117  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:09.420916  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:09.420978  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:09.453830  165698 cri.go:89] found id: ""
	I0617 12:03:09.453860  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.453870  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:09.453878  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:09.453937  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:09.492721  165698 cri.go:89] found id: ""
	I0617 12:03:09.492756  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.492766  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:09.492775  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:09.492849  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:09.530956  165698 cri.go:89] found id: ""
	I0617 12:03:09.530984  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.530995  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:09.531001  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:09.531067  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:09.571534  165698 cri.go:89] found id: ""
	I0617 12:03:09.571564  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.571576  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:09.571584  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:09.571646  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:09.609740  165698 cri.go:89] found id: ""
	I0617 12:03:09.609776  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.609788  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:09.609797  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:09.609864  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:09.649958  165698 cri.go:89] found id: ""
	I0617 12:03:09.649998  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.650010  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:09.650020  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:09.650087  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:09.706495  165698 cri.go:89] found id: ""
	I0617 12:03:09.706532  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.706544  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:09.706553  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:09.706638  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:09.742513  165698 cri.go:89] found id: ""
	I0617 12:03:09.742541  165698 logs.go:276] 0 containers: []
	W0617 12:03:09.742549  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:09.742559  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:09.742571  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:09.756470  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:09.756502  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:09.840878  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:09.840897  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:09.840913  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:09.922329  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:09.922370  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:09.967536  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:09.967573  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:12.521031  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:12.534507  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:12.534595  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:12.569895  165698 cri.go:89] found id: ""
	I0617 12:03:12.569930  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.569942  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:12.569950  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:12.570005  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:12.606857  165698 cri.go:89] found id: ""
	I0617 12:03:12.606888  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.606900  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:12.606922  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:12.606998  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:12.640781  165698 cri.go:89] found id: ""
	I0617 12:03:12.640807  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.640818  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:12.640826  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:12.640910  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:12.674097  165698 cri.go:89] found id: ""
	I0617 12:03:12.674124  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.674134  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:12.674142  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:12.674201  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:12.708662  165698 cri.go:89] found id: ""
	I0617 12:03:12.708689  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.708699  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:12.708707  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:12.708791  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:12.744891  165698 cri.go:89] found id: ""
	I0617 12:03:12.744927  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.744938  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:12.744947  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:12.745010  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:12.778440  165698 cri.go:89] found id: ""
	I0617 12:03:12.778466  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.778474  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:12.778480  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:12.778528  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:12.814733  165698 cri.go:89] found id: ""
	I0617 12:03:12.814762  165698 logs.go:276] 0 containers: []
	W0617 12:03:12.814770  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:12.814780  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:12.814820  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:12.887741  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:12.887762  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:12.887775  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:12.968439  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:12.968476  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:08.725485  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:11.224357  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:09.331004  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:11.331666  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:13.332269  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:14.335086  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:16.836397  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:13.008926  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:13.008955  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:13.060432  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:13.060468  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:15.575450  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:15.589178  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:15.589244  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:15.625554  165698 cri.go:89] found id: ""
	I0617 12:03:15.625589  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.625601  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:15.625608  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:15.625668  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:15.659023  165698 cri.go:89] found id: ""
	I0617 12:03:15.659054  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.659066  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:15.659074  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:15.659138  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:15.693777  165698 cri.go:89] found id: ""
	I0617 12:03:15.693803  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.693811  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:15.693817  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:15.693875  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:15.729098  165698 cri.go:89] found id: ""
	I0617 12:03:15.729133  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.729141  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:15.729147  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:15.729194  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:15.762639  165698 cri.go:89] found id: ""
	I0617 12:03:15.762668  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.762679  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:15.762687  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:15.762744  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:15.797446  165698 cri.go:89] found id: ""
	I0617 12:03:15.797475  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.797484  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:15.797489  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:15.797537  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:15.832464  165698 cri.go:89] found id: ""
	I0617 12:03:15.832503  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.832513  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:15.832521  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:15.832579  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:15.867868  165698 cri.go:89] found id: ""
	I0617 12:03:15.867898  165698 logs.go:276] 0 containers: []
	W0617 12:03:15.867906  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:15.867916  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:15.867928  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:15.882151  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:15.882181  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:15.946642  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:15.946666  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:15.946682  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:16.027062  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:16.027098  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:16.082704  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:16.082735  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:13.725854  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:16.225670  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:15.333470  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:17.832368  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:19.334102  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:21.334529  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:18.651554  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:18.665096  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:18.665166  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:18.703099  165698 cri.go:89] found id: ""
	I0617 12:03:18.703127  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.703138  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:18.703147  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:18.703210  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:18.737945  165698 cri.go:89] found id: ""
	I0617 12:03:18.737985  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.737997  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:18.738005  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:18.738079  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:18.777145  165698 cri.go:89] found id: ""
	I0617 12:03:18.777172  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.777181  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:18.777187  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:18.777255  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:18.813171  165698 cri.go:89] found id: ""
	I0617 12:03:18.813198  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.813207  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:18.813213  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:18.813270  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:18.854459  165698 cri.go:89] found id: ""
	I0617 12:03:18.854490  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.854501  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:18.854510  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:18.854607  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:18.893668  165698 cri.go:89] found id: ""
	I0617 12:03:18.893703  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.893712  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:18.893718  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:18.893796  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:18.928919  165698 cri.go:89] found id: ""
	I0617 12:03:18.928971  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.928983  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:18.928993  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:18.929068  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:18.965770  165698 cri.go:89] found id: ""
	I0617 12:03:18.965800  165698 logs.go:276] 0 containers: []
	W0617 12:03:18.965808  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:18.965817  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:18.965829  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:19.020348  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:19.020392  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:19.034815  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:19.034845  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:19.109617  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:19.109643  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:19.109660  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:19.186843  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:19.186890  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:21.732720  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:21.747032  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:21.747113  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:21.789962  165698 cri.go:89] found id: ""
	I0617 12:03:21.789991  165698 logs.go:276] 0 containers: []
	W0617 12:03:21.789999  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:21.790011  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:21.790066  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:21.833865  165698 cri.go:89] found id: ""
	I0617 12:03:21.833903  165698 logs.go:276] 0 containers: []
	W0617 12:03:21.833913  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:21.833921  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:21.833985  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:21.903891  165698 cri.go:89] found id: ""
	I0617 12:03:21.903929  165698 logs.go:276] 0 containers: []
	W0617 12:03:21.903941  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:21.903950  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:21.904020  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:21.941369  165698 cri.go:89] found id: ""
	I0617 12:03:21.941396  165698 logs.go:276] 0 containers: []
	W0617 12:03:21.941407  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:21.941415  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:21.941473  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:21.977767  165698 cri.go:89] found id: ""
	I0617 12:03:21.977797  165698 logs.go:276] 0 containers: []
	W0617 12:03:21.977808  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:21.977817  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:21.977880  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:22.016422  165698 cri.go:89] found id: ""
	I0617 12:03:22.016450  165698 logs.go:276] 0 containers: []
	W0617 12:03:22.016463  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:22.016471  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:22.016536  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:22.056871  165698 cri.go:89] found id: ""
	I0617 12:03:22.056904  165698 logs.go:276] 0 containers: []
	W0617 12:03:22.056914  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:22.056922  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:22.056982  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:22.093244  165698 cri.go:89] found id: ""
	I0617 12:03:22.093288  165698 logs.go:276] 0 containers: []
	W0617 12:03:22.093300  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:22.093313  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:22.093331  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:22.144722  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:22.144756  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:22.159047  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:22.159084  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:22.232077  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:22.232100  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:22.232112  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:22.308241  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:22.308276  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:18.724648  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:21.224616  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:19.832543  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:21.838952  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:23.834640  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:26.336770  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:24.851740  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:24.866597  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:24.866659  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:24.902847  165698 cri.go:89] found id: ""
	I0617 12:03:24.902879  165698 logs.go:276] 0 containers: []
	W0617 12:03:24.902892  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:24.902900  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:24.902973  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:24.940042  165698 cri.go:89] found id: ""
	I0617 12:03:24.940079  165698 logs.go:276] 0 containers: []
	W0617 12:03:24.940088  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:24.940094  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:24.940150  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:24.975160  165698 cri.go:89] found id: ""
	I0617 12:03:24.975190  165698 logs.go:276] 0 containers: []
	W0617 12:03:24.975202  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:24.975211  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:24.975280  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:25.012618  165698 cri.go:89] found id: ""
	I0617 12:03:25.012649  165698 logs.go:276] 0 containers: []
	W0617 12:03:25.012657  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:25.012663  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:25.012712  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:25.051166  165698 cri.go:89] found id: ""
	I0617 12:03:25.051210  165698 logs.go:276] 0 containers: []
	W0617 12:03:25.051223  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:25.051230  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:25.051309  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:25.090112  165698 cri.go:89] found id: ""
	I0617 12:03:25.090144  165698 logs.go:276] 0 containers: []
	W0617 12:03:25.090156  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:25.090164  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:25.090230  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:25.133258  165698 cri.go:89] found id: ""
	I0617 12:03:25.133285  165698 logs.go:276] 0 containers: []
	W0617 12:03:25.133294  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:25.133301  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:25.133366  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:25.177445  165698 cri.go:89] found id: ""
	I0617 12:03:25.177473  165698 logs.go:276] 0 containers: []
	W0617 12:03:25.177481  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:25.177490  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:25.177505  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:25.250685  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:25.250710  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:25.250727  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:25.335554  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:25.335586  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:25.377058  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:25.377093  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:25.431425  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:25.431471  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:27.945063  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:27.959396  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:27.959469  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:23.725126  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:26.224114  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:28.224895  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:23.840550  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:26.333142  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:28.334577  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:28.337133  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:30.834142  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:27.994554  165698 cri.go:89] found id: ""
	I0617 12:03:27.994582  165698 logs.go:276] 0 containers: []
	W0617 12:03:27.994591  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:27.994598  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:27.994660  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:28.030168  165698 cri.go:89] found id: ""
	I0617 12:03:28.030200  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.030208  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:28.030215  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:28.030263  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:28.066213  165698 cri.go:89] found id: ""
	I0617 12:03:28.066244  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.066255  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:28.066261  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:28.066322  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:28.102855  165698 cri.go:89] found id: ""
	I0617 12:03:28.102880  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.102888  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:28.102894  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:28.102942  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:28.138698  165698 cri.go:89] found id: ""
	I0617 12:03:28.138734  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.138748  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:28.138755  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:28.138815  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:28.173114  165698 cri.go:89] found id: ""
	I0617 12:03:28.173140  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.173148  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:28.173154  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:28.173213  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:28.208901  165698 cri.go:89] found id: ""
	I0617 12:03:28.208936  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.208947  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:28.208955  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:28.209016  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:28.244634  165698 cri.go:89] found id: ""
	I0617 12:03:28.244667  165698 logs.go:276] 0 containers: []
	W0617 12:03:28.244678  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:28.244687  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:28.244699  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:28.300303  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:28.300336  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:28.314227  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:28.314272  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:28.394322  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:28.394350  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:28.394367  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:28.483381  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:28.483413  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:31.026433  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:31.040820  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:31.040888  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:31.086409  165698 cri.go:89] found id: ""
	I0617 12:03:31.086440  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.086453  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:31.086461  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:31.086548  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:31.122810  165698 cri.go:89] found id: ""
	I0617 12:03:31.122836  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.122843  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:31.122849  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:31.122910  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:31.157634  165698 cri.go:89] found id: ""
	I0617 12:03:31.157669  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.157680  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:31.157687  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:31.157750  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:31.191498  165698 cri.go:89] found id: ""
	I0617 12:03:31.191529  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.191541  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:31.191549  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:31.191619  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:31.225575  165698 cri.go:89] found id: ""
	I0617 12:03:31.225599  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.225609  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:31.225616  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:31.225670  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:31.269780  165698 cri.go:89] found id: ""
	I0617 12:03:31.269810  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.269819  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:31.269825  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:31.269874  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:31.307689  165698 cri.go:89] found id: ""
	I0617 12:03:31.307717  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.307726  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:31.307733  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:31.307789  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:31.344160  165698 cri.go:89] found id: ""
	I0617 12:03:31.344190  165698 logs.go:276] 0 containers: []
	W0617 12:03:31.344200  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:31.344210  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:31.344223  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:31.397627  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:31.397667  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:31.411316  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:31.411347  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:31.486258  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:31.486280  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:31.486297  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:31.568067  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:31.568106  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:30.725183  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:33.224294  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:30.834377  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:33.333070  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:33.335067  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:35.335626  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:37.336117  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:34.111424  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:34.127178  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:34.127255  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:34.165900  165698 cri.go:89] found id: ""
	I0617 12:03:34.165936  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.165947  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:34.165955  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:34.166042  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:34.203556  165698 cri.go:89] found id: ""
	I0617 12:03:34.203588  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.203597  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:34.203606  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:34.203659  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:34.243418  165698 cri.go:89] found id: ""
	I0617 12:03:34.243478  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.243490  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:34.243499  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:34.243661  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:34.281542  165698 cri.go:89] found id: ""
	I0617 12:03:34.281569  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.281577  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:34.281582  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:34.281635  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:34.316304  165698 cri.go:89] found id: ""
	I0617 12:03:34.316333  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.316341  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:34.316347  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:34.316403  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:34.357416  165698 cri.go:89] found id: ""
	I0617 12:03:34.357455  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.357467  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:34.357476  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:34.357547  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:34.392069  165698 cri.go:89] found id: ""
	I0617 12:03:34.392101  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.392112  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:34.392120  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:34.392185  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:34.427203  165698 cri.go:89] found id: ""
	I0617 12:03:34.427235  165698 logs.go:276] 0 containers: []
	W0617 12:03:34.427247  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:34.427258  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:34.427317  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:34.441346  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:34.441375  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:34.519306  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:34.519331  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:34.519349  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:34.598802  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:34.598843  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:34.637521  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:34.637554  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:37.191259  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:37.205882  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:37.205947  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:37.242175  165698 cri.go:89] found id: ""
	I0617 12:03:37.242202  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.242209  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:37.242215  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:37.242278  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:37.278004  165698 cri.go:89] found id: ""
	I0617 12:03:37.278029  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.278037  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:37.278043  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:37.278091  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:37.322148  165698 cri.go:89] found id: ""
	I0617 12:03:37.322179  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.322190  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:37.322198  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:37.322259  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:37.358612  165698 cri.go:89] found id: ""
	I0617 12:03:37.358638  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.358649  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:37.358657  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:37.358718  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:37.393070  165698 cri.go:89] found id: ""
	I0617 12:03:37.393104  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.393115  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:37.393123  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:37.393187  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:37.429420  165698 cri.go:89] found id: ""
	I0617 12:03:37.429452  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.429465  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:37.429475  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:37.429541  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:37.464485  165698 cri.go:89] found id: ""
	I0617 12:03:37.464509  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.464518  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:37.464523  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:37.464584  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:37.501283  165698 cri.go:89] found id: ""
	I0617 12:03:37.501308  165698 logs.go:276] 0 containers: []
	W0617 12:03:37.501316  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:37.501326  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:37.501338  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:37.552848  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:37.552889  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:37.566715  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:37.566746  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:37.643560  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:37.643584  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:37.643601  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:37.722895  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:37.722935  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:35.225442  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:37.225962  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:35.836693  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:38.332297  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:39.834655  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:42.333686  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:40.268199  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:40.281832  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:40.281905  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:40.317094  165698 cri.go:89] found id: ""
	I0617 12:03:40.317137  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.317150  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:40.317159  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:40.317229  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:40.355786  165698 cri.go:89] found id: ""
	I0617 12:03:40.355819  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.355829  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:40.355836  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:40.355903  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:40.394282  165698 cri.go:89] found id: ""
	I0617 12:03:40.394312  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.394323  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:40.394332  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:40.394388  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:40.433773  165698 cri.go:89] found id: ""
	I0617 12:03:40.433806  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.433817  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:40.433825  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:40.433875  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:40.469937  165698 cri.go:89] found id: ""
	I0617 12:03:40.469973  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.469985  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:40.469998  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:40.470067  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:40.503565  165698 cri.go:89] found id: ""
	I0617 12:03:40.503590  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.503599  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:40.503605  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:40.503654  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:40.538349  165698 cri.go:89] found id: ""
	I0617 12:03:40.538383  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.538394  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:40.538402  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:40.538461  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:40.576036  165698 cri.go:89] found id: ""
	I0617 12:03:40.576066  165698 logs.go:276] 0 containers: []
	W0617 12:03:40.576075  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:40.576085  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:40.576100  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:40.617804  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:40.617833  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:40.668126  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:40.668162  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:40.682618  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:40.682655  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:40.759597  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:40.759619  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:40.759638  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:39.725534  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:42.223320  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:40.336855  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:42.832597  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:44.334430  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:46.835809  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:43.343404  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:43.357886  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:43.357953  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:43.398262  165698 cri.go:89] found id: ""
	I0617 12:03:43.398290  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.398301  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:43.398310  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:43.398370  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:43.432241  165698 cri.go:89] found id: ""
	I0617 12:03:43.432272  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.432280  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:43.432289  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:43.432348  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:43.466210  165698 cri.go:89] found id: ""
	I0617 12:03:43.466234  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.466241  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:43.466247  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:43.466294  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:43.501677  165698 cri.go:89] found id: ""
	I0617 12:03:43.501711  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.501723  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:43.501731  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:43.501793  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:43.541826  165698 cri.go:89] found id: ""
	I0617 12:03:43.541860  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.541870  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:43.541876  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:43.541941  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:43.576940  165698 cri.go:89] found id: ""
	I0617 12:03:43.576962  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.576970  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:43.576975  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:43.577022  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:43.612592  165698 cri.go:89] found id: ""
	I0617 12:03:43.612627  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.612635  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:43.612643  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:43.612694  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:43.647141  165698 cri.go:89] found id: ""
	I0617 12:03:43.647176  165698 logs.go:276] 0 containers: []
	W0617 12:03:43.647188  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:43.647202  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:43.647220  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:43.698248  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:43.698283  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:43.711686  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:43.711714  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:43.787077  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:43.787101  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:43.787115  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:43.861417  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:43.861455  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:46.402594  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:46.417108  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:46.417185  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:46.453910  165698 cri.go:89] found id: ""
	I0617 12:03:46.453941  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.453952  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:46.453960  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:46.454020  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:46.487239  165698 cri.go:89] found id: ""
	I0617 12:03:46.487268  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.487280  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:46.487288  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:46.487353  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:46.521824  165698 cri.go:89] found id: ""
	I0617 12:03:46.521850  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.521859  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:46.521866  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:46.521929  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:46.557247  165698 cri.go:89] found id: ""
	I0617 12:03:46.557274  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.557282  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:46.557289  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:46.557350  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:46.600354  165698 cri.go:89] found id: ""
	I0617 12:03:46.600383  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.600393  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:46.600402  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:46.600477  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:46.638153  165698 cri.go:89] found id: ""
	I0617 12:03:46.638180  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.638189  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:46.638197  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:46.638255  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:46.672636  165698 cri.go:89] found id: ""
	I0617 12:03:46.672661  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.672669  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:46.672675  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:46.672721  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:46.706431  165698 cri.go:89] found id: ""
	I0617 12:03:46.706468  165698 logs.go:276] 0 containers: []
	W0617 12:03:46.706481  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:46.706493  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:46.706509  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:46.720796  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:46.720842  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:46.801343  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:46.801365  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:46.801379  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:46.883651  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:46.883696  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:46.928594  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:46.928630  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:44.224037  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:46.224076  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:48.224472  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:45.332811  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:47.832461  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:49.334678  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:51.833994  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:49.480413  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:49.495558  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:49.495656  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:49.533281  165698 cri.go:89] found id: ""
	I0617 12:03:49.533313  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.533323  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:49.533330  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:49.533396  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:49.573430  165698 cri.go:89] found id: ""
	I0617 12:03:49.573457  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.573465  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:49.573472  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:49.573532  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:49.608669  165698 cri.go:89] found id: ""
	I0617 12:03:49.608697  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.608705  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:49.608711  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:49.608767  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:49.643411  165698 cri.go:89] found id: ""
	I0617 12:03:49.643449  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.643481  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:49.643490  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:49.643557  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:49.680039  165698 cri.go:89] found id: ""
	I0617 12:03:49.680071  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.680082  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:49.680090  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:49.680148  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:49.717169  165698 cri.go:89] found id: ""
	I0617 12:03:49.717195  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.717203  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:49.717209  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:49.717262  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:49.754585  165698 cri.go:89] found id: ""
	I0617 12:03:49.754615  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.754625  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:49.754633  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:49.754697  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:49.796040  165698 cri.go:89] found id: ""
	I0617 12:03:49.796074  165698 logs.go:276] 0 containers: []
	W0617 12:03:49.796085  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:49.796097  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:49.796112  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:49.873496  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:49.873530  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:49.873547  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:49.961883  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:49.961925  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:50.002975  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:50.003004  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:50.054185  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:50.054224  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:52.568557  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:52.584264  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:52.584337  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:52.622474  165698 cri.go:89] found id: ""
	I0617 12:03:52.622501  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.622509  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:52.622516  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:52.622566  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:52.661012  165698 cri.go:89] found id: ""
	I0617 12:03:52.661045  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.661057  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:52.661066  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:52.661133  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:52.700950  165698 cri.go:89] found id: ""
	I0617 12:03:52.700986  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.700998  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:52.701006  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:52.701075  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:52.735663  165698 cri.go:89] found id: ""
	I0617 12:03:52.735689  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.735696  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:52.735702  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:52.735768  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:52.776540  165698 cri.go:89] found id: ""
	I0617 12:03:52.776568  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.776580  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:52.776589  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:52.776642  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:52.812439  165698 cri.go:89] found id: ""
	I0617 12:03:52.812474  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.812493  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:52.812503  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:52.812567  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:52.849233  165698 cri.go:89] found id: ""
	I0617 12:03:52.849263  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.849273  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:52.849281  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:52.849343  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:52.885365  165698 cri.go:89] found id: ""
	I0617 12:03:52.885395  165698 logs.go:276] 0 containers: []
	W0617 12:03:52.885406  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:52.885419  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:52.885434  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:52.941521  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:52.941553  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:52.955958  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:52.955997  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:03:50.224702  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:52.724247  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:50.332871  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:52.832386  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:53.834382  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:55.834813  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	W0617 12:03:53.029254  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:53.029278  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:53.029291  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:53.104391  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:53.104425  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:55.648578  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:55.662143  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:55.662205  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:55.697623  165698 cri.go:89] found id: ""
	I0617 12:03:55.697662  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.697674  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:55.697682  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:55.697751  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:55.734132  165698 cri.go:89] found id: ""
	I0617 12:03:55.734171  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.734184  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:55.734192  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:55.734265  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:55.774178  165698 cri.go:89] found id: ""
	I0617 12:03:55.774212  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.774222  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:55.774231  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:55.774296  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:55.816427  165698 cri.go:89] found id: ""
	I0617 12:03:55.816460  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.816471  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:55.816480  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:55.816546  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:55.860413  165698 cri.go:89] found id: ""
	I0617 12:03:55.860446  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.860457  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:55.860465  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:55.860532  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:55.897577  165698 cri.go:89] found id: ""
	I0617 12:03:55.897612  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.897622  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:55.897629  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:55.897682  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:55.934163  165698 cri.go:89] found id: ""
	I0617 12:03:55.934200  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.934212  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:55.934220  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:55.934291  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:55.972781  165698 cri.go:89] found id: ""
	I0617 12:03:55.972827  165698 logs.go:276] 0 containers: []
	W0617 12:03:55.972840  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:55.972852  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:55.972867  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:56.027292  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:56.027332  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:56.042304  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:56.042336  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:56.115129  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:56.115159  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:56.115176  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:56.194161  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:56.194200  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:54.728169  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:57.225361  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:54.837170  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:57.333566  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:58.335846  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:00.833987  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:58.734681  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:58.748467  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:03:58.748534  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:03:58.786191  165698 cri.go:89] found id: ""
	I0617 12:03:58.786221  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.786232  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:03:58.786239  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:03:58.786302  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:03:58.822076  165698 cri.go:89] found id: ""
	I0617 12:03:58.822103  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.822125  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:03:58.822134  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:03:58.822199  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:03:58.858830  165698 cri.go:89] found id: ""
	I0617 12:03:58.858859  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.858867  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:03:58.858873  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:03:58.858927  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:03:58.898802  165698 cri.go:89] found id: ""
	I0617 12:03:58.898830  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.898838  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:03:58.898844  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:03:58.898891  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:03:58.933234  165698 cri.go:89] found id: ""
	I0617 12:03:58.933269  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.933281  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:03:58.933289  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:03:58.933355  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:03:58.973719  165698 cri.go:89] found id: ""
	I0617 12:03:58.973753  165698 logs.go:276] 0 containers: []
	W0617 12:03:58.973766  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:03:58.973773  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:03:58.973847  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:03:59.010671  165698 cri.go:89] found id: ""
	I0617 12:03:59.010722  165698 logs.go:276] 0 containers: []
	W0617 12:03:59.010734  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:03:59.010741  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:03:59.010805  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:03:59.047318  165698 cri.go:89] found id: ""
	I0617 12:03:59.047347  165698 logs.go:276] 0 containers: []
	W0617 12:03:59.047359  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:03:59.047372  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:03:59.047389  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:03:59.097778  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:03:59.097815  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:03:59.111615  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:03:59.111646  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:03:59.193172  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:03:59.193195  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:03:59.193207  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:03:59.268147  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:03:59.268182  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:01.807585  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:01.821634  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:01.821694  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:01.857610  165698 cri.go:89] found id: ""
	I0617 12:04:01.857637  165698 logs.go:276] 0 containers: []
	W0617 12:04:01.857647  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:01.857654  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:01.857710  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:01.893229  165698 cri.go:89] found id: ""
	I0617 12:04:01.893253  165698 logs.go:276] 0 containers: []
	W0617 12:04:01.893261  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:01.893267  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:01.893324  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:01.926916  165698 cri.go:89] found id: ""
	I0617 12:04:01.926940  165698 logs.go:276] 0 containers: []
	W0617 12:04:01.926950  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:01.926958  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:01.927017  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:01.961913  165698 cri.go:89] found id: ""
	I0617 12:04:01.961946  165698 logs.go:276] 0 containers: []
	W0617 12:04:01.961957  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:01.961967  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:01.962045  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:01.997084  165698 cri.go:89] found id: ""
	I0617 12:04:01.997111  165698 logs.go:276] 0 containers: []
	W0617 12:04:01.997119  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:01.997125  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:01.997173  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:02.034640  165698 cri.go:89] found id: ""
	I0617 12:04:02.034666  165698 logs.go:276] 0 containers: []
	W0617 12:04:02.034674  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:02.034680  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:02.034744  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:02.085868  165698 cri.go:89] found id: ""
	I0617 12:04:02.085910  165698 logs.go:276] 0 containers: []
	W0617 12:04:02.085920  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:02.085928  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:02.085983  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:02.152460  165698 cri.go:89] found id: ""
	I0617 12:04:02.152487  165698 logs.go:276] 0 containers: []
	W0617 12:04:02.152499  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:02.152513  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:02.152528  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:02.205297  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:02.205344  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:02.222312  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:02.222348  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:02.299934  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:02.299959  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:02.299977  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:02.384008  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:02.384056  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:03:59.724730  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:02.227215  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:03:59.833621  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:01.833799  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:02.834076  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:04.836418  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:07.335024  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:04.926889  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:04.940643  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:04.940722  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:04.976246  165698 cri.go:89] found id: ""
	I0617 12:04:04.976275  165698 logs.go:276] 0 containers: []
	W0617 12:04:04.976283  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:04.976289  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:04.976338  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:05.015864  165698 cri.go:89] found id: ""
	I0617 12:04:05.015900  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.015913  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:05.015921  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:05.015985  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:05.054051  165698 cri.go:89] found id: ""
	I0617 12:04:05.054086  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.054099  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:05.054112  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:05.054177  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:05.090320  165698 cri.go:89] found id: ""
	I0617 12:04:05.090358  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.090371  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:05.090380  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:05.090438  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:05.126963  165698 cri.go:89] found id: ""
	I0617 12:04:05.126998  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.127008  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:05.127015  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:05.127087  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:05.162565  165698 cri.go:89] found id: ""
	I0617 12:04:05.162600  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.162611  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:05.162620  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:05.162674  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:05.195706  165698 cri.go:89] found id: ""
	I0617 12:04:05.195743  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.195752  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:05.195758  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:05.195826  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:05.236961  165698 cri.go:89] found id: ""
	I0617 12:04:05.236995  165698 logs.go:276] 0 containers: []
	W0617 12:04:05.237006  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:05.237016  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:05.237034  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:05.252754  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:05.252783  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:05.327832  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:05.327870  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:05.327886  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:05.410220  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:05.410271  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:05.451291  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:05.451324  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:04.725172  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:07.223627  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:04.332177  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:06.831712  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:09.834563  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:12.334095  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:08.003058  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:08.016611  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:08.016670  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:08.052947  165698 cri.go:89] found id: ""
	I0617 12:04:08.052984  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.052996  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:08.053004  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:08.053057  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:08.086668  165698 cri.go:89] found id: ""
	I0617 12:04:08.086695  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.086704  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:08.086711  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:08.086773  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:08.127708  165698 cri.go:89] found id: ""
	I0617 12:04:08.127738  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.127746  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:08.127752  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:08.127814  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:08.162930  165698 cri.go:89] found id: ""
	I0617 12:04:08.162959  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.162966  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:08.162973  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:08.163026  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:08.196757  165698 cri.go:89] found id: ""
	I0617 12:04:08.196782  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.196791  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:08.196797  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:08.196851  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:08.229976  165698 cri.go:89] found id: ""
	I0617 12:04:08.230006  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.230016  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:08.230022  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:08.230083  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:08.265969  165698 cri.go:89] found id: ""
	I0617 12:04:08.266000  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.266007  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:08.266013  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:08.266071  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:08.299690  165698 cri.go:89] found id: ""
	I0617 12:04:08.299717  165698 logs.go:276] 0 containers: []
	W0617 12:04:08.299728  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:08.299741  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:08.299761  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:08.353399  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:08.353429  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:08.366713  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:08.366739  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:08.442727  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:08.442768  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:08.442786  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:08.527832  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:08.527875  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:11.073616  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:11.087085  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:11.087172  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:11.121706  165698 cri.go:89] found id: ""
	I0617 12:04:11.121745  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.121756  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:11.121765  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:11.121839  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:11.157601  165698 cri.go:89] found id: ""
	I0617 12:04:11.157637  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.157648  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:11.157657  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:11.157719  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:11.191929  165698 cri.go:89] found id: ""
	I0617 12:04:11.191963  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.191975  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:11.191983  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:11.192045  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:11.228391  165698 cri.go:89] found id: ""
	I0617 12:04:11.228416  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.228429  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:11.228437  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:11.228497  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:11.261880  165698 cri.go:89] found id: ""
	I0617 12:04:11.261911  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.261924  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:11.261932  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:11.261998  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:11.294615  165698 cri.go:89] found id: ""
	I0617 12:04:11.294663  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.294676  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:11.294684  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:11.294745  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:11.332813  165698 cri.go:89] found id: ""
	I0617 12:04:11.332840  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.332847  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:11.332854  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:11.332911  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:11.369032  165698 cri.go:89] found id: ""
	I0617 12:04:11.369060  165698 logs.go:276] 0 containers: []
	W0617 12:04:11.369068  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:11.369078  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:11.369090  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:11.422522  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:11.422555  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:11.436961  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:11.436990  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:11.508679  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:11.508700  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:11.508713  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:11.586574  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:11.586610  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:09.224727  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:11.225763  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:09.330868  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:11.332256  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:14.335171  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:16.836514  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:14.127034  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:14.143228  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:14.143306  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:14.178368  165698 cri.go:89] found id: ""
	I0617 12:04:14.178396  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.178405  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:14.178410  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:14.178459  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:14.209971  165698 cri.go:89] found id: ""
	I0617 12:04:14.210001  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.210010  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:14.210015  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:14.210065  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:14.244888  165698 cri.go:89] found id: ""
	I0617 12:04:14.244922  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.244933  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:14.244940  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:14.244999  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:14.277875  165698 cri.go:89] found id: ""
	I0617 12:04:14.277904  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.277914  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:14.277922  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:14.277983  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:14.312698  165698 cri.go:89] found id: ""
	I0617 12:04:14.312724  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.312733  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:14.312739  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:14.312789  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:14.350952  165698 cri.go:89] found id: ""
	I0617 12:04:14.350977  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.350987  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:14.350993  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:14.351056  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:14.389211  165698 cri.go:89] found id: ""
	I0617 12:04:14.389235  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.389243  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:14.389250  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:14.389297  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:14.426171  165698 cri.go:89] found id: ""
	I0617 12:04:14.426200  165698 logs.go:276] 0 containers: []
	W0617 12:04:14.426211  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:14.426224  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:14.426240  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:14.500403  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:14.500430  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:14.500446  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:14.588041  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:14.588078  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:14.631948  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:14.631987  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:14.681859  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:14.681895  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:17.198754  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:17.212612  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:17.212679  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:17.251011  165698 cri.go:89] found id: ""
	I0617 12:04:17.251041  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.251056  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:17.251065  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:17.251128  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:17.282964  165698 cri.go:89] found id: ""
	I0617 12:04:17.282989  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.282998  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:17.283003  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:17.283060  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:17.315570  165698 cri.go:89] found id: ""
	I0617 12:04:17.315601  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.315622  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:17.315630  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:17.315691  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:17.351186  165698 cri.go:89] found id: ""
	I0617 12:04:17.351212  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.351221  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:17.351228  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:17.351287  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:17.385609  165698 cri.go:89] found id: ""
	I0617 12:04:17.385653  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.385665  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:17.385673  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:17.385741  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:17.423890  165698 cri.go:89] found id: ""
	I0617 12:04:17.423923  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.423935  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:17.423944  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:17.424000  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:17.459543  165698 cri.go:89] found id: ""
	I0617 12:04:17.459575  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.459584  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:17.459592  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:17.459660  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:17.495554  165698 cri.go:89] found id: ""
	I0617 12:04:17.495584  165698 logs.go:276] 0 containers: []
	W0617 12:04:17.495594  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:17.495606  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:17.495632  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:17.547835  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:17.547881  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:17.562391  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:17.562422  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:17.635335  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:17.635368  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:17.635387  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:17.708946  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:17.708988  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:13.724618  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:16.224689  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:13.832533  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:15.833210  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:17.841693  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:19.336775  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:21.835598  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:20.249833  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:20.266234  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:20.266301  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:20.307380  165698 cri.go:89] found id: ""
	I0617 12:04:20.307415  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.307424  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:20.307431  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:20.307508  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:20.347193  165698 cri.go:89] found id: ""
	I0617 12:04:20.347225  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.347235  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:20.347243  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:20.347311  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:20.382673  165698 cri.go:89] found id: ""
	I0617 12:04:20.382711  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.382724  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:20.382732  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:20.382800  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:20.419542  165698 cri.go:89] found id: ""
	I0617 12:04:20.419573  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.419582  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:20.419588  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:20.419652  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:20.454586  165698 cri.go:89] found id: ""
	I0617 12:04:20.454618  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.454629  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:20.454636  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:20.454708  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:20.501094  165698 cri.go:89] found id: ""
	I0617 12:04:20.501123  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.501131  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:20.501137  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:20.501190  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:20.537472  165698 cri.go:89] found id: ""
	I0617 12:04:20.537512  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.537524  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:20.537532  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:20.537597  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:20.571477  165698 cri.go:89] found id: ""
	I0617 12:04:20.571509  165698 logs.go:276] 0 containers: []
	W0617 12:04:20.571519  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:20.571532  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:20.571550  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:20.611503  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:20.611540  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:20.663868  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:20.663905  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:20.677679  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:20.677704  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:20.753645  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:20.753663  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:20.753689  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
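Every "describe nodes" attempt above fails with "The connection to the server localhost:8443 was refused", i.e. nothing is listening on the apiserver port while the old-k8s-version cluster is being restarted, so the kubectl call exits with status 1 on each gathering pass. A tiny probe that reproduces that symptom directly, assuming it is run on the node itself:

    // probe8443.go - hedged sketch: check whether anything is serving on the apiserver port.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		// Matches the refused connections reported by kubectl in the log.
    		fmt.Println("apiserver port closed:", err)
    		return
    	}
    	defer conn.Close()
    	fmt.Println("something is listening on localhost:8443")
    }

Until kube-apiserver comes back up, both the container lookups and the node description will keep failing in exactly this pattern.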
	I0617 12:04:18.725428  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:21.224314  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:20.333214  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:22.333294  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:24.333835  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:26.335344  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:23.335535  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:23.349700  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:23.349766  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:23.384327  165698 cri.go:89] found id: ""
	I0617 12:04:23.384351  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.384358  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:23.384364  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:23.384417  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:23.427145  165698 cri.go:89] found id: ""
	I0617 12:04:23.427179  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.427190  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:23.427197  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:23.427254  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:23.461484  165698 cri.go:89] found id: ""
	I0617 12:04:23.461511  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.461522  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:23.461532  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:23.461600  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:23.501292  165698 cri.go:89] found id: ""
	I0617 12:04:23.501324  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.501334  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:23.501342  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:23.501407  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:23.537605  165698 cri.go:89] found id: ""
	I0617 12:04:23.537639  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.537649  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:23.537654  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:23.537727  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:23.576580  165698 cri.go:89] found id: ""
	I0617 12:04:23.576608  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.576616  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:23.576623  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:23.576685  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:23.613124  165698 cri.go:89] found id: ""
	I0617 12:04:23.613153  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.613161  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:23.613167  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:23.613216  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:23.648662  165698 cri.go:89] found id: ""
	I0617 12:04:23.648688  165698 logs.go:276] 0 containers: []
	W0617 12:04:23.648695  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:23.648705  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:23.648717  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:23.661737  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:23.661762  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:23.732512  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:23.732531  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:23.732547  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:23.810165  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:23.810207  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:23.855099  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:23.855136  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:26.406038  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:26.422243  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:26.422323  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:26.460959  165698 cri.go:89] found id: ""
	I0617 12:04:26.460984  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.460994  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:26.461002  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:26.461078  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:26.498324  165698 cri.go:89] found id: ""
	I0617 12:04:26.498350  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.498362  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:26.498370  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:26.498435  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:26.535299  165698 cri.go:89] found id: ""
	I0617 12:04:26.535335  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.535346  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:26.535354  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:26.535417  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:26.574623  165698 cri.go:89] found id: ""
	I0617 12:04:26.574657  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.574668  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:26.574677  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:26.574738  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:26.611576  165698 cri.go:89] found id: ""
	I0617 12:04:26.611607  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.611615  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:26.611621  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:26.611672  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:26.645664  165698 cri.go:89] found id: ""
	I0617 12:04:26.645692  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.645700  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:26.645706  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:26.645755  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:26.679442  165698 cri.go:89] found id: ""
	I0617 12:04:26.679477  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.679488  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:26.679495  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:26.679544  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:26.713512  165698 cri.go:89] found id: ""
	I0617 12:04:26.713543  165698 logs.go:276] 0 containers: []
	W0617 12:04:26.713551  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:26.713563  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:26.713584  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:26.770823  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:26.770853  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:26.784829  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:26.784858  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:26.868457  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:26.868480  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:26.868498  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:26.948522  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:26.948561  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:23.725626  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:26.224874  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:24.830639  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:26.836648  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:28.835682  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:31.335891  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:29.490891  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:29.504202  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:29.504273  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:29.544091  165698 cri.go:89] found id: ""
	I0617 12:04:29.544125  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.544137  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:29.544145  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:29.544203  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:29.581645  165698 cri.go:89] found id: ""
	I0617 12:04:29.581670  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.581679  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:29.581685  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:29.581736  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:29.621410  165698 cri.go:89] found id: ""
	I0617 12:04:29.621437  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.621447  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:29.621455  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:29.621522  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:29.659619  165698 cri.go:89] found id: ""
	I0617 12:04:29.659645  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.659654  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:29.659659  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:29.659718  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:29.698822  165698 cri.go:89] found id: ""
	I0617 12:04:29.698851  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.698859  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:29.698865  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:29.698957  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:29.741648  165698 cri.go:89] found id: ""
	I0617 12:04:29.741673  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.741680  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:29.741686  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:29.741752  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:29.777908  165698 cri.go:89] found id: ""
	I0617 12:04:29.777933  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.777941  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:29.777947  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:29.778013  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:29.812290  165698 cri.go:89] found id: ""
	I0617 12:04:29.812318  165698 logs.go:276] 0 containers: []
	W0617 12:04:29.812328  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:29.812340  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:29.812357  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:29.857527  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:29.857552  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:29.916734  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:29.916776  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:29.930988  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:29.931013  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:30.006055  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:30.006080  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:30.006098  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
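The remaining gathering steps shell out to journalctl and dmesg with the exact commands shown in the log. The sketch below runs the same three commands locally as an illustration; the real code sends them through ssh_runner into the VM and attaches the output to the test report.

    // gatherlogs.go - hedged sketch of the journalctl/dmesg gathering commands above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmds := map[string]string{
    		"kubelet": `sudo journalctl -u kubelet -n 400`,
    		"CRI-O":   `sudo journalctl -u crio -n 400`,
    		"dmesg":   `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
    	}
    	for name, cmd := range cmds {
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		if err != nil {
    			fmt.Printf("gathering %s logs failed: %v\n", name, err)
    			continue
    		}
    		fmt.Printf("=== %s (%d bytes) ===\n%s\n", name, len(out), out)
    	}
    }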
	I0617 12:04:32.586549  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:32.600139  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:32.600262  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:32.641527  165698 cri.go:89] found id: ""
	I0617 12:04:32.641554  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.641570  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:32.641579  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:32.641635  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:32.687945  165698 cri.go:89] found id: ""
	I0617 12:04:32.687972  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.687981  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:32.687996  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:32.688068  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:32.725586  165698 cri.go:89] found id: ""
	I0617 12:04:32.725618  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.725629  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:32.725639  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:32.725696  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:32.764042  165698 cri.go:89] found id: ""
	I0617 12:04:32.764090  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.764107  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:32.764115  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:32.764183  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:32.800132  165698 cri.go:89] found id: ""
	I0617 12:04:32.800167  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.800180  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:32.800189  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:32.800256  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:32.840313  165698 cri.go:89] found id: ""
	I0617 12:04:32.840348  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.840359  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:32.840367  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:32.840434  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:32.878041  165698 cri.go:89] found id: ""
	I0617 12:04:32.878067  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.878076  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:32.878082  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:32.878134  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:32.913904  165698 cri.go:89] found id: ""
	I0617 12:04:32.913939  165698 logs.go:276] 0 containers: []
	W0617 12:04:32.913950  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:32.913961  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:32.913974  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:04:28.725534  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:31.224885  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:29.330706  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:31.331989  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:33.337062  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:35.834807  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	W0617 12:04:32.987900  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:32.987929  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:32.987947  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:33.060919  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:33.060961  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:33.102602  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:33.102629  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:33.154112  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:33.154161  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:35.669336  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:35.682819  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:35.682907  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:35.717542  165698 cri.go:89] found id: ""
	I0617 12:04:35.717571  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.717579  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:35.717586  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:35.717646  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:35.754454  165698 cri.go:89] found id: ""
	I0617 12:04:35.754483  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.754495  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:35.754503  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:35.754566  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:35.791198  165698 cri.go:89] found id: ""
	I0617 12:04:35.791227  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.791237  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:35.791246  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:35.791309  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:35.826858  165698 cri.go:89] found id: ""
	I0617 12:04:35.826892  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.826903  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:35.826911  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:35.826974  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:35.866817  165698 cri.go:89] found id: ""
	I0617 12:04:35.866845  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.866853  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:35.866861  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:35.866909  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:35.918340  165698 cri.go:89] found id: ""
	I0617 12:04:35.918377  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.918388  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:35.918397  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:35.918466  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:35.960734  165698 cri.go:89] found id: ""
	I0617 12:04:35.960764  165698 logs.go:276] 0 containers: []
	W0617 12:04:35.960774  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:35.960779  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:35.960841  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:36.002392  165698 cri.go:89] found id: ""
	I0617 12:04:36.002426  165698 logs.go:276] 0 containers: []
	W0617 12:04:36.002437  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:36.002449  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:36.002465  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:36.055130  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:36.055163  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:36.069181  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:36.069209  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:36.146078  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:36.146105  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:36.146120  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:36.223763  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:36.223797  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:33.723759  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:35.725954  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:38.225200  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:33.833990  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:36.332152  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:38.332570  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:37.836765  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:40.334594  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:42.336958  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:38.767375  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:38.781301  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:38.781357  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:38.821364  165698 cri.go:89] found id: ""
	I0617 12:04:38.821390  165698 logs.go:276] 0 containers: []
	W0617 12:04:38.821400  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:38.821409  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:38.821472  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:38.860727  165698 cri.go:89] found id: ""
	I0617 12:04:38.860784  165698 logs.go:276] 0 containers: []
	W0617 12:04:38.860796  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:38.860803  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:38.860868  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:38.902932  165698 cri.go:89] found id: ""
	I0617 12:04:38.902968  165698 logs.go:276] 0 containers: []
	W0617 12:04:38.902992  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:38.902999  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:38.903088  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:38.940531  165698 cri.go:89] found id: ""
	I0617 12:04:38.940564  165698 logs.go:276] 0 containers: []
	W0617 12:04:38.940576  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:38.940584  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:38.940649  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:38.975751  165698 cri.go:89] found id: ""
	I0617 12:04:38.975792  165698 logs.go:276] 0 containers: []
	W0617 12:04:38.975827  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:38.975835  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:38.975908  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:39.011156  165698 cri.go:89] found id: ""
	I0617 12:04:39.011196  165698 logs.go:276] 0 containers: []
	W0617 12:04:39.011206  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:39.011213  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:39.011269  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:39.049266  165698 cri.go:89] found id: ""
	I0617 12:04:39.049301  165698 logs.go:276] 0 containers: []
	W0617 12:04:39.049312  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:39.049320  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:39.049373  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:39.089392  165698 cri.go:89] found id: ""
	I0617 12:04:39.089425  165698 logs.go:276] 0 containers: []
	W0617 12:04:39.089434  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:39.089444  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:39.089459  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:39.166585  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:39.166607  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:39.166619  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:39.241910  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:39.241950  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:39.287751  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:39.287782  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:39.342226  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:39.342259  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:41.857327  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:41.871379  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:41.871446  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:41.907435  165698 cri.go:89] found id: ""
	I0617 12:04:41.907472  165698 logs.go:276] 0 containers: []
	W0617 12:04:41.907483  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:41.907492  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:41.907542  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:41.941684  165698 cri.go:89] found id: ""
	I0617 12:04:41.941725  165698 logs.go:276] 0 containers: []
	W0617 12:04:41.941737  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:41.941745  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:41.941819  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:41.977359  165698 cri.go:89] found id: ""
	I0617 12:04:41.977395  165698 logs.go:276] 0 containers: []
	W0617 12:04:41.977407  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:41.977415  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:41.977478  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:42.015689  165698 cri.go:89] found id: ""
	I0617 12:04:42.015723  165698 logs.go:276] 0 containers: []
	W0617 12:04:42.015734  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:42.015742  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:42.015803  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:42.050600  165698 cri.go:89] found id: ""
	I0617 12:04:42.050626  165698 logs.go:276] 0 containers: []
	W0617 12:04:42.050637  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:42.050645  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:42.050707  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:42.088174  165698 cri.go:89] found id: ""
	I0617 12:04:42.088201  165698 logs.go:276] 0 containers: []
	W0617 12:04:42.088212  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:42.088221  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:42.088290  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:42.127335  165698 cri.go:89] found id: ""
	I0617 12:04:42.127364  165698 logs.go:276] 0 containers: []
	W0617 12:04:42.127375  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:42.127384  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:42.127443  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:42.163435  165698 cri.go:89] found id: ""
	I0617 12:04:42.163481  165698 logs.go:276] 0 containers: []
	W0617 12:04:42.163492  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:42.163505  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:42.163527  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:42.233233  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:42.233262  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:42.233280  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:42.311695  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:42.311741  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:42.378134  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:42.378163  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:42.439614  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:42.439647  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:40.726373  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:43.225144  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:40.336291  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:42.831220  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:44.835811  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:47.335772  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:44.953738  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:44.967822  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:44.967884  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:45.004583  165698 cri.go:89] found id: ""
	I0617 12:04:45.004687  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.004732  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:45.004741  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:45.004797  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:45.038912  165698 cri.go:89] found id: ""
	I0617 12:04:45.038939  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.038949  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:45.038957  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:45.039026  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:45.073594  165698 cri.go:89] found id: ""
	I0617 12:04:45.073620  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.073628  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:45.073634  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:45.073684  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:45.108225  165698 cri.go:89] found id: ""
	I0617 12:04:45.108253  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.108261  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:45.108267  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:45.108317  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:45.139522  165698 cri.go:89] found id: ""
	I0617 12:04:45.139545  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.139553  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:45.139559  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:45.139609  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:45.173705  165698 cri.go:89] found id: ""
	I0617 12:04:45.173735  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.173745  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:45.173752  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:45.173813  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:45.206448  165698 cri.go:89] found id: ""
	I0617 12:04:45.206477  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.206486  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:45.206493  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:45.206551  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:45.242925  165698 cri.go:89] found id: ""
	I0617 12:04:45.242952  165698 logs.go:276] 0 containers: []
	W0617 12:04:45.242962  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:45.242981  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:45.242998  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:45.294669  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:45.294700  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:45.307642  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:45.307670  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:45.381764  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:45.381788  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:45.381805  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:45.469022  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:45.469056  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:45.724236  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:48.225656  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:45.332888  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:47.832326  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:49.337260  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:51.338718  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:48.014169  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:48.029895  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:48.029984  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:48.086421  165698 cri.go:89] found id: ""
	I0617 12:04:48.086456  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.086468  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:48.086477  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:48.086554  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:48.135673  165698 cri.go:89] found id: ""
	I0617 12:04:48.135705  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.135713  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:48.135733  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:48.135808  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:48.184330  165698 cri.go:89] found id: ""
	I0617 12:04:48.184353  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.184362  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:48.184368  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:48.184418  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:48.221064  165698 cri.go:89] found id: ""
	I0617 12:04:48.221095  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.221103  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:48.221112  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:48.221175  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:48.264464  165698 cri.go:89] found id: ""
	I0617 12:04:48.264495  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.264502  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:48.264508  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:48.264561  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:48.302144  165698 cri.go:89] found id: ""
	I0617 12:04:48.302180  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.302191  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:48.302199  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:48.302263  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:48.345431  165698 cri.go:89] found id: ""
	I0617 12:04:48.345458  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.345465  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:48.345472  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:48.345539  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:48.383390  165698 cri.go:89] found id: ""
	I0617 12:04:48.383423  165698 logs.go:276] 0 containers: []
	W0617 12:04:48.383434  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:48.383447  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:48.383478  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:48.422328  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:48.422356  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:48.473698  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:48.473735  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:48.488399  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:48.488429  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:48.566851  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:48.566871  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:48.566884  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:51.149626  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:51.162855  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:51.162926  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:51.199056  165698 cri.go:89] found id: ""
	I0617 12:04:51.199091  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.199102  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:51.199109  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:51.199172  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:51.238773  165698 cri.go:89] found id: ""
	I0617 12:04:51.238810  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.238821  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:51.238827  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:51.238883  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:51.279049  165698 cri.go:89] found id: ""
	I0617 12:04:51.279079  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.279092  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:51.279100  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:51.279166  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:51.324923  165698 cri.go:89] found id: ""
	I0617 12:04:51.324957  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.324969  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:51.324976  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:51.325028  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:51.363019  165698 cri.go:89] found id: ""
	I0617 12:04:51.363055  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.363068  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:51.363077  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:51.363142  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:51.399620  165698 cri.go:89] found id: ""
	I0617 12:04:51.399652  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.399661  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:51.399675  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:51.399758  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:51.434789  165698 cri.go:89] found id: ""
	I0617 12:04:51.434824  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.434836  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:51.434844  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:51.434910  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:51.470113  165698 cri.go:89] found id: ""
	I0617 12:04:51.470140  165698 logs.go:276] 0 containers: []
	W0617 12:04:51.470149  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:51.470160  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:51.470176  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:51.526138  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:51.526173  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:51.539451  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:51.539491  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:51.613418  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:51.613437  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:51.613450  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:51.691971  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:51.692010  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:50.724405  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:52.725426  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:50.332363  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:52.332932  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:53.834955  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:56.334584  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:54.234514  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:54.249636  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:54.249724  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:54.283252  165698 cri.go:89] found id: ""
	I0617 12:04:54.283287  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.283300  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:54.283307  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:54.283367  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:54.319153  165698 cri.go:89] found id: ""
	I0617 12:04:54.319207  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.319218  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:54.319226  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:54.319290  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:54.361450  165698 cri.go:89] found id: ""
	I0617 12:04:54.361480  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.361491  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:54.361498  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:54.361562  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:54.397806  165698 cri.go:89] found id: ""
	I0617 12:04:54.397834  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.397843  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:54.397849  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:54.397899  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:54.447119  165698 cri.go:89] found id: ""
	I0617 12:04:54.447147  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.447155  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:54.447161  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:54.447211  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:54.489717  165698 cri.go:89] found id: ""
	I0617 12:04:54.489751  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.489760  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:54.489766  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:54.489830  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:54.532840  165698 cri.go:89] found id: ""
	I0617 12:04:54.532943  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.532975  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:54.532989  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:54.533100  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:54.568227  165698 cri.go:89] found id: ""
	I0617 12:04:54.568369  165698 logs.go:276] 0 containers: []
	W0617 12:04:54.568391  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:54.568403  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:54.568420  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:54.583140  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:54.583174  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:54.661258  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:54.661281  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:54.661296  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:54.750472  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:54.750511  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:54.797438  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:54.797467  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:57.349800  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:04:57.364820  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:04:57.364879  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:04:57.405065  165698 cri.go:89] found id: ""
	I0617 12:04:57.405093  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.405101  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:04:57.405106  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:04:57.405153  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:04:57.445707  165698 cri.go:89] found id: ""
	I0617 12:04:57.445741  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.445752  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:04:57.445760  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:04:57.445829  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:04:57.486911  165698 cri.go:89] found id: ""
	I0617 12:04:57.486940  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.486948  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:04:57.486955  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:04:57.487014  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:04:57.521218  165698 cri.go:89] found id: ""
	I0617 12:04:57.521254  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.521266  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:04:57.521274  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:04:57.521342  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:04:57.555762  165698 cri.go:89] found id: ""
	I0617 12:04:57.555794  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.555803  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:04:57.555808  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:04:57.555863  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:04:57.591914  165698 cri.go:89] found id: ""
	I0617 12:04:57.591945  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.591956  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:04:57.591971  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:04:57.592037  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:04:57.626435  165698 cri.go:89] found id: ""
	I0617 12:04:57.626463  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.626471  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:04:57.626477  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:04:57.626527  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:04:57.665088  165698 cri.go:89] found id: ""
	I0617 12:04:57.665118  165698 logs.go:276] 0 containers: []
	W0617 12:04:57.665126  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:04:57.665137  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:04:57.665152  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:57.716284  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:04:57.716316  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:04:57.730179  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:04:57.730204  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:04:57.808904  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:04:57.808933  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:04:57.808954  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:04:57.894499  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:04:57.894530  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:04:55.224507  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:57.224583  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:54.831112  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:56.832477  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:58.334640  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:00.335137  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:00.435957  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:00.450812  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:00.450890  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:00.491404  165698 cri.go:89] found id: ""
	I0617 12:05:00.491432  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.491440  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:00.491446  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:00.491523  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:00.526711  165698 cri.go:89] found id: ""
	I0617 12:05:00.526739  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.526747  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:00.526753  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:00.526817  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:00.562202  165698 cri.go:89] found id: ""
	I0617 12:05:00.562236  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.562246  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:00.562255  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:00.562323  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:00.602754  165698 cri.go:89] found id: ""
	I0617 12:05:00.602790  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.602802  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:00.602811  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:00.602877  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:00.645666  165698 cri.go:89] found id: ""
	I0617 12:05:00.645703  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.645715  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:00.645723  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:00.645788  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:00.684649  165698 cri.go:89] found id: ""
	I0617 12:05:00.684685  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.684694  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:00.684701  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:00.684784  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:00.727139  165698 cri.go:89] found id: ""
	I0617 12:05:00.727160  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.727167  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:00.727173  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:00.727238  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:00.764401  165698 cri.go:89] found id: ""
	I0617 12:05:00.764433  165698 logs.go:276] 0 containers: []
	W0617 12:05:00.764444  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:00.764455  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:00.764474  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:00.777301  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:00.777322  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:00.849752  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:00.849778  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:00.849795  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:00.930220  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:00.930266  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:00.970076  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:00.970116  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:04:59.226429  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:01.725079  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:04:59.337081  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:01.834932  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:02.834132  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:05.334066  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:07.335366  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:03.526070  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:03.541150  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:03.541229  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:03.584416  165698 cri.go:89] found id: ""
	I0617 12:05:03.584451  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.584463  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:03.584472  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:03.584535  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:03.623509  165698 cri.go:89] found id: ""
	I0617 12:05:03.623543  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.623552  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:03.623558  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:03.623611  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:03.661729  165698 cri.go:89] found id: ""
	I0617 12:05:03.661765  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.661778  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:03.661787  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:03.661852  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:03.702952  165698 cri.go:89] found id: ""
	I0617 12:05:03.702985  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.703008  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:03.703033  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:03.703100  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:03.746534  165698 cri.go:89] found id: ""
	I0617 12:05:03.746570  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.746578  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:03.746584  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:03.746648  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:03.784472  165698 cri.go:89] found id: ""
	I0617 12:05:03.784506  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.784515  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:03.784522  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:03.784580  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:03.821033  165698 cri.go:89] found id: ""
	I0617 12:05:03.821066  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.821077  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:03.821085  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:03.821146  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:03.859438  165698 cri.go:89] found id: ""
	I0617 12:05:03.859474  165698 logs.go:276] 0 containers: []
	W0617 12:05:03.859487  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:03.859497  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:03.859513  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:03.940723  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:03.940770  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:03.986267  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:03.986303  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:04.037999  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:04.038039  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:04.051382  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:04.051415  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:04.121593  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:06.622475  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:06.636761  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:06.636842  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:06.673954  165698 cri.go:89] found id: ""
	I0617 12:05:06.673995  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.674007  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:06.674015  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:06.674084  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:06.708006  165698 cri.go:89] found id: ""
	I0617 12:05:06.708037  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.708047  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:06.708055  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:06.708124  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:06.743819  165698 cri.go:89] found id: ""
	I0617 12:05:06.743852  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.743864  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:06.743872  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:06.743934  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:06.781429  165698 cri.go:89] found id: ""
	I0617 12:05:06.781457  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.781465  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:06.781473  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:06.781540  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:06.818404  165698 cri.go:89] found id: ""
	I0617 12:05:06.818435  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.818447  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:06.818456  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:06.818516  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:06.857880  165698 cri.go:89] found id: ""
	I0617 12:05:06.857913  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.857924  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:06.857933  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:06.857993  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:06.893010  165698 cri.go:89] found id: ""
	I0617 12:05:06.893050  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.893059  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:06.893065  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:06.893118  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:06.926302  165698 cri.go:89] found id: ""
	I0617 12:05:06.926336  165698 logs.go:276] 0 containers: []
	W0617 12:05:06.926347  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:06.926360  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:06.926378  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:06.997173  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:06.997197  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:06.997215  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:07.082843  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:07.082885  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:07.122542  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:07.122572  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:07.177033  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:07.177070  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:03.725338  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:06.225466  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:04.331639  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:06.331988  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:08.332139  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:09.835119  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:12.333346  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:09.693217  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:09.707043  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:09.707110  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:09.742892  165698 cri.go:89] found id: ""
	I0617 12:05:09.742918  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.742927  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:09.742933  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:09.742982  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:09.776938  165698 cri.go:89] found id: ""
	I0617 12:05:09.776969  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.776976  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:09.776982  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:09.777030  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:09.813613  165698 cri.go:89] found id: ""
	I0617 12:05:09.813643  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.813651  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:09.813658  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:09.813705  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:09.855483  165698 cri.go:89] found id: ""
	I0617 12:05:09.855516  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.855525  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:09.855532  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:09.855596  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:09.890808  165698 cri.go:89] found id: ""
	I0617 12:05:09.890844  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.890854  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:09.890862  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:09.890930  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:09.927656  165698 cri.go:89] found id: ""
	I0617 12:05:09.927684  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.927693  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:09.927703  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:09.927758  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:09.968130  165698 cri.go:89] found id: ""
	I0617 12:05:09.968163  165698 logs.go:276] 0 containers: []
	W0617 12:05:09.968174  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:09.968183  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:09.968239  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:10.010197  165698 cri.go:89] found id: ""
	I0617 12:05:10.010220  165698 logs.go:276] 0 containers: []
	W0617 12:05:10.010228  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:10.010239  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:10.010252  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:10.063999  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:10.064040  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:10.078837  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:10.078873  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:10.155932  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:10.155954  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:10.155967  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:10.232859  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:10.232901  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:12.772943  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:12.787936  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:12.788024  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:12.828457  165698 cri.go:89] found id: ""
	I0617 12:05:12.828483  165698 logs.go:276] 0 containers: []
	W0617 12:05:12.828491  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:12.828498  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:12.828562  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:12.862265  165698 cri.go:89] found id: ""
	I0617 12:05:12.862296  165698 logs.go:276] 0 containers: []
	W0617 12:05:12.862306  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:12.862313  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:12.862372  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:12.899673  165698 cri.go:89] found id: ""
	I0617 12:05:12.899698  165698 logs.go:276] 0 containers: []
	W0617 12:05:12.899706  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:12.899712  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:12.899759  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:12.943132  165698 cri.go:89] found id: ""
	I0617 12:05:12.943161  165698 logs.go:276] 0 containers: []
	W0617 12:05:12.943169  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:12.943175  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:12.943227  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:08.724369  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:10.725166  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:13.224799  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:10.333769  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:12.832493  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:14.336437  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:16.835155  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:12.985651  165698 cri.go:89] found id: ""
	I0617 12:05:12.985677  165698 logs.go:276] 0 containers: []
	W0617 12:05:12.985685  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:12.985691  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:12.985747  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:13.021484  165698 cri.go:89] found id: ""
	I0617 12:05:13.021508  165698 logs.go:276] 0 containers: []
	W0617 12:05:13.021516  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:13.021522  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:13.021569  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:13.060658  165698 cri.go:89] found id: ""
	I0617 12:05:13.060689  165698 logs.go:276] 0 containers: []
	W0617 12:05:13.060705  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:13.060713  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:13.060782  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:13.106008  165698 cri.go:89] found id: ""
	I0617 12:05:13.106041  165698 logs.go:276] 0 containers: []
	W0617 12:05:13.106052  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:13.106066  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:13.106083  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:13.160199  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:13.160231  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:13.173767  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:13.173804  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:13.245358  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:13.245383  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:13.245399  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:13.323046  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:13.323085  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:15.872024  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:15.885550  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:15.885624  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:15.920303  165698 cri.go:89] found id: ""
	I0617 12:05:15.920332  165698 logs.go:276] 0 containers: []
	W0617 12:05:15.920344  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:15.920358  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:15.920423  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:15.955132  165698 cri.go:89] found id: ""
	I0617 12:05:15.955158  165698 logs.go:276] 0 containers: []
	W0617 12:05:15.955166  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:15.955172  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:15.955220  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:15.992995  165698 cri.go:89] found id: ""
	I0617 12:05:15.993034  165698 logs.go:276] 0 containers: []
	W0617 12:05:15.993053  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:15.993060  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:15.993127  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:16.032603  165698 cri.go:89] found id: ""
	I0617 12:05:16.032638  165698 logs.go:276] 0 containers: []
	W0617 12:05:16.032650  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:16.032658  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:16.032716  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:16.071770  165698 cri.go:89] found id: ""
	I0617 12:05:16.071804  165698 logs.go:276] 0 containers: []
	W0617 12:05:16.071816  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:16.071824  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:16.071899  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:16.106172  165698 cri.go:89] found id: ""
	I0617 12:05:16.106206  165698 logs.go:276] 0 containers: []
	W0617 12:05:16.106218  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:16.106226  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:16.106292  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:16.139406  165698 cri.go:89] found id: ""
	I0617 12:05:16.139436  165698 logs.go:276] 0 containers: []
	W0617 12:05:16.139443  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:16.139449  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:16.139517  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:16.174513  165698 cri.go:89] found id: ""
	I0617 12:05:16.174554  165698 logs.go:276] 0 containers: []
	W0617 12:05:16.174565  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:16.174580  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:16.174597  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:16.240912  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:16.240940  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:16.240958  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:16.323853  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:16.323891  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:16.372632  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:16.372659  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:16.428367  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:16.428406  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:15.224918  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:17.725226  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:15.332512  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:17.833710  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:19.334324  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:21.334654  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:18.943551  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:18.957394  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:18.957490  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:18.991967  165698 cri.go:89] found id: ""
	I0617 12:05:18.992006  165698 logs.go:276] 0 containers: []
	W0617 12:05:18.992017  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:18.992027  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:18.992092  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:19.025732  165698 cri.go:89] found id: ""
	I0617 12:05:19.025763  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.025775  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:19.025783  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:19.025856  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:19.061786  165698 cri.go:89] found id: ""
	I0617 12:05:19.061820  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.061830  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:19.061838  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:19.061906  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:19.098819  165698 cri.go:89] found id: ""
	I0617 12:05:19.098856  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.098868  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:19.098876  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:19.098947  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:19.139840  165698 cri.go:89] found id: ""
	I0617 12:05:19.139877  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.139886  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:19.139894  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:19.139965  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:19.176546  165698 cri.go:89] found id: ""
	I0617 12:05:19.176578  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.176590  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:19.176598  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:19.176671  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:19.209948  165698 cri.go:89] found id: ""
	I0617 12:05:19.209985  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.209997  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:19.210005  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:19.210087  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:19.246751  165698 cri.go:89] found id: ""
	I0617 12:05:19.246788  165698 logs.go:276] 0 containers: []
	W0617 12:05:19.246799  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:19.246812  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:19.246830  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:19.322272  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:19.322316  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:19.370147  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:19.370187  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:19.422699  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:19.422749  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:19.437255  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:19.437284  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:19.510077  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:22.010840  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:22.024791  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:22.024879  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:22.060618  165698 cri.go:89] found id: ""
	I0617 12:05:22.060658  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.060667  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:22.060674  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:22.060742  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:22.100228  165698 cri.go:89] found id: ""
	I0617 12:05:22.100259  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.100268  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:22.100274  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:22.100343  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:22.135629  165698 cri.go:89] found id: ""
	I0617 12:05:22.135657  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.135665  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:22.135671  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:22.135730  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:22.186027  165698 cri.go:89] found id: ""
	I0617 12:05:22.186064  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.186076  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:22.186085  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:22.186148  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:22.220991  165698 cri.go:89] found id: ""
	I0617 12:05:22.221019  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.221029  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:22.221037  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:22.221104  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:22.266306  165698 cri.go:89] found id: ""
	I0617 12:05:22.266337  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.266348  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:22.266357  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:22.266414  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:22.303070  165698 cri.go:89] found id: ""
	I0617 12:05:22.303104  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.303116  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:22.303124  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:22.303190  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:22.339792  165698 cri.go:89] found id: ""
	I0617 12:05:22.339819  165698 logs.go:276] 0 containers: []
	W0617 12:05:22.339829  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:22.339840  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:22.339856  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:22.422360  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:22.422397  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:22.465744  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:22.465777  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:22.516199  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:22.516232  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:22.529961  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:22.529983  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:22.601519  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:20.225369  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:22.226699  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:19.834562  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:21.837426  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:23.336540  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:25.835706  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:25.102655  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:25.116893  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:25.116959  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:25.156370  165698 cri.go:89] found id: ""
	I0617 12:05:25.156396  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.156404  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:25.156410  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:25.156468  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:25.193123  165698 cri.go:89] found id: ""
	I0617 12:05:25.193199  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.193221  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:25.193234  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:25.193301  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:25.232182  165698 cri.go:89] found id: ""
	I0617 12:05:25.232209  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.232219  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:25.232227  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:25.232285  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:25.266599  165698 cri.go:89] found id: ""
	I0617 12:05:25.266630  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.266639  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:25.266645  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:25.266701  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:25.308732  165698 cri.go:89] found id: ""
	I0617 12:05:25.308762  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.308770  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:25.308776  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:25.308836  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:25.348817  165698 cri.go:89] found id: ""
	I0617 12:05:25.348858  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.348871  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:25.348879  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:25.348946  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:25.389343  165698 cri.go:89] found id: ""
	I0617 12:05:25.389375  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.389387  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:25.389393  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:25.389452  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:25.427014  165698 cri.go:89] found id: ""
	I0617 12:05:25.427043  165698 logs.go:276] 0 containers: []
	W0617 12:05:25.427055  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:25.427067  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:25.427083  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:25.441361  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:25.441390  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:25.518967  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:25.518993  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:25.519006  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:25.601411  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:25.601450  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:25.651636  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:25.651674  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:24.725515  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:27.223821  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:24.333548  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:26.832428  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:27.836661  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:30.334313  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:32.336489  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:28.202148  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:28.215710  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:28.215792  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:28.254961  165698 cri.go:89] found id: ""
	I0617 12:05:28.254986  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.255000  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:28.255007  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:28.255061  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:28.292574  165698 cri.go:89] found id: ""
	I0617 12:05:28.292606  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.292614  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:28.292620  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:28.292683  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:28.329036  165698 cri.go:89] found id: ""
	I0617 12:05:28.329067  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.329077  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:28.329085  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:28.329152  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:28.366171  165698 cri.go:89] found id: ""
	I0617 12:05:28.366197  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.366206  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:28.366212  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:28.366273  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:28.401380  165698 cri.go:89] found id: ""
	I0617 12:05:28.401407  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.401417  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:28.401424  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:28.401486  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:28.438767  165698 cri.go:89] found id: ""
	I0617 12:05:28.438798  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.438810  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:28.438817  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:28.438876  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:28.472706  165698 cri.go:89] found id: ""
	I0617 12:05:28.472761  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.472772  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:28.472779  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:28.472829  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:28.509525  165698 cri.go:89] found id: ""
	I0617 12:05:28.509548  165698 logs.go:276] 0 containers: []
	W0617 12:05:28.509556  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:28.509565  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:28.509577  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:28.606008  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:28.606059  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:28.665846  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:28.665874  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:28.721599  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:28.721627  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:28.735040  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:28.735062  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:28.811954  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
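Each pass ends with the same "Gathering logs for ..." sequence: kubelet journal, dmesg, describe nodes, CRI-O journal, and container status. The sketch below reruns that collection step with the exact commands quoted in the log; the wrapper is only illustrative and stands in for minikube's logs.go/ssh_runner plumbing, which executes the same strings over SSH.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // Run each log-gathering command quoted in the log above and print whatever
    // comes back; the describe-nodes entry fails while the apiserver is down.
    func main() {
    	sources := [][2]string{
    		{"kubelet", "sudo journalctl -u kubelet -n 400"},
    		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
    		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
    		{"CRI-O", "sudo journalctl -u crio -n 400"},
    		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    	}
    	for _, s := range sources {
    		out, err := exec.Command("/bin/bash", "-c", s[1]).CombinedOutput()
    		if err != nil {
    			fmt.Printf("gathering %s failed: %v\n", s[0], err)
    		}
    		fmt.Printf("=== %s ===\n%s\n", s[0], out)
    	}
    }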
	I0617 12:05:31.312554  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:31.326825  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:31.326905  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:31.364862  165698 cri.go:89] found id: ""
	I0617 12:05:31.364891  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.364902  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:31.364910  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:31.364976  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:31.396979  165698 cri.go:89] found id: ""
	I0617 12:05:31.397013  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.397027  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:31.397035  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:31.397098  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:31.430617  165698 cri.go:89] found id: ""
	I0617 12:05:31.430647  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.430657  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:31.430665  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:31.430728  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:31.462308  165698 cri.go:89] found id: ""
	I0617 12:05:31.462338  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.462345  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:31.462350  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:31.462399  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:31.495406  165698 cri.go:89] found id: ""
	I0617 12:05:31.495435  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.495444  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:31.495452  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:31.495553  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:31.538702  165698 cri.go:89] found id: ""
	I0617 12:05:31.538729  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.538739  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:31.538750  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:31.538813  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:31.572637  165698 cri.go:89] found id: ""
	I0617 12:05:31.572666  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.572677  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:31.572685  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:31.572745  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:31.609307  165698 cri.go:89] found id: ""
	I0617 12:05:31.609341  165698 logs.go:276] 0 containers: []
	W0617 12:05:31.609352  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:31.609364  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:31.609380  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:31.622445  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:31.622471  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:31.699170  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:31.699191  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:31.699209  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:31.775115  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:31.775156  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:31.815836  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:31.815866  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:29.225028  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:31.727009  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:29.333400  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:31.834599  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:34.836093  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:37.335140  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:34.372097  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:34.393542  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:34.393607  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:34.437265  165698 cri.go:89] found id: ""
	I0617 12:05:34.437294  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.437305  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:34.437314  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:34.437382  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:34.474566  165698 cri.go:89] found id: ""
	I0617 12:05:34.474596  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.474609  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:34.474617  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:34.474680  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:34.510943  165698 cri.go:89] found id: ""
	I0617 12:05:34.510975  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.510986  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:34.511000  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:34.511072  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:34.548124  165698 cri.go:89] found id: ""
	I0617 12:05:34.548160  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.548172  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:34.548179  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:34.548241  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:34.582428  165698 cri.go:89] found id: ""
	I0617 12:05:34.582453  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.582460  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:34.582467  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:34.582514  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:34.616895  165698 cri.go:89] found id: ""
	I0617 12:05:34.616937  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.616950  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:34.616957  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:34.617019  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:34.656116  165698 cri.go:89] found id: ""
	I0617 12:05:34.656144  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.656155  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:34.656162  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:34.656226  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:34.695649  165698 cri.go:89] found id: ""
	I0617 12:05:34.695680  165698 logs.go:276] 0 containers: []
	W0617 12:05:34.695692  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:34.695705  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:34.695722  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:34.747910  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:34.747956  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:34.762177  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:34.762206  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:34.840395  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:34.840423  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:34.840440  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:34.922962  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:34.923002  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
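Every "describe nodes" attempt above fails with "The connection to the server localhost:8443 was refused", which is consistent with the probes finding no kube-apiserver container: nothing is listening on the apiserver port. A quick, assumed check for that symptom (not part of the test suite) is to try the TCP port directly:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // Dial the apiserver port named in the log; a refused connection is the same
    // symptom kubectl reports while no kube-apiserver container is running.
    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("localhost:8443 not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on localhost:8443")
    }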
	I0617 12:05:37.464659  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:37.480351  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:37.480416  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:37.521249  165698 cri.go:89] found id: ""
	I0617 12:05:37.521279  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.521286  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:37.521293  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:37.521340  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:37.561053  165698 cri.go:89] found id: ""
	I0617 12:05:37.561079  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.561087  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:37.561094  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:37.561151  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:37.599019  165698 cri.go:89] found id: ""
	I0617 12:05:37.599057  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.599066  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:37.599074  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:37.599134  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:37.638276  165698 cri.go:89] found id: ""
	I0617 12:05:37.638304  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.638315  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:37.638323  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:37.638389  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:37.677819  165698 cri.go:89] found id: ""
	I0617 12:05:37.677845  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.677853  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:37.677859  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:37.677910  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:37.715850  165698 cri.go:89] found id: ""
	I0617 12:05:37.715877  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.715888  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:37.715897  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:37.715962  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:37.755533  165698 cri.go:89] found id: ""
	I0617 12:05:37.755563  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.755570  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:37.755576  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:37.755636  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:37.791826  165698 cri.go:89] found id: ""
	I0617 12:05:37.791850  165698 logs.go:276] 0 containers: []
	W0617 12:05:37.791859  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:37.791872  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:37.791888  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:37.844824  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:37.844853  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:37.860933  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:37.860963  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:37.926497  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:37.926519  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:37.926535  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:34.224078  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:36.224464  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:38.224753  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:34.333888  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:36.832374  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:39.336299  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:41.834494  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:38.003814  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:38.003853  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:40.546386  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:40.560818  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:40.560896  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:40.596737  165698 cri.go:89] found id: ""
	I0617 12:05:40.596777  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.596784  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:40.596791  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:40.596844  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:40.631518  165698 cri.go:89] found id: ""
	I0617 12:05:40.631556  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.631570  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:40.631611  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:40.631683  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:40.674962  165698 cri.go:89] found id: ""
	I0617 12:05:40.674997  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.675006  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:40.675012  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:40.675064  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:40.716181  165698 cri.go:89] found id: ""
	I0617 12:05:40.716210  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.716218  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:40.716226  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:40.716286  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:40.756312  165698 cri.go:89] found id: ""
	I0617 12:05:40.756339  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.756348  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:40.756353  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:40.756406  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:40.791678  165698 cri.go:89] found id: ""
	I0617 12:05:40.791733  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.791750  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:40.791759  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:40.791830  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:40.830717  165698 cri.go:89] found id: ""
	I0617 12:05:40.830754  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.830766  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:40.830774  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:40.830854  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:40.868139  165698 cri.go:89] found id: ""
	I0617 12:05:40.868169  165698 logs.go:276] 0 containers: []
	W0617 12:05:40.868178  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:40.868198  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:40.868224  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:40.920319  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:40.920353  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:40.934948  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:40.934974  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:41.005349  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:41.005371  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:41.005388  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:41.086783  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:41.086842  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:40.724767  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:43.223836  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:38.834031  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:41.331190  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:43.332593  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:44.334114  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:46.334595  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
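Interleaved with the 165698 output, three other runs (PIDs 165060, 164809 and 166103) keep logging pod_ready.go:102 because their metrics-server pods never report Ready. The snippet below is an assumed stand-in for that poll, shelling out to kubectl for the pod's Ready condition; the pod name and namespace are taken from the log, everything else is illustrative.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // Poll the Ready condition of one metrics-server pod named in the log until it
    // flips to "True" or we give up, similar in spirit to the pod_ready.go loop.
    func main() {
    	jsonpath := `-o=jsonpath={.status.conditions[?(@.type=="Ready")].status}`
    	for i := 0; i < 10; i++ {
    		out, err := exec.Command("kubectl", "-n", "kube-system", "get", "pod",
    			"metrics-server-569cc877fc-dmhfs", jsonpath).Output()
    		status := strings.TrimSpace(string(out))
    		if err != nil {
    			fmt.Println("kubectl failed:", err)
    		} else {
    			fmt.Printf("Ready=%q\n", status) // the log above keeps observing "False"
    		}
    		if status == "True" {
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    }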
	I0617 12:05:43.625515  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:43.638942  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:43.639019  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:43.673703  165698 cri.go:89] found id: ""
	I0617 12:05:43.673735  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.673747  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:43.673756  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:43.673822  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:43.709417  165698 cri.go:89] found id: ""
	I0617 12:05:43.709449  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.709460  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:43.709468  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:43.709529  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:43.742335  165698 cri.go:89] found id: ""
	I0617 12:05:43.742368  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.742379  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:43.742389  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:43.742449  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:43.779112  165698 cri.go:89] found id: ""
	I0617 12:05:43.779141  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.779150  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:43.779155  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:43.779219  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:43.813362  165698 cri.go:89] found id: ""
	I0617 12:05:43.813397  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.813406  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:43.813414  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:43.813464  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:43.850456  165698 cri.go:89] found id: ""
	I0617 12:05:43.850484  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.850493  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:43.850499  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:43.850547  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:43.884527  165698 cri.go:89] found id: ""
	I0617 12:05:43.884555  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.884564  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:43.884571  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:43.884632  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:43.921440  165698 cri.go:89] found id: ""
	I0617 12:05:43.921476  165698 logs.go:276] 0 containers: []
	W0617 12:05:43.921488  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:43.921501  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:43.921517  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:43.973687  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:43.973727  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:43.988114  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:43.988143  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:44.055084  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:44.055119  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:44.055138  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:44.134628  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:44.134665  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:46.677852  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:46.690688  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:46.690747  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:46.724055  165698 cri.go:89] found id: ""
	I0617 12:05:46.724090  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.724101  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:46.724110  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:46.724171  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:46.759119  165698 cri.go:89] found id: ""
	I0617 12:05:46.759150  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.759161  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:46.759169  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:46.759227  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:46.796392  165698 cri.go:89] found id: ""
	I0617 12:05:46.796424  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.796435  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:46.796442  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:46.796504  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:46.831727  165698 cri.go:89] found id: ""
	I0617 12:05:46.831761  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.831770  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:46.831777  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:46.831845  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:46.866662  165698 cri.go:89] found id: ""
	I0617 12:05:46.866693  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.866702  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:46.866708  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:46.866757  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:46.905045  165698 cri.go:89] found id: ""
	I0617 12:05:46.905070  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.905078  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:46.905084  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:46.905130  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:46.940879  165698 cri.go:89] found id: ""
	I0617 12:05:46.940907  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.940915  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:46.940926  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:46.940974  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:46.977247  165698 cri.go:89] found id: ""
	I0617 12:05:46.977290  165698 logs.go:276] 0 containers: []
	W0617 12:05:46.977301  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:46.977314  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:46.977331  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:47.046094  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:47.046116  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:47.046133  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:47.122994  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:47.123038  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:47.166273  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:47.166313  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:47.221392  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:47.221429  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:45.228807  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:47.723584  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:45.834805  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:48.333121  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:48.335758  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:50.833989  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:49.739113  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:49.752880  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:49.753004  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:49.791177  165698 cri.go:89] found id: ""
	I0617 12:05:49.791218  165698 logs.go:276] 0 containers: []
	W0617 12:05:49.791242  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:49.791251  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:49.791322  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:49.831602  165698 cri.go:89] found id: ""
	I0617 12:05:49.831633  165698 logs.go:276] 0 containers: []
	W0617 12:05:49.831644  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:49.831652  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:49.831719  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:49.870962  165698 cri.go:89] found id: ""
	I0617 12:05:49.870998  165698 logs.go:276] 0 containers: []
	W0617 12:05:49.871011  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:49.871019  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:49.871092  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:49.917197  165698 cri.go:89] found id: ""
	I0617 12:05:49.917232  165698 logs.go:276] 0 containers: []
	W0617 12:05:49.917243  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:49.917252  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:49.917320  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:49.952997  165698 cri.go:89] found id: ""
	I0617 12:05:49.953034  165698 logs.go:276] 0 containers: []
	W0617 12:05:49.953047  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:49.953056  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:49.953114  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:50.001925  165698 cri.go:89] found id: ""
	I0617 12:05:50.001965  165698 logs.go:276] 0 containers: []
	W0617 12:05:50.001977  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:50.001986  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:50.002059  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:50.043374  165698 cri.go:89] found id: ""
	I0617 12:05:50.043403  165698 logs.go:276] 0 containers: []
	W0617 12:05:50.043412  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:50.043419  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:50.043496  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:50.082974  165698 cri.go:89] found id: ""
	I0617 12:05:50.083009  165698 logs.go:276] 0 containers: []
	W0617 12:05:50.083020  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:50.083029  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:50.083043  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:50.134116  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:50.134159  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:50.148478  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:50.148511  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:50.227254  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:50.227276  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:50.227288  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:50.305920  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:50.305960  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:52.848811  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:52.862612  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:52.862669  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:52.896379  165698 cri.go:89] found id: ""
	I0617 12:05:52.896410  165698 logs.go:276] 0 containers: []
	W0617 12:05:52.896421  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:52.896429  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:52.896488  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:52.933387  165698 cri.go:89] found id: ""
	I0617 12:05:52.933422  165698 logs.go:276] 0 containers: []
	W0617 12:05:52.933432  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:52.933439  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:52.933501  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:52.971055  165698 cri.go:89] found id: ""
	I0617 12:05:52.971091  165698 logs.go:276] 0 containers: []
	W0617 12:05:52.971102  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:52.971110  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:52.971168  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:49.724816  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:52.224660  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:50.334092  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:52.831686  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:52.835473  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:55.334017  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:57.334957  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:53.003815  165698 cri.go:89] found id: ""
	I0617 12:05:53.003846  165698 logs.go:276] 0 containers: []
	W0617 12:05:53.003857  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:53.003864  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:53.003927  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:53.039133  165698 cri.go:89] found id: ""
	I0617 12:05:53.039161  165698 logs.go:276] 0 containers: []
	W0617 12:05:53.039169  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:53.039175  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:53.039229  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:53.077703  165698 cri.go:89] found id: ""
	I0617 12:05:53.077756  165698 logs.go:276] 0 containers: []
	W0617 12:05:53.077773  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:53.077780  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:53.077831  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:53.119187  165698 cri.go:89] found id: ""
	I0617 12:05:53.119216  165698 logs.go:276] 0 containers: []
	W0617 12:05:53.119223  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:53.119230  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:53.119287  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:53.154423  165698 cri.go:89] found id: ""
	I0617 12:05:53.154457  165698 logs.go:276] 0 containers: []
	W0617 12:05:53.154467  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:53.154480  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:53.154496  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:53.202745  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:53.202778  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:53.216510  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:53.216537  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:53.295687  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:53.295712  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:53.295732  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:53.375064  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:53.375095  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:55.915113  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:55.929155  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:55.929239  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:55.964589  165698 cri.go:89] found id: ""
	I0617 12:05:55.964625  165698 logs.go:276] 0 containers: []
	W0617 12:05:55.964634  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:55.964640  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:55.964702  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:56.003659  165698 cri.go:89] found id: ""
	I0617 12:05:56.003691  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.003701  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:56.003709  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:56.003778  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:56.039674  165698 cri.go:89] found id: ""
	I0617 12:05:56.039707  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.039717  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:56.039724  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:56.039786  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:56.077695  165698 cri.go:89] found id: ""
	I0617 12:05:56.077736  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.077748  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:56.077756  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:56.077826  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:56.116397  165698 cri.go:89] found id: ""
	I0617 12:05:56.116430  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.116442  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:56.116451  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:56.116512  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:56.152395  165698 cri.go:89] found id: ""
	I0617 12:05:56.152433  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.152445  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:56.152454  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:56.152513  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:56.189740  165698 cri.go:89] found id: ""
	I0617 12:05:56.189776  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.189788  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:56.189796  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:56.189866  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:56.228017  165698 cri.go:89] found id: ""
	I0617 12:05:56.228047  165698 logs.go:276] 0 containers: []
	W0617 12:05:56.228055  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:56.228063  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:56.228076  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:56.279032  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:56.279079  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:56.294369  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:56.294394  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:56.369507  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:56.369535  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:56.369551  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:56.454797  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:56.454833  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:54.725303  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:56.726247  165060 pod_ready.go:102] pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:56.726280  165060 pod_ready.go:81] duration metric: took 4m0.008373114s for pod "metrics-server-569cc877fc-dmhfs" in "kube-system" namespace to be "Ready" ...
	E0617 12:05:56.726291  165060 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0617 12:05:56.726298  165060 pod_ready.go:38] duration metric: took 4m3.608691328s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:05:56.726315  165060 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:05:56.726352  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:56.726411  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:56.784765  165060 cri.go:89] found id: "5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:05:56.784792  165060 cri.go:89] found id: ""
	I0617 12:05:56.784803  165060 logs.go:276] 1 containers: [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3]
	I0617 12:05:56.784865  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:56.791125  165060 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:56.791189  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:56.830691  165060 cri.go:89] found id: "fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:05:56.830715  165060 cri.go:89] found id: ""
	I0617 12:05:56.830725  165060 logs.go:276] 1 containers: [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9]
	I0617 12:05:56.830785  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:56.836214  165060 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:56.836282  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:56.875812  165060 cri.go:89] found id: "c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:05:56.875830  165060 cri.go:89] found id: ""
	I0617 12:05:56.875837  165060 logs.go:276] 1 containers: [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7]
	I0617 12:05:56.875891  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:56.880190  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:56.880247  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:56.925155  165060 cri.go:89] found id: "157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:05:56.925178  165060 cri.go:89] found id: ""
	I0617 12:05:56.925186  165060 logs.go:276] 1 containers: [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d]
	I0617 12:05:56.925231  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:56.930317  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:56.930384  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:56.972479  165060 cri.go:89] found id: "c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:05:56.972503  165060 cri.go:89] found id: ""
	I0617 12:05:56.972512  165060 logs.go:276] 1 containers: [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d]
	I0617 12:05:56.972559  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:56.977635  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:56.977696  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:57.012791  165060 cri.go:89] found id: "2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
	I0617 12:05:57.012816  165060 cri.go:89] found id: ""
	I0617 12:05:57.012826  165060 logs.go:276] 1 containers: [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079]
	I0617 12:05:57.012882  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:57.016856  165060 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:57.016909  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:57.052111  165060 cri.go:89] found id: ""
	I0617 12:05:57.052146  165060 logs.go:276] 0 containers: []
	W0617 12:05:57.052156  165060 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:57.052163  165060 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:05:57.052211  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:05:57.094600  165060 cri.go:89] found id: "02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:05:57.094619  165060 cri.go:89] found id: "7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
	I0617 12:05:57.094622  165060 cri.go:89] found id: ""
	I0617 12:05:57.094630  165060 logs.go:276] 2 containers: [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36]
	I0617 12:05:57.094700  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:57.099250  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:05:57.104252  165060 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:57.104281  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:57.162000  165060 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:57.162027  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:05:57.285448  165060 logs.go:123] Gathering logs for etcd [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9] ...
	I0617 12:05:57.285490  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:05:57.340781  165060 logs.go:123] Gathering logs for coredns [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7] ...
	I0617 12:05:57.340820  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:05:57.383507  165060 logs.go:123] Gathering logs for kube-scheduler [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d] ...
	I0617 12:05:57.383540  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:05:57.428747  165060 logs.go:123] Gathering logs for kube-proxy [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d] ...
	I0617 12:05:57.428792  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:05:57.468739  165060 logs.go:123] Gathering logs for kube-controller-manager [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079] ...
	I0617 12:05:57.468770  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
	I0617 12:05:57.531317  165060 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:57.531355  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:58.063787  165060 logs.go:123] Gathering logs for container status ...
	I0617 12:05:58.063838  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:05:58.129384  165060 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:58.129416  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:58.144078  165060 logs.go:123] Gathering logs for kube-apiserver [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3] ...
	I0617 12:05:58.144152  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:05:58.189028  165060 logs.go:123] Gathering logs for storage-provisioner [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92] ...
	I0617 12:05:58.189068  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:05:58.227144  165060 logs.go:123] Gathering logs for storage-provisioner [7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36] ...
	I0617 12:05:58.227178  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
	I0617 12:05:54.838580  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:57.333884  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:59.836198  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:01.837155  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:05:58.995221  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:05:59.008481  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:05:59.008555  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:05:59.043854  165698 cri.go:89] found id: ""
	I0617 12:05:59.043887  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.043914  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:05:59.043935  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:05:59.044003  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:05:59.081488  165698 cri.go:89] found id: ""
	I0617 12:05:59.081522  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.081530  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:05:59.081537  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:05:59.081596  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:05:59.118193  165698 cri.go:89] found id: ""
	I0617 12:05:59.118222  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.118232  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:05:59.118240  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:05:59.118306  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:05:59.150286  165698 cri.go:89] found id: ""
	I0617 12:05:59.150315  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.150327  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:05:59.150335  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:05:59.150381  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:05:59.191426  165698 cri.go:89] found id: ""
	I0617 12:05:59.191450  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.191485  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:05:59.191493  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:05:59.191547  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:05:59.224933  165698 cri.go:89] found id: ""
	I0617 12:05:59.224965  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.224974  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:05:59.224998  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:05:59.225061  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:05:59.255929  165698 cri.go:89] found id: ""
	I0617 12:05:59.255956  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.255965  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:05:59.255971  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:05:59.256025  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:05:59.293072  165698 cri.go:89] found id: ""
	I0617 12:05:59.293097  165698 logs.go:276] 0 containers: []
	W0617 12:05:59.293104  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:05:59.293114  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:05:59.293126  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:05:59.354240  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:05:59.354267  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:05:59.367715  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:05:59.367744  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:05:59.446352  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:05:59.446381  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:05:59.446396  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:05:59.528701  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:05:59.528738  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:02.071616  165698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:06:02.088050  165698 kubeadm.go:591] duration metric: took 4m3.493743262s to restartPrimaryControlPlane
	W0617 12:06:02.088159  165698 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0617 12:06:02.088194  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0617 12:06:02.552133  165698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:06:02.570136  165698 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:06:02.582299  165698 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:06:02.594775  165698 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:06:02.594809  165698 kubeadm.go:156] found existing configuration files:
	
	I0617 12:06:02.594867  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:06:02.605875  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:06:02.605954  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:06:02.617780  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:06:02.628284  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:06:02.628359  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:06:02.639128  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:06:02.650079  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:06:02.650144  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:06:02.660879  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:06:02.671170  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:06:02.671249  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 12:06:02.682071  165698 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0617 12:06:02.753750  165698 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0617 12:06:02.753913  165698 kubeadm.go:309] [preflight] Running pre-flight checks
	I0617 12:06:02.897384  165698 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0617 12:06:02.897530  165698 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0617 12:06:02.897685  165698 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0617 12:06:03.079116  165698 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0617 12:06:00.764533  165060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:06:00.781564  165060 api_server.go:72] duration metric: took 4m14.875617542s to wait for apiserver process to appear ...
	I0617 12:06:00.781593  165060 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:06:00.781642  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:06:00.781706  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:06:00.817980  165060 cri.go:89] found id: "5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:06:00.818013  165060 cri.go:89] found id: ""
	I0617 12:06:00.818024  165060 logs.go:276] 1 containers: [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3]
	I0617 12:06:00.818080  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:00.822664  165060 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:06:00.822759  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:06:00.861518  165060 cri.go:89] found id: "fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:06:00.861545  165060 cri.go:89] found id: ""
	I0617 12:06:00.861556  165060 logs.go:276] 1 containers: [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9]
	I0617 12:06:00.861614  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:00.865885  165060 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:06:00.865973  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:06:00.900844  165060 cri.go:89] found id: "c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:06:00.900864  165060 cri.go:89] found id: ""
	I0617 12:06:00.900875  165060 logs.go:276] 1 containers: [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7]
	I0617 12:06:00.900930  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:00.905253  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:06:00.905317  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:06:00.938998  165060 cri.go:89] found id: "157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:06:00.939036  165060 cri.go:89] found id: ""
	I0617 12:06:00.939046  165060 logs.go:276] 1 containers: [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d]
	I0617 12:06:00.939114  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:00.943170  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:06:00.943234  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:06:00.982923  165060 cri.go:89] found id: "c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:06:00.982953  165060 cri.go:89] found id: ""
	I0617 12:06:00.982964  165060 logs.go:276] 1 containers: [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d]
	I0617 12:06:00.983034  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:00.987696  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:06:00.987769  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:06:01.033789  165060 cri.go:89] found id: "2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
	I0617 12:06:01.033825  165060 cri.go:89] found id: ""
	I0617 12:06:01.033837  165060 logs.go:276] 1 containers: [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079]
	I0617 12:06:01.033901  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:01.038800  165060 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:06:01.038861  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:06:01.077797  165060 cri.go:89] found id: ""
	I0617 12:06:01.077834  165060 logs.go:276] 0 containers: []
	W0617 12:06:01.077846  165060 logs.go:278] No container was found matching "kindnet"
	I0617 12:06:01.077855  165060 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:06:01.077916  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:06:01.116275  165060 cri.go:89] found id: "02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:06:01.116296  165060 cri.go:89] found id: "7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
	I0617 12:06:01.116303  165060 cri.go:89] found id: ""
	I0617 12:06:01.116311  165060 logs.go:276] 2 containers: [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36]
	I0617 12:06:01.116365  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:01.121088  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:01.125393  165060 logs.go:123] Gathering logs for container status ...
	I0617 12:06:01.125417  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:01.170817  165060 logs.go:123] Gathering logs for kubelet ...
	I0617 12:06:01.170844  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:06:01.223072  165060 logs.go:123] Gathering logs for kube-apiserver [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3] ...
	I0617 12:06:01.223114  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:06:01.269212  165060 logs.go:123] Gathering logs for kube-scheduler [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d] ...
	I0617 12:06:01.269245  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:06:01.313518  165060 logs.go:123] Gathering logs for kube-proxy [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d] ...
	I0617 12:06:01.313557  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:06:01.357935  165060 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:06:01.357965  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:06:01.784493  165060 logs.go:123] Gathering logs for storage-provisioner [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92] ...
	I0617 12:06:01.784542  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:06:01.825824  165060 logs.go:123] Gathering logs for storage-provisioner [7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36] ...
	I0617 12:06:01.825851  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
	I0617 12:06:01.866216  165060 logs.go:123] Gathering logs for dmesg ...
	I0617 12:06:01.866252  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:06:01.881292  165060 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:06:01.881316  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:06:02.000026  165060 logs.go:123] Gathering logs for etcd [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9] ...
	I0617 12:06:02.000063  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:06:02.043491  165060 logs.go:123] Gathering logs for coredns [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7] ...
	I0617 12:06:02.043524  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:06:02.081957  165060 logs.go:123] Gathering logs for kube-controller-manager [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079] ...
	I0617 12:06:02.081984  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
	I0617 12:05:59.835769  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:02.332739  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:03.080903  165698 out.go:204]   - Generating certificates and keys ...
	I0617 12:06:03.081006  165698 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0617 12:06:03.081080  165698 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0617 12:06:03.081168  165698 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0617 12:06:03.081250  165698 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0617 12:06:03.081377  165698 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0617 12:06:03.081457  165698 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0617 12:06:03.082418  165698 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0617 12:06:03.083003  165698 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0617 12:06:03.083917  165698 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0617 12:06:03.084820  165698 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0617 12:06:03.085224  165698 kubeadm.go:309] [certs] Using the existing "sa" key
	I0617 12:06:03.085307  165698 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0617 12:06:03.203342  165698 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0617 12:06:03.430428  165698 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0617 12:06:03.570422  165698 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0617 12:06:03.772092  165698 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0617 12:06:03.793105  165698 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0617 12:06:03.793206  165698 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0617 12:06:03.793261  165698 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0617 12:06:03.919738  165698 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0617 12:06:04.333408  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:06.333963  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:03.921593  165698 out.go:204]   - Booting up control plane ...
	I0617 12:06:03.921708  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0617 12:06:03.928168  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0617 12:06:03.928279  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0617 12:06:03.937197  165698 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0617 12:06:03.939967  165698 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0617 12:06:04.644102  165060 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I0617 12:06:04.648733  165060 api_server.go:279] https://192.168.72.199:8443/healthz returned 200:
	ok
	I0617 12:06:04.649862  165060 api_server.go:141] control plane version: v1.30.1
	I0617 12:06:04.649894  165060 api_server.go:131] duration metric: took 3.86829173s to wait for apiserver health ...
	I0617 12:06:04.649905  165060 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:06:04.649936  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:06:04.649997  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:06:04.688904  165060 cri.go:89] found id: "5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:06:04.688923  165060 cri.go:89] found id: ""
	I0617 12:06:04.688931  165060 logs.go:276] 1 containers: [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3]
	I0617 12:06:04.688975  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.695049  165060 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:06:04.695110  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:06:04.730292  165060 cri.go:89] found id: "fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:06:04.730314  165060 cri.go:89] found id: ""
	I0617 12:06:04.730322  165060 logs.go:276] 1 containers: [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9]
	I0617 12:06:04.730373  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.734432  165060 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:06:04.734486  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:06:04.771401  165060 cri.go:89] found id: "c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:06:04.771418  165060 cri.go:89] found id: ""
	I0617 12:06:04.771426  165060 logs.go:276] 1 containers: [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7]
	I0617 12:06:04.771496  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.775822  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:06:04.775876  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:06:04.816111  165060 cri.go:89] found id: "157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:06:04.816131  165060 cri.go:89] found id: ""
	I0617 12:06:04.816139  165060 logs.go:276] 1 containers: [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d]
	I0617 12:06:04.816185  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.820614  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:06:04.820672  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:06:04.865387  165060 cri.go:89] found id: "c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:06:04.865411  165060 cri.go:89] found id: ""
	I0617 12:06:04.865421  165060 logs.go:276] 1 containers: [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d]
	I0617 12:06:04.865479  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.870192  165060 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:06:04.870263  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:06:04.912698  165060 cri.go:89] found id: "2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
	I0617 12:06:04.912723  165060 cri.go:89] found id: ""
	I0617 12:06:04.912734  165060 logs.go:276] 1 containers: [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079]
	I0617 12:06:04.912796  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:04.917484  165060 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:06:04.917563  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:06:04.954076  165060 cri.go:89] found id: ""
	I0617 12:06:04.954109  165060 logs.go:276] 0 containers: []
	W0617 12:06:04.954120  165060 logs.go:278] No container was found matching "kindnet"
	I0617 12:06:04.954129  165060 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:06:04.954196  165060 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:06:04.995832  165060 cri.go:89] found id: "02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:06:04.995858  165060 cri.go:89] found id: "7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
	I0617 12:06:04.995862  165060 cri.go:89] found id: ""
	I0617 12:06:04.995869  165060 logs.go:276] 2 containers: [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36]
	I0617 12:06:04.995928  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:05.000741  165060 ssh_runner.go:195] Run: which crictl
	I0617 12:06:05.004995  165060 logs.go:123] Gathering logs for storage-provisioner [02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92] ...
	I0617 12:06:05.005026  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02e13a25f376ff7704eb9bde517216af071517c349370d78ddf9b80307457f92"
	I0617 12:06:05.040651  165060 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:06:05.040692  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:06:05.461644  165060 logs.go:123] Gathering logs for container status ...
	I0617 12:06:05.461685  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:05.508706  165060 logs.go:123] Gathering logs for kubelet ...
	I0617 12:06:05.508733  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:06:05.562418  165060 logs.go:123] Gathering logs for kube-apiserver [5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3] ...
	I0617 12:06:05.562461  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e7549e0748026a9e69358dbe81a27f130d87610e219265916a95c43ffa3a1a3"
	I0617 12:06:05.606489  165060 logs.go:123] Gathering logs for etcd [fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9] ...
	I0617 12:06:05.606527  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb99e2cd3471db9b46a180f2eb9a8d70f04f022cce88dc12380bb0465875d4b9"
	I0617 12:06:05.651719  165060 logs.go:123] Gathering logs for coredns [c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7] ...
	I0617 12:06:05.651753  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c610c7cafac56581ace61966828ab78bcb03484fe7546ab1cd22b9b6bf3393d7"
	I0617 12:06:05.688736  165060 logs.go:123] Gathering logs for kube-proxy [c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d] ...
	I0617 12:06:05.688772  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2c534f434b08887ca517b25b405e6efde2c9d3b67bbc7215b5c39bcfe94982d"
	I0617 12:06:05.730649  165060 logs.go:123] Gathering logs for dmesg ...
	I0617 12:06:05.730679  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:06:05.745482  165060 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:06:05.745511  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:06:05.849002  165060 logs.go:123] Gathering logs for kube-scheduler [157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d] ...
	I0617 12:06:05.849025  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 157a0a340155527cc792d8fa5914539db52ce2ca9bfbd3a9db981f01b3fd559d"
	I0617 12:06:05.890802  165060 logs.go:123] Gathering logs for kube-controller-manager [2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079] ...
	I0617 12:06:05.890836  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2436d8198185596a481b3da0d6245bad3ad470095c0adc1cd7a6f5238ee91079"
	I0617 12:06:05.946444  165060 logs.go:123] Gathering logs for storage-provisioner [7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36] ...
	I0617 12:06:05.946474  165060 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a03f8aca2ce90dd9e2ab33b8cdb8736c7299feb128b604b903666307623be36"
	I0617 12:06:04.332977  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:06.834683  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:08.489561  165060 system_pods.go:59] 8 kube-system pods found
	I0617 12:06:08.489593  165060 system_pods.go:61] "coredns-7db6d8ff4d-9bbjg" [1ba0eee5-436e-4c83-b5ce-3c907d66b641] Running
	I0617 12:06:08.489597  165060 system_pods.go:61] "etcd-embed-certs-136195" [6dc81a80-c56b-4517-af82-c450cf9578f5] Running
	I0617 12:06:08.489601  165060 system_pods.go:61] "kube-apiserver-embed-certs-136195" [bd61a715-2471-4dca-aa48-a157531ebd6b] Running
	I0617 12:06:08.489605  165060 system_pods.go:61] "kube-controller-manager-embed-certs-136195" [194db4b0-75c2-4905-8e4d-813185497b51] Running
	I0617 12:06:08.489607  165060 system_pods.go:61] "kube-proxy-25d5n" [52b6d09a-899f-40c4-b1f3-7842ae755165] Running
	I0617 12:06:08.489610  165060 system_pods.go:61] "kube-scheduler-embed-certs-136195" [b04d3798-f465-4f82-9ec7-777ea62d5b94] Running
	I0617 12:06:08.489616  165060 system_pods.go:61] "metrics-server-569cc877fc-dmhfs" [31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:06:08.489620  165060 system_pods.go:61] "storage-provisioner" [4b04a38a-5006-4496-a24d-0940029193de] Running
	I0617 12:06:08.489626  165060 system_pods.go:74] duration metric: took 3.839715717s to wait for pod list to return data ...
	I0617 12:06:08.489633  165060 default_sa.go:34] waiting for default service account to be created ...
	I0617 12:06:08.491984  165060 default_sa.go:45] found service account: "default"
	I0617 12:06:08.492007  165060 default_sa.go:55] duration metric: took 2.365306ms for default service account to be created ...
	I0617 12:06:08.492014  165060 system_pods.go:116] waiting for k8s-apps to be running ...
	I0617 12:06:08.497834  165060 system_pods.go:86] 8 kube-system pods found
	I0617 12:06:08.497865  165060 system_pods.go:89] "coredns-7db6d8ff4d-9bbjg" [1ba0eee5-436e-4c83-b5ce-3c907d66b641] Running
	I0617 12:06:08.497873  165060 system_pods.go:89] "etcd-embed-certs-136195" [6dc81a80-c56b-4517-af82-c450cf9578f5] Running
	I0617 12:06:08.497880  165060 system_pods.go:89] "kube-apiserver-embed-certs-136195" [bd61a715-2471-4dca-aa48-a157531ebd6b] Running
	I0617 12:06:08.497887  165060 system_pods.go:89] "kube-controller-manager-embed-certs-136195" [194db4b0-75c2-4905-8e4d-813185497b51] Running
	I0617 12:06:08.497891  165060 system_pods.go:89] "kube-proxy-25d5n" [52b6d09a-899f-40c4-b1f3-7842ae755165] Running
	I0617 12:06:08.497899  165060 system_pods.go:89] "kube-scheduler-embed-certs-136195" [b04d3798-f465-4f82-9ec7-777ea62d5b94] Running
	I0617 12:06:08.497905  165060 system_pods.go:89] "metrics-server-569cc877fc-dmhfs" [31d01cf6-9cac-4a1f-8cdc-63f9d3db21d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:06:08.497914  165060 system_pods.go:89] "storage-provisioner" [4b04a38a-5006-4496-a24d-0940029193de] Running
	I0617 12:06:08.497921  165060 system_pods.go:126] duration metric: took 5.901391ms to wait for k8s-apps to be running ...
	I0617 12:06:08.497927  165060 system_svc.go:44] waiting for kubelet service to be running ....
	I0617 12:06:08.497970  165060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:06:08.520136  165060 system_svc.go:56] duration metric: took 22.203601ms WaitForService to wait for kubelet
	I0617 12:06:08.520159  165060 kubeadm.go:576] duration metric: took 4m22.614222011s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 12:06:08.520178  165060 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:06:08.522704  165060 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:06:08.522741  165060 node_conditions.go:123] node cpu capacity is 2
	I0617 12:06:08.522758  165060 node_conditions.go:105] duration metric: took 2.57391ms to run NodePressure ...
	I0617 12:06:08.522773  165060 start.go:240] waiting for startup goroutines ...
	I0617 12:06:08.522787  165060 start.go:245] waiting for cluster config update ...
	I0617 12:06:08.522803  165060 start.go:254] writing updated cluster config ...
	I0617 12:06:08.523139  165060 ssh_runner.go:195] Run: rm -f paused
	I0617 12:06:08.577942  165060 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0617 12:06:08.579946  165060 out.go:177] * Done! kubectl is now configured to use "embed-certs-136195" cluster and "default" namespace by default
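With the "embed-certs-136195" context in place, the kube-system pod set the harness was polling above can be checked by hand. A minimal sketch, assuming kubectl 1.30.x on the PATH and the standard k8s-app=metrics-server label used by the metrics-server manifests:

	kubectl --context embed-certs-136195 -n kube-system get pods
	# metrics-server-569cc877fc-dmhfs is expected to still show 0/1 READY, matching the Pending status logged above
	kubectl --context embed-certs-136195 -n kube-system describe pod -l k8s-app=metrics-server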
	I0617 12:06:08.334463  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:10.335642  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:09.331628  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:11.332586  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:13.332703  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:12.834827  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:15.334721  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:15.333004  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:17.834357  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:17.833756  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:19.835364  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:22.333742  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:20.332127  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:22.832111  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:24.333945  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:26.335021  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:25.332366  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:27.835364  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:28.833758  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:31.334155  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:29.835500  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:32.332236  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:33.833599  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:35.834190  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:34.831122  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:36.833202  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:38.334352  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:40.335399  166103 pod_ready.go:102] pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:40.335423  166103 pod_ready.go:81] duration metric: took 4m0.008367222s for pod "metrics-server-569cc877fc-n2svp" in "kube-system" namespace to be "Ready" ...
	E0617 12:06:40.335433  166103 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0617 12:06:40.335441  166103 pod_ready.go:38] duration metric: took 4m7.419505963s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:06:40.335475  166103 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:06:40.335505  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:06:40.335556  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:06:40.400354  166103 cri.go:89] found id: "5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:40.400384  166103 cri.go:89] found id: ""
	I0617 12:06:40.400394  166103 logs.go:276] 1 containers: [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b]
	I0617 12:06:40.400453  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.405124  166103 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:06:40.405186  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:06:40.440583  166103 cri.go:89] found id: "8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:40.440610  166103 cri.go:89] found id: ""
	I0617 12:06:40.440619  166103 logs.go:276] 1 containers: [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862]
	I0617 12:06:40.440665  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.445086  166103 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:06:40.445141  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:06:40.489676  166103 cri.go:89] found id: "26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:40.489698  166103 cri.go:89] found id: ""
	I0617 12:06:40.489706  166103 logs.go:276] 1 containers: [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323]
	I0617 12:06:40.489752  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.494402  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:06:40.494514  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:06:40.535486  166103 cri.go:89] found id: "2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
	I0617 12:06:40.535517  166103 cri.go:89] found id: ""
	I0617 12:06:40.535527  166103 logs.go:276] 1 containers: [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b]
	I0617 12:06:40.535589  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.543265  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:06:40.543330  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:06:40.579564  166103 cri.go:89] found id: "63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:40.579588  166103 cri.go:89] found id: ""
	I0617 12:06:40.579598  166103 logs.go:276] 1 containers: [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da]
	I0617 12:06:40.579658  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.583865  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:06:40.583928  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:06:40.642408  166103 cri.go:89] found id: "36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:40.642435  166103 cri.go:89] found id: ""
	I0617 12:06:40.642445  166103 logs.go:276] 1 containers: [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685]
	I0617 12:06:40.642509  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.647892  166103 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:06:40.647959  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:06:40.698654  166103 cri.go:89] found id: ""
	I0617 12:06:40.698686  166103 logs.go:276] 0 containers: []
	W0617 12:06:40.698696  166103 logs.go:278] No container was found matching "kindnet"
	I0617 12:06:40.698704  166103 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:06:40.698768  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:06:40.749641  166103 cri.go:89] found id: "adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:40.749663  166103 cri.go:89] found id: "e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:40.749668  166103 cri.go:89] found id: ""
	I0617 12:06:40.749678  166103 logs.go:276] 2 containers: [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc]
	I0617 12:06:40.749742  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.754926  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:40.760126  166103 logs.go:123] Gathering logs for container status ...
	I0617 12:06:40.760152  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:40.804119  166103 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:06:40.804159  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:06:40.942459  166103 logs.go:123] Gathering logs for etcd [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862] ...
	I0617 12:06:40.942495  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:40.994721  166103 logs.go:123] Gathering logs for coredns [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323] ...
	I0617 12:06:40.994761  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:41.037005  166103 logs.go:123] Gathering logs for kube-scheduler [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b] ...
	I0617 12:06:41.037040  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
	I0617 12:06:41.080715  166103 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:06:41.080751  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:06:41.606478  166103 logs.go:123] Gathering logs for storage-provisioner [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195] ...
	I0617 12:06:41.606516  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:41.643963  166103 logs.go:123] Gathering logs for storage-provisioner [e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc] ...
	I0617 12:06:41.644003  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:41.683405  166103 logs.go:123] Gathering logs for kubelet ...
	I0617 12:06:41.683443  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:06:41.737365  166103 logs.go:123] Gathering logs for dmesg ...
	I0617 12:06:41.737400  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:06:41.752552  166103 logs.go:123] Gathering logs for kube-apiserver [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b] ...
	I0617 12:06:41.752582  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:41.804447  166103 logs.go:123] Gathering logs for kube-proxy [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da] ...
	I0617 12:06:41.804480  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:41.847266  166103 logs.go:123] Gathering logs for kube-controller-manager [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685] ...
	I0617 12:06:41.847302  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:39.333111  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:41.836327  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:44.408776  166103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:06:44.427500  166103 api_server.go:72] duration metric: took 4m19.25316479s to wait for apiserver process to appear ...
	I0617 12:06:44.427531  166103 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:06:44.427577  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:06:44.427634  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:06:44.466379  166103 cri.go:89] found id: "5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:44.466408  166103 cri.go:89] found id: ""
	I0617 12:06:44.466418  166103 logs.go:276] 1 containers: [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b]
	I0617 12:06:44.466481  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.470832  166103 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:06:44.470901  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:06:44.511689  166103 cri.go:89] found id: "8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:44.511713  166103 cri.go:89] found id: ""
	I0617 12:06:44.511722  166103 logs.go:276] 1 containers: [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862]
	I0617 12:06:44.511769  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.516221  166103 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:06:44.516303  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:06:44.560612  166103 cri.go:89] found id: "26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:44.560634  166103 cri.go:89] found id: ""
	I0617 12:06:44.560642  166103 logs.go:276] 1 containers: [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323]
	I0617 12:06:44.560695  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.564998  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:06:44.565068  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:06:44.600133  166103 cri.go:89] found id: "2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
	I0617 12:06:44.600155  166103 cri.go:89] found id: ""
	I0617 12:06:44.600164  166103 logs.go:276] 1 containers: [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b]
	I0617 12:06:44.600220  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.605431  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:06:44.605494  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:06:44.648647  166103 cri.go:89] found id: "63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:44.648678  166103 cri.go:89] found id: ""
	I0617 12:06:44.648688  166103 logs.go:276] 1 containers: [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da]
	I0617 12:06:44.648758  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.653226  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:06:44.653307  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:06:44.701484  166103 cri.go:89] found id: "36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:44.701508  166103 cri.go:89] found id: ""
	I0617 12:06:44.701516  166103 logs.go:276] 1 containers: [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685]
	I0617 12:06:44.701572  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.707827  166103 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:06:44.707890  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:06:44.752362  166103 cri.go:89] found id: ""
	I0617 12:06:44.752391  166103 logs.go:276] 0 containers: []
	W0617 12:06:44.752402  166103 logs.go:278] No container was found matching "kindnet"
	I0617 12:06:44.752410  166103 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:06:44.752473  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:06:44.798926  166103 cri.go:89] found id: "adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:44.798955  166103 cri.go:89] found id: "e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:44.798961  166103 cri.go:89] found id: ""
	I0617 12:06:44.798970  166103 logs.go:276] 2 containers: [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc]
	I0617 12:06:44.799038  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.804702  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:44.810673  166103 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:06:44.810702  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:06:44.939596  166103 logs.go:123] Gathering logs for etcd [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862] ...
	I0617 12:06:44.939627  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:44.987902  166103 logs.go:123] Gathering logs for coredns [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323] ...
	I0617 12:06:44.987936  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:45.023931  166103 logs.go:123] Gathering logs for kube-proxy [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da] ...
	I0617 12:06:45.023962  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:45.060432  166103 logs.go:123] Gathering logs for storage-provisioner [e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc] ...
	I0617 12:06:45.060468  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:45.095643  166103 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:06:45.095679  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:06:45.553973  166103 logs.go:123] Gathering logs for kubelet ...
	I0617 12:06:45.554018  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:06:45.611997  166103 logs.go:123] Gathering logs for dmesg ...
	I0617 12:06:45.612036  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:06:45.626973  166103 logs.go:123] Gathering logs for container status ...
	I0617 12:06:45.627002  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:45.671119  166103 logs.go:123] Gathering logs for kube-controller-manager [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685] ...
	I0617 12:06:45.671151  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:45.728097  166103 logs.go:123] Gathering logs for storage-provisioner [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195] ...
	I0617 12:06:45.728133  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:45.765586  166103 logs.go:123] Gathering logs for kube-apiserver [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b] ...
	I0617 12:06:45.765615  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:45.818347  166103 logs.go:123] Gathering logs for kube-scheduler [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b] ...
	I0617 12:06:45.818387  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
	I0617 12:06:43.941225  165698 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0617 12:06:43.941341  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:06:43.941612  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:06:44.331481  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:46.831820  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:48.362826  166103 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8444/healthz ...
	I0617 12:06:48.366936  166103 api_server.go:279] https://192.168.50.125:8444/healthz returned 200:
	ok
	I0617 12:06:48.367973  166103 api_server.go:141] control plane version: v1.30.1
	I0617 12:06:48.367992  166103 api_server.go:131] duration metric: took 3.940452539s to wait for apiserver health ...
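The same health probe can be reproduced from the host against the endpoint logged above; a sketch (the -k flag is an assumption, since the apiserver certificate is signed by the cluster CA rather than a system-trusted one):

	curl -sk https://192.168.50.125:8444/healthz
	# expected response body: ok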
	I0617 12:06:48.367999  166103 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:06:48.368021  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:06:48.368066  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:06:48.404797  166103 cri.go:89] found id: "5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:48.404819  166103 cri.go:89] found id: ""
	I0617 12:06:48.404828  166103 logs.go:276] 1 containers: [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b]
	I0617 12:06:48.404887  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.409105  166103 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:06:48.409162  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:06:48.456233  166103 cri.go:89] found id: "8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:48.456266  166103 cri.go:89] found id: ""
	I0617 12:06:48.456277  166103 logs.go:276] 1 containers: [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862]
	I0617 12:06:48.456336  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.460550  166103 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:06:48.460625  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:06:48.498447  166103 cri.go:89] found id: "26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:48.498472  166103 cri.go:89] found id: ""
	I0617 12:06:48.498481  166103 logs.go:276] 1 containers: [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323]
	I0617 12:06:48.498564  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.503826  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:06:48.503906  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:06:48.554405  166103 cri.go:89] found id: "2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
	I0617 12:06:48.554435  166103 cri.go:89] found id: ""
	I0617 12:06:48.554446  166103 logs.go:276] 1 containers: [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b]
	I0617 12:06:48.554504  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.559175  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:06:48.559240  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:06:48.596764  166103 cri.go:89] found id: "63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:48.596791  166103 cri.go:89] found id: ""
	I0617 12:06:48.596801  166103 logs.go:276] 1 containers: [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da]
	I0617 12:06:48.596863  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.601197  166103 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:06:48.601260  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:06:48.654027  166103 cri.go:89] found id: "36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:48.654053  166103 cri.go:89] found id: ""
	I0617 12:06:48.654061  166103 logs.go:276] 1 containers: [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685]
	I0617 12:06:48.654113  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.659492  166103 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:06:48.659557  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:06:48.706749  166103 cri.go:89] found id: ""
	I0617 12:06:48.706777  166103 logs.go:276] 0 containers: []
	W0617 12:06:48.706786  166103 logs.go:278] No container was found matching "kindnet"
	I0617 12:06:48.706794  166103 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:06:48.706859  166103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:06:48.750556  166103 cri.go:89] found id: "adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:48.750588  166103 cri.go:89] found id: "e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:48.750594  166103 cri.go:89] found id: ""
	I0617 12:06:48.750607  166103 logs.go:276] 2 containers: [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc]
	I0617 12:06:48.750671  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.755368  166103 ssh_runner.go:195] Run: which crictl
	I0617 12:06:48.760128  166103 logs.go:123] Gathering logs for kube-apiserver [5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b] ...
	I0617 12:06:48.760154  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b11bf1d6c96bc30e1f4bfcc63d526720845813e10554a9fdd50c6ad0ce2487b"
	I0617 12:06:48.802187  166103 logs.go:123] Gathering logs for etcd [8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862] ...
	I0617 12:06:48.802224  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bfeb1ae74a6b08574fa0aa5c5958f207833a29d2d97182b010a06b59c6d3862"
	I0617 12:06:48.861041  166103 logs.go:123] Gathering logs for kube-controller-manager [36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685] ...
	I0617 12:06:48.861076  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36ad2102b1a1318bc53a2d90679726a30194243a995d334d408441c498ce4685"
	I0617 12:06:48.917864  166103 logs.go:123] Gathering logs for storage-provisioner [e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc] ...
	I0617 12:06:48.917902  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1a38df1bc100577b510b5fd6941a1385ec92539b474bfcd418d7c4f3b1301dc"
	I0617 12:06:48.963069  166103 logs.go:123] Gathering logs for container status ...
	I0617 12:06:48.963099  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:06:49.012109  166103 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:06:49.012149  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:06:49.119880  166103 logs.go:123] Gathering logs for dmesg ...
	I0617 12:06:49.119915  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:06:49.136461  166103 logs.go:123] Gathering logs for coredns [26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323] ...
	I0617 12:06:49.136497  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26b8e036867db17071bc8bf8c34d903cd36173c710ede919db75efc0e081e323"
	I0617 12:06:49.177339  166103 logs.go:123] Gathering logs for kube-scheduler [2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b] ...
	I0617 12:06:49.177377  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fc9bd28673764af8c35d551e32d00c9bf522f3d291a6bf5ddbc4c6840550e6b"
	I0617 12:06:49.219101  166103 logs.go:123] Gathering logs for kube-proxy [63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da] ...
	I0617 12:06:49.219135  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63dba5e023e5a5d0e3ecb59011524686124ee377a2d986f983bffc3d661d65da"
	I0617 12:06:49.256646  166103 logs.go:123] Gathering logs for storage-provisioner [adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195] ...
	I0617 12:06:49.256687  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adb0f4294c844fe1b095435f33a3600c4e91bb767a247a67db3476df24514195"
	I0617 12:06:49.302208  166103 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:06:49.302243  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:06:49.653713  166103 logs.go:123] Gathering logs for kubelet ...
	I0617 12:06:49.653758  166103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:06:52.217069  166103 system_pods.go:59] 8 kube-system pods found
	I0617 12:06:52.217102  166103 system_pods.go:61] "coredns-7db6d8ff4d-mnw24" [1e6c4ff3-f0dc-43da-abd8-baaed7dca40c] Running
	I0617 12:06:52.217107  166103 system_pods.go:61] "etcd-default-k8s-diff-port-991309" [820a4f27-cf83-4edb-a2ea-edba6673d851] Running
	I0617 12:06:52.217111  166103 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-991309" [26e6c19d-6f70-4924-83f5-563c8508c9e3] Running
	I0617 12:06:52.217115  166103 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-991309" [01e7c468-98a6-48f3-a158-59e97fa8279c] Running
	I0617 12:06:52.217119  166103 system_pods.go:61] "kube-proxy-jn5kp" [d6935148-7ee8-4655-8327-9f1ee4c933de] Running
	I0617 12:06:52.217122  166103 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-991309" [53ecd22c-05cf-48a5-b7e5-925392085f7a] Running
	I0617 12:06:52.217128  166103 system_pods.go:61] "metrics-server-569cc877fc-n2svp" [5b637d97-3183-4324-98cf-dd69a2968578] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:06:52.217134  166103 system_pods.go:61] "storage-provisioner" [92b20aec-29c2-4256-86be-7f58f66585dd] Running
	I0617 12:06:52.217145  166103 system_pods.go:74] duration metric: took 3.849140024s to wait for pod list to return data ...
	I0617 12:06:52.217152  166103 default_sa.go:34] waiting for default service account to be created ...
	I0617 12:06:52.219308  166103 default_sa.go:45] found service account: "default"
	I0617 12:06:52.219330  166103 default_sa.go:55] duration metric: took 2.172323ms for default service account to be created ...
	I0617 12:06:52.219339  166103 system_pods.go:116] waiting for k8s-apps to be running ...
	I0617 12:06:52.224239  166103 system_pods.go:86] 8 kube-system pods found
	I0617 12:06:52.224265  166103 system_pods.go:89] "coredns-7db6d8ff4d-mnw24" [1e6c4ff3-f0dc-43da-abd8-baaed7dca40c] Running
	I0617 12:06:52.224270  166103 system_pods.go:89] "etcd-default-k8s-diff-port-991309" [820a4f27-cf83-4edb-a2ea-edba6673d851] Running
	I0617 12:06:52.224276  166103 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-991309" [26e6c19d-6f70-4924-83f5-563c8508c9e3] Running
	I0617 12:06:52.224280  166103 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-991309" [01e7c468-98a6-48f3-a158-59e97fa8279c] Running
	I0617 12:06:52.224284  166103 system_pods.go:89] "kube-proxy-jn5kp" [d6935148-7ee8-4655-8327-9f1ee4c933de] Running
	I0617 12:06:52.224288  166103 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-991309" [53ecd22c-05cf-48a5-b7e5-925392085f7a] Running
	I0617 12:06:52.224299  166103 system_pods.go:89] "metrics-server-569cc877fc-n2svp" [5b637d97-3183-4324-98cf-dd69a2968578] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:06:52.224305  166103 system_pods.go:89] "storage-provisioner" [92b20aec-29c2-4256-86be-7f58f66585dd] Running
	I0617 12:06:52.224319  166103 system_pods.go:126] duration metric: took 4.973603ms to wait for k8s-apps to be running ...
	I0617 12:06:52.224332  166103 system_svc.go:44] waiting for kubelet service to be running ....
	I0617 12:06:52.224380  166103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:06:52.241121  166103 system_svc.go:56] duration metric: took 16.776061ms WaitForService to wait for kubelet
	I0617 12:06:52.241156  166103 kubeadm.go:576] duration metric: took 4m27.066827271s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 12:06:52.241181  166103 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:06:52.245359  166103 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:06:52.245407  166103 node_conditions.go:123] node cpu capacity is 2
	I0617 12:06:52.245423  166103 node_conditions.go:105] duration metric: took 4.235898ms to run NodePressure ...
	I0617 12:06:52.245440  166103 start.go:240] waiting for startup goroutines ...
	I0617 12:06:52.245449  166103 start.go:245] waiting for cluster config update ...
	I0617 12:06:52.245462  166103 start.go:254] writing updated cluster config ...
	I0617 12:06:52.245969  166103 ssh_runner.go:195] Run: rm -f paused
	I0617 12:06:52.299326  166103 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0617 12:06:52.301413  166103 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-991309" cluster and "default" namespace by default
	I0617 12:06:48.942159  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:06:48.942434  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:06:48.835113  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:51.331395  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:53.331551  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:55.332455  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:57.835143  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:06:58.942977  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:06:58.943290  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:07:00.331823  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:07:02.332214  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:07:04.831284  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:07:06.832082  164809 pod_ready.go:102] pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace has status "Ready":"False"
	I0617 12:07:07.325414  164809 pod_ready.go:81] duration metric: took 4m0.000322555s for pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace to be "Ready" ...
	E0617 12:07:07.325446  164809 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-97tqn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0617 12:07:07.325464  164809 pod_ready.go:38] duration metric: took 4m12.035995337s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:07:07.325494  164809 kubeadm.go:591] duration metric: took 4m19.041266463s to restartPrimaryControlPlane
	W0617 12:07:07.325556  164809 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0617 12:07:07.325587  164809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0617 12:07:18.944149  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:07:18.944368  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:07:38.980378  164809 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.654762508s)
	I0617 12:07:38.980451  164809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:07:38.997845  164809 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 12:07:39.009456  164809 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:07:39.020407  164809 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:07:39.020430  164809 kubeadm.go:156] found existing configuration files:
	
	I0617 12:07:39.020472  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:07:39.030323  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:07:39.030376  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:07:39.040298  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:07:39.049715  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:07:39.049757  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:07:39.060493  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:07:39.069921  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:07:39.069973  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:07:39.080049  164809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:07:39.089524  164809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:07:39.089569  164809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
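Taken together, the per-file checks above amount to: keep a kubeconfig-style file under /etc/kubernetes only if it already points at the expected control-plane endpoint, otherwise remove it. A condensed sketch of that sequence:

	# mirrors the grep/rm pairs logged above, one file at a time
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done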
	I0617 12:07:39.099082  164809 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0617 12:07:39.154963  164809 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0617 12:07:39.155083  164809 kubeadm.go:309] [preflight] Running pre-flight checks
	I0617 12:07:39.286616  164809 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0617 12:07:39.286809  164809 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0617 12:07:39.286977  164809 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0617 12:07:39.487542  164809 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0617 12:07:39.489554  164809 out.go:204]   - Generating certificates and keys ...
	I0617 12:07:39.489665  164809 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0617 12:07:39.489732  164809 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0617 12:07:39.489855  164809 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0617 12:07:39.489969  164809 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0617 12:07:39.490088  164809 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0617 12:07:39.490187  164809 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0617 12:07:39.490274  164809 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0617 12:07:39.490386  164809 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0617 12:07:39.490508  164809 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0617 12:07:39.490643  164809 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0617 12:07:39.490750  164809 kubeadm.go:309] [certs] Using the existing "sa" key
	I0617 12:07:39.490849  164809 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0617 12:07:39.565788  164809 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0617 12:07:39.643443  164809 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0617 12:07:39.765615  164809 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0617 12:07:39.851182  164809 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0617 12:07:40.041938  164809 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0617 12:07:40.042576  164809 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0617 12:07:40.045112  164809 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0617 12:07:40.047144  164809 out.go:204]   - Booting up control plane ...
	I0617 12:07:40.047265  164809 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0617 12:07:40.047374  164809 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0617 12:07:40.047995  164809 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0617 12:07:40.070163  164809 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0617 12:07:40.071308  164809 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0617 12:07:40.071415  164809 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0617 12:07:40.204578  164809 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0617 12:07:40.204698  164809 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0617 12:07:41.210782  164809 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.0065421s
	I0617 12:07:41.210902  164809 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0617 12:07:45.713194  164809 kubeadm.go:309] [api-check] The API server is healthy after 4.501871798s
	I0617 12:07:45.735311  164809 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0617 12:07:45.760405  164809 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0617 12:07:45.795429  164809 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0617 12:07:45.795770  164809 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-152830 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0617 12:07:45.816446  164809 kubeadm.go:309] [bootstrap-token] Using token: ryfqxd.olkegn8a1unpvnbq
	I0617 12:07:45.817715  164809 out.go:204]   - Configuring RBAC rules ...
	I0617 12:07:45.817890  164809 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0617 12:07:45.826422  164809 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0617 12:07:45.852291  164809 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0617 12:07:45.867538  164809 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0617 12:07:45.880697  164809 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0617 12:07:45.887707  164809 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0617 12:07:46.120211  164809 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0617 12:07:46.593168  164809 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0617 12:07:47.119377  164809 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0617 12:07:47.120840  164809 kubeadm.go:309] 
	I0617 12:07:47.120933  164809 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0617 12:07:47.120947  164809 kubeadm.go:309] 
	I0617 12:07:47.121057  164809 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0617 12:07:47.121069  164809 kubeadm.go:309] 
	I0617 12:07:47.121123  164809 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0617 12:07:47.124361  164809 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0617 12:07:47.124443  164809 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0617 12:07:47.124464  164809 kubeadm.go:309] 
	I0617 12:07:47.124538  164809 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0617 12:07:47.124550  164809 kubeadm.go:309] 
	I0617 12:07:47.124607  164809 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0617 12:07:47.124617  164809 kubeadm.go:309] 
	I0617 12:07:47.124724  164809 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0617 12:07:47.124838  164809 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0617 12:07:47.124938  164809 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0617 12:07:47.124949  164809 kubeadm.go:309] 
	I0617 12:07:47.125085  164809 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0617 12:07:47.125191  164809 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0617 12:07:47.125203  164809 kubeadm.go:309] 
	I0617 12:07:47.125343  164809 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ryfqxd.olkegn8a1unpvnbq \
	I0617 12:07:47.125479  164809 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a750c130b3df91ed6d57229f5a5d5a2ee0acd56a757f499599f368bc07dbf207 \
	I0617 12:07:47.125510  164809 kubeadm.go:309] 	--control-plane 
	I0617 12:07:47.125518  164809 kubeadm.go:309] 
	I0617 12:07:47.125616  164809 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0617 12:07:47.125627  164809 kubeadm.go:309] 
	I0617 12:07:47.125724  164809 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ryfqxd.olkegn8a1unpvnbq \
	I0617 12:07:47.125852  164809 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a750c130b3df91ed6d57229f5a5d5a2ee0acd56a757f499599f368bc07dbf207 
	I0617 12:07:47.126915  164809 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0617 12:07:47.126966  164809 cni.go:84] Creating CNI manager for ""
	I0617 12:07:47.126983  164809 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 12:07:47.128899  164809 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0617 12:07:47.130229  164809 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0617 12:07:47.142301  164809 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
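The contents of the 496-byte conflist are not echoed in the log; to confirm it was written, one could inspect the node directly. A sketch, assuming the `minikube ssh` command passthrough form and the profile name shown above:

	minikube -p no-preload-152830 ssh -- sudo ls -l /etc/cni/net.d/
	minikube -p no-preload-152830 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist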
	I0617 12:07:47.163380  164809 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0617 12:07:47.163500  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:47.163503  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-152830 minikube.k8s.io/updated_at=2024_06_17T12_07_47_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6 minikube.k8s.io/name=no-preload-152830 minikube.k8s.io/primary=true
	I0617 12:07:47.375089  164809 ops.go:34] apiserver oom_adj: -16
	I0617 12:07:47.375266  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:47.875477  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:48.375626  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:48.876185  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:49.375621  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:49.875597  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:50.376188  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:50.875983  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:51.375537  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:51.876321  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:52.375920  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:52.876348  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:53.375623  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:53.875369  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:54.375747  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:54.875581  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:55.376244  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:55.875866  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:56.376285  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:56.876228  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:57.375990  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:57.875392  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:58.946943  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:07:58.947220  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:07:58.947233  165698 kubeadm.go:309] 
	I0617 12:07:58.947316  165698 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0617 12:07:58.947393  165698 kubeadm.go:309] 		timed out waiting for the condition
	I0617 12:07:58.947406  165698 kubeadm.go:309] 
	I0617 12:07:58.947449  165698 kubeadm.go:309] 	This error is likely caused by:
	I0617 12:07:58.947528  165698 kubeadm.go:309] 		- The kubelet is not running
	I0617 12:07:58.947690  165698 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0617 12:07:58.947699  165698 kubeadm.go:309] 
	I0617 12:07:58.947860  165698 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0617 12:07:58.947924  165698 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0617 12:07:58.947976  165698 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0617 12:07:58.947991  165698 kubeadm.go:309] 
	I0617 12:07:58.948132  165698 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0617 12:07:58.948247  165698 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0617 12:07:58.948260  165698 kubeadm.go:309] 
	I0617 12:07:58.948406  165698 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0617 12:07:58.948539  165698 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0617 12:07:58.948639  165698 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0617 12:07:58.948740  165698 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0617 12:07:58.948750  165698 kubeadm.go:309] 
	I0617 12:07:58.949270  165698 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0617 12:07:58.949403  165698 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0617 12:07:58.949508  165698 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0617 12:07:58.949630  165698 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
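	The kubeadm output above already names the diagnostics it expects to be run on the node. A minimal sketch of collecting them in one pass over minikube ssh follows; <profile> is a placeholder for the affected minikube profile (its name is not stated in this excerpt), and the crictl socket path is taken verbatim from the log.

	# Hypothetical one-shot diagnostics for the failing control plane; <profile> is a placeholder.
	PROFILE=<profile>
	# Kubelet service state and recent logs, as suggested by kubeadm.
	minikube -p "$PROFILE" ssh -- sudo systemctl status kubelet --no-pager
	minikube -p "$PROFILE" ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 100
	# Control-plane containers under CRI-O, filtered exactly as the kubeadm hint suggests.
	minikube -p "$PROFILE" ssh -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"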
	
	I0617 12:07:58.949694  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0617 12:07:59.418622  165698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:07:59.435367  165698 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 12:07:59.449365  165698 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 12:07:59.449384  165698 kubeadm.go:156] found existing configuration files:
	
	I0617 12:07:59.449430  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 12:07:59.461411  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 12:07:59.461478  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 12:07:59.471262  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 12:07:59.480591  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 12:07:59.480640  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 12:07:59.490152  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 12:07:59.499248  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 12:07:59.499300  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 12:07:59.508891  165698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 12:07:59.518114  165698 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 12:07:59.518152  165698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 12:07:59.528190  165698 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0617 12:07:59.592831  165698 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0617 12:07:59.592949  165698 kubeadm.go:309] [preflight] Running pre-flight checks
	I0617 12:07:59.752802  165698 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0617 12:07:59.752947  165698 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0617 12:07:59.753079  165698 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0617 12:07:59.984221  165698 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0617 12:07:58.375522  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:58.876221  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:59.375941  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:07:59.875924  164809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 12:08:00.063788  164809 kubeadm.go:1107] duration metric: took 12.900376954s to wait for elevateKubeSystemPrivileges
	W0617 12:08:00.063860  164809 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0617 12:08:00.063871  164809 kubeadm.go:393] duration metric: took 5m11.831587226s to StartCluster
	I0617 12:08:00.063895  164809 settings.go:142] acquiring lock: {Name:mkf6da6d5dcdf32cef469c2b75da17d11fa1e39e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:08:00.063996  164809 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 12:08:00.066593  164809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-112967/kubeconfig: {Name:mkf81bd1831c0194f784e5c176b265c5061bea5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:08:00.066922  164809 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0617 12:08:00.068556  164809 out.go:177] * Verifying Kubernetes components...
	I0617 12:08:00.067029  164809 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0617 12:08:00.067131  164809 config.go:182] Loaded profile config "no-preload-152830": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 12:08:00.069969  164809 addons.go:69] Setting storage-provisioner=true in profile "no-preload-152830"
	I0617 12:08:00.069983  164809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:08:00.069992  164809 addons.go:69] Setting metrics-server=true in profile "no-preload-152830"
	I0617 12:08:00.070015  164809 addons.go:234] Setting addon metrics-server=true in "no-preload-152830"
	I0617 12:08:00.070014  164809 addons.go:234] Setting addon storage-provisioner=true in "no-preload-152830"
	W0617 12:08:00.070021  164809 addons.go:243] addon metrics-server should already be in state true
	W0617 12:08:00.070024  164809 addons.go:243] addon storage-provisioner should already be in state true
	I0617 12:08:00.070055  164809 host.go:66] Checking if "no-preload-152830" exists ...
	I0617 12:08:00.070057  164809 host.go:66] Checking if "no-preload-152830" exists ...
	I0617 12:08:00.069984  164809 addons.go:69] Setting default-storageclass=true in profile "no-preload-152830"
	I0617 12:08:00.070116  164809 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-152830"
	I0617 12:08:00.070426  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.070428  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.070443  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.070451  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.070475  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.070494  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.088451  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36453
	I0617 12:08:00.089105  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.089673  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.089700  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.090074  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.090673  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.090723  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.091118  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33445
	I0617 12:08:00.091150  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44157
	I0617 12:08:00.091756  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.091880  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.092306  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.092327  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.092470  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.092487  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.093006  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.093081  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.093169  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetState
	I0617 12:08:00.093683  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.093722  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.096819  164809 addons.go:234] Setting addon default-storageclass=true in "no-preload-152830"
	W0617 12:08:00.096839  164809 addons.go:243] addon default-storageclass should already be in state true
	I0617 12:08:00.096868  164809 host.go:66] Checking if "no-preload-152830" exists ...
	I0617 12:08:00.097223  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.097252  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.110063  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33623
	I0617 12:08:00.110843  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.111489  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.111509  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.112419  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.112633  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetState
	I0617 12:08:00.112859  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39555
	I0617 12:08:00.113245  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.113927  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.113946  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.114470  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.114758  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:08:00.116377  164809 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0617 12:08:00.115146  164809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 12:08:00.117266  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37965
	I0617 12:08:00.117647  164809 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0617 12:08:00.117663  164809 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0617 12:08:00.117674  164809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 12:08:00.117681  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:08:00.118504  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.119076  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.119091  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.119440  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.119755  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetState
	I0617 12:08:00.121396  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.121620  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:08:00.123146  164809 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:07:59.986165  165698 out.go:204]   - Generating certificates and keys ...
	I0617 12:07:59.986270  165698 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0617 12:07:59.986391  165698 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0617 12:07:59.986522  165698 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0617 12:07:59.986606  165698 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0617 12:07:59.986717  165698 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0617 12:07:59.986795  165698 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0617 12:07:59.986887  165698 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0617 12:07:59.986972  165698 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0617 12:07:59.987081  165698 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0617 12:07:59.987191  165698 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0617 12:07:59.987250  165698 kubeadm.go:309] [certs] Using the existing "sa" key
	I0617 12:07:59.987331  165698 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0617 12:08:00.155668  165698 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0617 12:08:00.303780  165698 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0617 12:08:00.369907  165698 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0617 12:08:00.506550  165698 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0617 12:08:00.529943  165698 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0617 12:08:00.531684  165698 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0617 12:08:00.531756  165698 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0617 12:08:00.667972  165698 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0617 12:08:00.122003  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:08:00.122146  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:08:00.124748  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.124895  164809 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:08:00.124914  164809 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0617 12:08:00.124934  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:08:00.124957  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:08:00.125142  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:08:00.125446  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:08:00.128559  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.128991  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:08:00.129011  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.129239  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:08:00.129434  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:08:00.129537  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:08:00.129640  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:08:00.142435  164809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39073
	I0617 12:08:00.142915  164809 main.go:141] libmachine: () Calling .GetVersion
	I0617 12:08:00.143550  164809 main.go:141] libmachine: Using API Version  1
	I0617 12:08:00.143583  164809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 12:08:00.143946  164809 main.go:141] libmachine: () Calling .GetMachineName
	I0617 12:08:00.144168  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetState
	I0617 12:08:00.145972  164809 main.go:141] libmachine: (no-preload-152830) Calling .DriverName
	I0617 12:08:00.146165  164809 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0617 12:08:00.146178  164809 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0617 12:08:00.146196  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHHostname
	I0617 12:08:00.149316  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.149720  164809 main.go:141] libmachine: (no-preload-152830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1a:fb", ip: ""} in network mk-no-preload-152830: {Iface:virbr2 ExpiryTime:2024-06-17 13:02:21 +0000 UTC Type:0 Mac:52:54:00:c0:1a:fb Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:no-preload-152830 Clientid:01:52:54:00:c0:1a:fb}
	I0617 12:08:00.149743  164809 main.go:141] libmachine: (no-preload-152830) DBG | domain no-preload-152830 has defined IP address 192.168.39.173 and MAC address 52:54:00:c0:1a:fb in network mk-no-preload-152830
	I0617 12:08:00.149926  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHPort
	I0617 12:08:00.150106  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHKeyPath
	I0617 12:08:00.150273  164809 main.go:141] libmachine: (no-preload-152830) Calling .GetSSHUsername
	I0617 12:08:00.150434  164809 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/no-preload-152830/id_rsa Username:docker}
	I0617 12:08:00.294731  164809 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:08:00.317727  164809 node_ready.go:35] waiting up to 6m0s for node "no-preload-152830" to be "Ready" ...
	I0617 12:08:00.346507  164809 node_ready.go:49] node "no-preload-152830" has status "Ready":"True"
	I0617 12:08:00.346533  164809 node_ready.go:38] duration metric: took 28.776898ms for node "no-preload-152830" to be "Ready" ...
	I0617 12:08:00.346544  164809 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:08:00.404097  164809 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gjt84" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:00.412303  164809 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0617 12:08:00.412325  164809 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0617 12:08:00.415269  164809 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:08:00.438024  164809 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0617 12:08:00.514528  164809 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0617 12:08:00.514561  164809 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0617 12:08:00.629109  164809 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:08:00.629141  164809 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0617 12:08:00.677084  164809 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:08:01.113979  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.114007  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.114432  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.114445  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.114507  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.114526  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.114536  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.114846  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.114866  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.117124  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.117141  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.117437  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.117457  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.117478  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.117496  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.117508  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.117821  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.117858  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.117882  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.125648  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.125668  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.125998  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.126020  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.126030  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.325217  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.325242  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.325579  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.325633  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.325669  164809 main.go:141] libmachine: Making call to close driver server
	I0617 12:08:01.325669  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.325682  164809 main.go:141] libmachine: (no-preload-152830) Calling .Close
	I0617 12:08:01.325960  164809 main.go:141] libmachine: Successfully made call to close driver server
	I0617 12:08:01.325977  164809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0617 12:08:01.326007  164809 addons.go:475] Verifying addon metrics-server=true in "no-preload-152830"
	I0617 12:08:01.326037  164809 main.go:141] libmachine: (no-preload-152830) DBG | Closing plugin on server side
	I0617 12:08:01.327744  164809 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0617 12:08:00.671036  165698 out.go:204]   - Booting up control plane ...
	I0617 12:08:00.671171  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0617 12:08:00.677241  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0617 12:08:00.678999  165698 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0617 12:08:00.681119  165698 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0617 12:08:00.684535  165698 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0617 12:08:01.329155  164809 addons.go:510] duration metric: took 1.262127108s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0617 12:08:02.425731  164809 pod_ready.go:102] pod "coredns-7db6d8ff4d-gjt84" in "kube-system" namespace has status "Ready":"False"
	I0617 12:08:03.910467  164809 pod_ready.go:92] pod "coredns-7db6d8ff4d-gjt84" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:03.910494  164809 pod_ready.go:81] duration metric: took 3.506370946s for pod "coredns-7db6d8ff4d-gjt84" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.910508  164809 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vz7dg" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.916309  164809 pod_ready.go:92] pod "coredns-7db6d8ff4d-vz7dg" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:03.916331  164809 pod_ready.go:81] duration metric: took 5.814812ms for pod "coredns-7db6d8ff4d-vz7dg" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.916340  164809 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.920834  164809 pod_ready.go:92] pod "etcd-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:03.920862  164809 pod_ready.go:81] duration metric: took 4.51438ms for pod "etcd-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.920874  164809 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.924955  164809 pod_ready.go:92] pod "kube-apiserver-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:03.924973  164809 pod_ready.go:81] duration metric: took 4.09301ms for pod "kube-apiserver-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.924982  164809 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.929301  164809 pod_ready.go:92] pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:03.929318  164809 pod_ready.go:81] duration metric: took 4.33061ms for pod "kube-controller-manager-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:03.929326  164809 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:04.308546  164809 pod_ready.go:92] pod "kube-scheduler-no-preload-152830" in "kube-system" namespace has status "Ready":"True"
	I0617 12:08:04.308570  164809 pod_ready.go:81] duration metric: took 379.237147ms for pod "kube-scheduler-no-preload-152830" in "kube-system" namespace to be "Ready" ...
	I0617 12:08:04.308578  164809 pod_ready.go:38] duration metric: took 3.962022714s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:08:04.308594  164809 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:08:04.308644  164809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:08:04.327383  164809 api_server.go:72] duration metric: took 4.260420928s to wait for apiserver process to appear ...
	I0617 12:08:04.327408  164809 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:08:04.327426  164809 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0617 12:08:04.332321  164809 api_server.go:279] https://192.168.39.173:8443/healthz returned 200:
	ok
	I0617 12:08:04.333390  164809 api_server.go:141] control plane version: v1.30.1
	I0617 12:08:04.333412  164809 api_server.go:131] duration metric: took 5.998312ms to wait for apiserver health ...
	I0617 12:08:04.333420  164809 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 12:08:04.512267  164809 system_pods.go:59] 9 kube-system pods found
	I0617 12:08:04.512298  164809 system_pods.go:61] "coredns-7db6d8ff4d-gjt84" [979c7339-3a4c-4bc8-8586-4d9da42339ae] Running
	I0617 12:08:04.512302  164809 system_pods.go:61] "coredns-7db6d8ff4d-vz7dg" [53c5188e-bc44-4aed-a989-ef3e2379c27b] Running
	I0617 12:08:04.512306  164809 system_pods.go:61] "etcd-no-preload-152830" [2b82d709-0776-470a-a538-f132b84be2e0] Running
	I0617 12:08:04.512310  164809 system_pods.go:61] "kube-apiserver-no-preload-152830" [e40c7c7b-b029-4f65-ac36-f4ff95eabc23] Running
	I0617 12:08:04.512313  164809 system_pods.go:61] "kube-controller-manager-no-preload-152830" [c2adec58-05a4-4993-b9a3-28f9ef519a63] Running
	I0617 12:08:04.512317  164809 system_pods.go:61] "kube-proxy-6c4hm" [a9830236-af96-437f-ad07-494b25f1a90e] Running
	I0617 12:08:04.512319  164809 system_pods.go:61] "kube-scheduler-no-preload-152830" [876671da-097b-43c1-9055-95c2ed7620aa] Running
	I0617 12:08:04.512325  164809 system_pods.go:61] "metrics-server-569cc877fc-zllzk" [e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:08:04.512329  164809 system_pods.go:61] "storage-provisioner" [b6cc7cdc-43f4-40c4-a202-5674fcdcedd0] Running
	I0617 12:08:04.512340  164809 system_pods.go:74] duration metric: took 178.914377ms to wait for pod list to return data ...
	I0617 12:08:04.512347  164809 default_sa.go:34] waiting for default service account to be created ...
	I0617 12:08:04.707834  164809 default_sa.go:45] found service account: "default"
	I0617 12:08:04.707874  164809 default_sa.go:55] duration metric: took 195.518331ms for default service account to be created ...
	I0617 12:08:04.707886  164809 system_pods.go:116] waiting for k8s-apps to be running ...
	I0617 12:08:04.916143  164809 system_pods.go:86] 9 kube-system pods found
	I0617 12:08:04.916173  164809 system_pods.go:89] "coredns-7db6d8ff4d-gjt84" [979c7339-3a4c-4bc8-8586-4d9da42339ae] Running
	I0617 12:08:04.916178  164809 system_pods.go:89] "coredns-7db6d8ff4d-vz7dg" [53c5188e-bc44-4aed-a989-ef3e2379c27b] Running
	I0617 12:08:04.916183  164809 system_pods.go:89] "etcd-no-preload-152830" [2b82d709-0776-470a-a538-f132b84be2e0] Running
	I0617 12:08:04.916187  164809 system_pods.go:89] "kube-apiserver-no-preload-152830" [e40c7c7b-b029-4f65-ac36-f4ff95eabc23] Running
	I0617 12:08:04.916191  164809 system_pods.go:89] "kube-controller-manager-no-preload-152830" [c2adec58-05a4-4993-b9a3-28f9ef519a63] Running
	I0617 12:08:04.916195  164809 system_pods.go:89] "kube-proxy-6c4hm" [a9830236-af96-437f-ad07-494b25f1a90e] Running
	I0617 12:08:04.916199  164809 system_pods.go:89] "kube-scheduler-no-preload-152830" [876671da-097b-43c1-9055-95c2ed7620aa] Running
	I0617 12:08:04.916211  164809 system_pods.go:89] "metrics-server-569cc877fc-zllzk" [e5ad3527-a3d7-49e9-b2b0-fdea32a84bf1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0617 12:08:04.916219  164809 system_pods.go:89] "storage-provisioner" [b6cc7cdc-43f4-40c4-a202-5674fcdcedd0] Running
	I0617 12:08:04.916231  164809 system_pods.go:126] duration metric: took 208.336851ms to wait for k8s-apps to be running ...
	I0617 12:08:04.916245  164809 system_svc.go:44] waiting for kubelet service to be running ....
	I0617 12:08:04.916306  164809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:08:04.933106  164809 system_svc.go:56] duration metric: took 16.850122ms WaitForService to wait for kubelet
	I0617 12:08:04.933135  164809 kubeadm.go:576] duration metric: took 4.866178671s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 12:08:04.933159  164809 node_conditions.go:102] verifying NodePressure condition ...
	I0617 12:08:05.108094  164809 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0617 12:08:05.108120  164809 node_conditions.go:123] node cpu capacity is 2
	I0617 12:08:05.108133  164809 node_conditions.go:105] duration metric: took 174.968414ms to run NodePressure ...
	I0617 12:08:05.108148  164809 start.go:240] waiting for startup goroutines ...
	I0617 12:08:05.108160  164809 start.go:245] waiting for cluster config update ...
	I0617 12:08:05.108173  164809 start.go:254] writing updated cluster config ...
	I0617 12:08:05.108496  164809 ssh_runner.go:195] Run: rm -f paused
	I0617 12:08:05.160610  164809 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0617 12:08:05.162777  164809 out.go:177] * Done! kubectl is now configured to use "no-preload-152830" cluster and "default" namespace by default
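	At this point the no-preload-152830 start has completed, and the log above has already verified the apiserver healthz endpoint and the kube-system pods (metrics-server still Pending on its image). A hedged sketch of repeating those checks by hand, assuming the "no-preload-152830" context that the run just wrote to the kubeconfig:

	# Assumes the kubectl context created by the run above; adjust if the kubeconfig differs.
	kubectl --context no-preload-152830 get nodes
	kubectl --context no-preload-152830 -n kube-system get pods
	# Same healthz probe the log performs, against the apiserver address it reports; -k skips cert verification.
	curl -k https://192.168.39.173:8443/healthz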
	I0617 12:08:40.686610  165698 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0617 12:08:40.686950  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:08:40.687194  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:08:45.687594  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:08:45.687820  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:08:55.688285  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:08:55.688516  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:09:15.689306  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:09:15.689556  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:09:55.688872  165698 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0617 12:09:55.689162  165698 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0617 12:09:55.689206  165698 kubeadm.go:309] 
	I0617 12:09:55.689284  165698 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0617 12:09:55.689342  165698 kubeadm.go:309] 		timed out waiting for the condition
	I0617 12:09:55.689354  165698 kubeadm.go:309] 
	I0617 12:09:55.689418  165698 kubeadm.go:309] 	This error is likely caused by:
	I0617 12:09:55.689480  165698 kubeadm.go:309] 		- The kubelet is not running
	I0617 12:09:55.689632  165698 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0617 12:09:55.689657  165698 kubeadm.go:309] 
	I0617 12:09:55.689791  165698 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0617 12:09:55.689844  165698 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0617 12:09:55.689916  165698 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0617 12:09:55.689926  165698 kubeadm.go:309] 
	I0617 12:09:55.690059  165698 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0617 12:09:55.690140  165698 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0617 12:09:55.690159  165698 kubeadm.go:309] 
	I0617 12:09:55.690258  165698 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0617 12:09:55.690343  165698 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0617 12:09:55.690434  165698 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0617 12:09:55.690530  165698 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0617 12:09:55.690546  165698 kubeadm.go:309] 
	I0617 12:09:55.691495  165698 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0617 12:09:55.691595  165698 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0617 12:09:55.691708  165698 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0617 12:09:55.691787  165698 kubeadm.go:393] duration metric: took 7m57.151326537s to StartCluster
	I0617 12:09:55.691844  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:09:55.691904  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:09:55.746514  165698 cri.go:89] found id: ""
	I0617 12:09:55.746550  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.746563  165698 logs.go:278] No container was found matching "kube-apiserver"
	I0617 12:09:55.746572  165698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0617 12:09:55.746636  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:09:55.789045  165698 cri.go:89] found id: ""
	I0617 12:09:55.789083  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.789095  165698 logs.go:278] No container was found matching "etcd"
	I0617 12:09:55.789103  165698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0617 12:09:55.789169  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:09:55.829492  165698 cri.go:89] found id: ""
	I0617 12:09:55.829533  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.829542  165698 logs.go:278] No container was found matching "coredns"
	I0617 12:09:55.829547  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:09:55.829614  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:09:55.865213  165698 cri.go:89] found id: ""
	I0617 12:09:55.865246  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.865262  165698 logs.go:278] No container was found matching "kube-scheduler"
	I0617 12:09:55.865267  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:09:55.865318  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:09:55.904067  165698 cri.go:89] found id: ""
	I0617 12:09:55.904102  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.904113  165698 logs.go:278] No container was found matching "kube-proxy"
	I0617 12:09:55.904122  165698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:09:55.904187  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:09:55.938441  165698 cri.go:89] found id: ""
	I0617 12:09:55.938471  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.938478  165698 logs.go:278] No container was found matching "kube-controller-manager"
	I0617 12:09:55.938487  165698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0617 12:09:55.938538  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:09:55.975669  165698 cri.go:89] found id: ""
	I0617 12:09:55.975710  165698 logs.go:276] 0 containers: []
	W0617 12:09:55.975723  165698 logs.go:278] No container was found matching "kindnet"
	I0617 12:09:55.975731  165698 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:09:55.975804  165698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:09:56.015794  165698 cri.go:89] found id: ""
	I0617 12:09:56.015826  165698 logs.go:276] 0 containers: []
	W0617 12:09:56.015837  165698 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0617 12:09:56.015851  165698 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:09:56.015868  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0617 12:09:56.095533  165698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0617 12:09:56.095557  165698 logs.go:123] Gathering logs for CRI-O ...
	I0617 12:09:56.095573  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0617 12:09:56.220817  165698 logs.go:123] Gathering logs for container status ...
	I0617 12:09:56.220857  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:09:56.261470  165698 logs.go:123] Gathering logs for kubelet ...
	I0617 12:09:56.261507  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 12:09:56.325626  165698 logs.go:123] Gathering logs for dmesg ...
	I0617 12:09:56.325673  165698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0617 12:09:56.345438  165698 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0617 12:09:56.345491  165698 out.go:239] * 
	W0617 12:09:56.345606  165698 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0617 12:09:56.345635  165698 out.go:239] * 
	W0617 12:09:56.346583  165698 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 12:09:56.349928  165698 out.go:177] 
	W0617 12:09:56.351067  165698 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0617 12:09:56.351127  165698 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0617 12:09:56.351157  165698 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0617 12:09:56.352487  165698 out.go:177] 
	
	
	==> CRI-O <==
	Jun 17 12:22:00 old-k8s-version-003661 crio[648]: time="2024-06-17 12:22:00.552817736Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718626920552785335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c0999c9-fd5a-40ef-beb1-ac2056068eac name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:22:00 old-k8s-version-003661 crio[648]: time="2024-06-17 12:22:00.553460263Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=56d3adce-0011-4b56-9f7c-5e8bb39bff52 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:22:00 old-k8s-version-003661 crio[648]: time="2024-06-17 12:22:00.553525514Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=56d3adce-0011-4b56-9f7c-5e8bb39bff52 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:22:00 old-k8s-version-003661 crio[648]: time="2024-06-17 12:22:00.553565891Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=56d3adce-0011-4b56-9f7c-5e8bb39bff52 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:22:00 old-k8s-version-003661 crio[648]: time="2024-06-17 12:22:00.589371518Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=93a642a0-d0f3-4dc4-b5a3-9165a9e2a869 name=/runtime.v1.RuntimeService/Version
	Jun 17 12:22:00 old-k8s-version-003661 crio[648]: time="2024-06-17 12:22:00.589465268Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=93a642a0-d0f3-4dc4-b5a3-9165a9e2a869 name=/runtime.v1.RuntimeService/Version
	Jun 17 12:22:00 old-k8s-version-003661 crio[648]: time="2024-06-17 12:22:00.590821183Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d351e49b-5d88-4c93-81c2-c230d4b18861 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:22:00 old-k8s-version-003661 crio[648]: time="2024-06-17 12:22:00.591370816Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718626920591339220,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d351e49b-5d88-4c93-81c2-c230d4b18861 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:22:00 old-k8s-version-003661 crio[648]: time="2024-06-17 12:22:00.591806554Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3e658724-0e0e-484c-997c-05e756637802 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:22:00 old-k8s-version-003661 crio[648]: time="2024-06-17 12:22:00.591872471Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3e658724-0e0e-484c-997c-05e756637802 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:22:00 old-k8s-version-003661 crio[648]: time="2024-06-17 12:22:00.591910470Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3e658724-0e0e-484c-997c-05e756637802 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:22:00 old-k8s-version-003661 crio[648]: time="2024-06-17 12:22:00.625232669Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d8008ee9-c365-46a0-beb8-e3aae302aa21 name=/runtime.v1.RuntimeService/Version
	Jun 17 12:22:00 old-k8s-version-003661 crio[648]: time="2024-06-17 12:22:00.625332132Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d8008ee9-c365-46a0-beb8-e3aae302aa21 name=/runtime.v1.RuntimeService/Version
	Jun 17 12:22:00 old-k8s-version-003661 crio[648]: time="2024-06-17 12:22:00.626434223Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=13c45ca5-73b2-451b-b382-84d2554034f6 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:22:00 old-k8s-version-003661 crio[648]: time="2024-06-17 12:22:00.626902105Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718626920626868790,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=13c45ca5-73b2-451b-b382-84d2554034f6 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:22:00 old-k8s-version-003661 crio[648]: time="2024-06-17 12:22:00.627528721Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=922f9855-0d8a-4618-90dc-23d3616d28d4 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:22:00 old-k8s-version-003661 crio[648]: time="2024-06-17 12:22:00.627640641Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=922f9855-0d8a-4618-90dc-23d3616d28d4 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:22:00 old-k8s-version-003661 crio[648]: time="2024-06-17 12:22:00.627697530Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=922f9855-0d8a-4618-90dc-23d3616d28d4 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:22:00 old-k8s-version-003661 crio[648]: time="2024-06-17 12:22:00.661329494Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b300b952-00b2-4fd3-b0a0-1f102bb62de6 name=/runtime.v1.RuntimeService/Version
	Jun 17 12:22:00 old-k8s-version-003661 crio[648]: time="2024-06-17 12:22:00.661419279Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b300b952-00b2-4fd3-b0a0-1f102bb62de6 name=/runtime.v1.RuntimeService/Version
	Jun 17 12:22:00 old-k8s-version-003661 crio[648]: time="2024-06-17 12:22:00.662708421Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dbcf9f33-1d07-4a76-ad3b-0bce69c6ddc3 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:22:00 old-k8s-version-003661 crio[648]: time="2024-06-17 12:22:00.663168438Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718626920663144024,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dbcf9f33-1d07-4a76-ad3b-0bce69c6ddc3 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 17 12:22:00 old-k8s-version-003661 crio[648]: time="2024-06-17 12:22:00.663847991Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=035b5fc2-276f-4be7-b313-2c1c8ed64b51 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:22:00 old-k8s-version-003661 crio[648]: time="2024-06-17 12:22:00.663910992Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=035b5fc2-276f-4be7-b313-2c1c8ed64b51 name=/runtime.v1.RuntimeService/ListContainers
	Jun 17 12:22:00 old-k8s-version-003661 crio[648]: time="2024-06-17 12:22:00.663943245Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=035b5fc2-276f-4be7-b313-2c1c8ed64b51 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jun17 12:01] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052255] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040891] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.660385] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.359181] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.617809] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.763068] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.058957] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067517] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.195874] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.192469] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.318746] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +6.241976] systemd-fstab-generator[842]: Ignoring "noauto" option for root device
	[  +0.062935] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.770270] systemd-fstab-generator[969]: Ignoring "noauto" option for root device
	[Jun17 12:02] kauditd_printk_skb: 46 callbacks suppressed
	[Jun17 12:06] systemd-fstab-generator[5023]: Ignoring "noauto" option for root device
	[Jun17 12:08] systemd-fstab-generator[5303]: Ignoring "noauto" option for root device
	[  +0.068765] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:22:00 up 20 min,  0 users,  load average: 0.04, 0.06, 0.03
	Linux old-k8s-version-003661 5.10.207 #1 SMP Tue Jun 11 00:16:05 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jun 17 12:21:55 old-k8s-version-003661 kubelet[6858]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001020c0, 0xc0009e4360)
	Jun 17 12:21:55 old-k8s-version-003661 kubelet[6858]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Jun 17 12:21:55 old-k8s-version-003661 kubelet[6858]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Jun 17 12:21:55 old-k8s-version-003661 kubelet[6858]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Jun 17 12:21:55 old-k8s-version-003661 kubelet[6858]: goroutine 153 [select]:
	Jun 17 12:21:55 old-k8s-version-003661 kubelet[6858]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000a25ef0, 0x4f0ac20, 0xc000bcbef0, 0x1, 0xc0001020c0)
	Jun 17 12:21:55 old-k8s-version-003661 kubelet[6858]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Jun 17 12:21:55 old-k8s-version-003661 kubelet[6858]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0002647e0, 0xc0001020c0)
	Jun 17 12:21:55 old-k8s-version-003661 kubelet[6858]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jun 17 12:21:55 old-k8s-version-003661 kubelet[6858]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jun 17 12:21:55 old-k8s-version-003661 kubelet[6858]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jun 17 12:21:55 old-k8s-version-003661 kubelet[6858]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0009e2320, 0xc0009d6920)
	Jun 17 12:21:55 old-k8s-version-003661 kubelet[6858]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jun 17 12:21:55 old-k8s-version-003661 kubelet[6858]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jun 17 12:21:55 old-k8s-version-003661 kubelet[6858]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jun 17 12:21:55 old-k8s-version-003661 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 17 12:21:55 old-k8s-version-003661 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 17 12:21:55 old-k8s-version-003661 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 145.
	Jun 17 12:21:55 old-k8s-version-003661 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 17 12:21:55 old-k8s-version-003661 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 17 12:21:55 old-k8s-version-003661 kubelet[6868]: I0617 12:21:55.884682    6868 server.go:416] Version: v1.20.0
	Jun 17 12:21:55 old-k8s-version-003661 kubelet[6868]: I0617 12:21:55.884993    6868 server.go:837] Client rotation is on, will bootstrap in background
	Jun 17 12:21:55 old-k8s-version-003661 kubelet[6868]: I0617 12:21:55.887103    6868 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 17 12:21:55 old-k8s-version-003661 kubelet[6868]: W0617 12:21:55.888370    6868 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jun 17 12:21:55 old-k8s-version-003661 kubelet[6868]: I0617 12:21:55.888487    6868 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-003661 -n old-k8s-version-003661
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-003661 -n old-k8s-version-003661: exit status 2 (225.14563ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-003661" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (179.10s)
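Note: the kubelet journal captured above shows the kubelet crash-looping (systemd restart counter at 145) and logging "Cannot detect current cgroup on cgroup v2", and minikube's own output suggests checking 'journalctl -xeu kubelet' and retrying with an explicit cgroup driver. A minimal follow-up sketch, assuming the profile name old-k8s-version-003661 and the flags taken from the log above (not verified against this run):

  # inspect the kubelet on the node, as the kubeadm output above recommends
  minikube ssh -p old-k8s-version-003661 "sudo systemctl status kubelet"
  minikube ssh -p old-k8s-version-003661 "sudo journalctl -xeu kubelet | tail -n 100"
  # retry the start with the cgroup driver pinned to systemd, as suggested at the end of the log
  minikube start -p old-k8s-version-003661 --driver=kvm2 --container-runtime=crio \
    --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd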

                                                
                                    

Test pass (245/314)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.57
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.1/json-events 4.66
13 TestDownloadOnly/v1.30.1/preload-exists 0
17 TestDownloadOnly/v1.30.1/LogsDuration 0.06
18 TestDownloadOnly/v1.30.1/DeleteAll 0.13
19 TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.56
22 TestOffline 62.57
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 143.25
29 TestAddons/parallel/Registry 16.02
31 TestAddons/parallel/InspektorGadget 10.78
33 TestAddons/parallel/HelmTiller 10.85
35 TestAddons/parallel/CSI 78.24
36 TestAddons/parallel/Headlamp 15.32
37 TestAddons/parallel/CloudSpanner 6.91
38 TestAddons/parallel/LocalPath 61.38
39 TestAddons/parallel/NvidiaDevicePlugin 6.74
40 TestAddons/parallel/Yakd 6
44 TestAddons/serial/GCPAuth/Namespaces 0.12
46 TestCertOptions 89.95
47 TestCertExpiration 328.02
49 TestForceSystemdFlag 44.11
50 TestForceSystemdEnv 71.34
52 TestKVMDriverInstallOrUpdate 1.1
56 TestErrorSpam/setup 43.24
57 TestErrorSpam/start 0.34
58 TestErrorSpam/status 0.75
59 TestErrorSpam/pause 1.58
60 TestErrorSpam/unpause 1.59
61 TestErrorSpam/stop 5.79
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 54.17
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 39.79
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.25
73 TestFunctional/serial/CacheCmd/cache/add_local 0.99
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.56
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 32.05
82 TestFunctional/serial/ComponentHealth 0.07
83 TestFunctional/serial/LogsCmd 1.44
84 TestFunctional/serial/LogsFileCmd 1.42
85 TestFunctional/serial/InvalidService 4.39
87 TestFunctional/parallel/ConfigCmd 0.36
88 TestFunctional/parallel/DashboardCmd 28.79
89 TestFunctional/parallel/DryRun 0.34
90 TestFunctional/parallel/InternationalLanguage 0.16
91 TestFunctional/parallel/StatusCmd 1.21
95 TestFunctional/parallel/ServiceCmdConnect 7.57
96 TestFunctional/parallel/AddonsCmd 0.11
97 TestFunctional/parallel/PersistentVolumeClaim 29.44
99 TestFunctional/parallel/SSHCmd 0.47
100 TestFunctional/parallel/CpCmd 1.42
101 TestFunctional/parallel/MySQL 26
102 TestFunctional/parallel/FileSync 0.22
103 TestFunctional/parallel/CertSync 1.4
107 TestFunctional/parallel/NodeLabels 0.07
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.54
111 TestFunctional/parallel/License 0.22
112 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
113 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
114 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
115 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
116 TestFunctional/parallel/ProfileCmd/profile_list 0.31
117 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
118 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.67
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.82
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
123 TestFunctional/parallel/ImageCommands/Setup 1.09
124 TestFunctional/parallel/Version/short 0.05
125 TestFunctional/parallel/Version/components 0.49
126 TestFunctional/parallel/MountCmd/any-port 19.14
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.22
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.09
130 TestFunctional/parallel/MountCmd/specific-port 2.17
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 4.09
132 TestFunctional/parallel/MountCmd/VerifyCleanup 1.83
142 TestFunctional/parallel/ServiceCmd/DeployApp 11.39
143 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 4.16
145 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.29
146 TestFunctional/parallel/ServiceCmd/List 1.3
147 TestFunctional/parallel/ServiceCmd/JSONOutput 1.28
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.28
149 TestFunctional/parallel/ServiceCmd/Format 0.27
150 TestFunctional/parallel/ServiceCmd/URL 0.27
151 TestFunctional/delete_addon-resizer_images 0.06
152 TestFunctional/delete_my-image_image 0.01
153 TestFunctional/delete_minikube_cached_images 0.01
157 TestMultiControlPlane/serial/StartCluster 203.24
158 TestMultiControlPlane/serial/DeployApp 4.83
159 TestMultiControlPlane/serial/PingHostFromPods 1.23
160 TestMultiControlPlane/serial/AddWorkerNode 43.96
161 TestMultiControlPlane/serial/NodeLabels 0.07
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.52
163 TestMultiControlPlane/serial/CopyFile 12.43
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.48
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.4
169 TestMultiControlPlane/serial/DeleteSecondaryNode 17.31
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.36
172 TestMultiControlPlane/serial/RestartCluster 347.89
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.37
174 TestMultiControlPlane/serial/AddSecondaryNode 72.6
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.53
179 TestJSONOutput/start/Command 63.37
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.68
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.62
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 7.38
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.18
207 TestMainNoArgs 0.04
208 TestMinikubeProfile 84.41
211 TestMountStart/serial/StartWithMountFirst 24.23
212 TestMountStart/serial/VerifyMountFirst 0.37
213 TestMountStart/serial/StartWithMountSecond 26.73
214 TestMountStart/serial/VerifyMountSecond 0.37
215 TestMountStart/serial/DeleteFirst 0.67
216 TestMountStart/serial/VerifyMountPostDelete 0.37
217 TestMountStart/serial/Stop 1.28
218 TestMountStart/serial/RestartStopped 20.73
219 TestMountStart/serial/VerifyMountPostStop 0.37
222 TestMultiNode/serial/FreshStart2Nodes 99.29
223 TestMultiNode/serial/DeployApp2Nodes 3.55
224 TestMultiNode/serial/PingHostFrom2Pods 0.81
225 TestMultiNode/serial/AddNode 36.03
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.21
228 TestMultiNode/serial/CopyFile 7.24
229 TestMultiNode/serial/StopNode 2.36
230 TestMultiNode/serial/StartAfterStop 27.17
232 TestMultiNode/serial/DeleteNode 2.14
234 TestMultiNode/serial/RestartMultiNode 160.17
235 TestMultiNode/serial/ValidateNameConflict 45.71
242 TestScheduledStopUnix 114.36
246 TestRunningBinaryUpgrade 215.14
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
252 TestNoKubernetes/serial/StartWithK8s 93.77
261 TestPause/serial/Start 96.12
269 TestNetworkPlugins/group/false 2.99
273 TestNoKubernetes/serial/StartWithStopK8s 64.66
274 TestNoKubernetes/serial/Start 27.66
276 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
277 TestNoKubernetes/serial/ProfileList 32.89
278 TestNoKubernetes/serial/Stop 1.37
279 TestNoKubernetes/serial/StartNoArgs 41.4
280 TestStoppedBinaryUpgrade/Setup 0.51
281 TestStoppedBinaryUpgrade/Upgrade 124.02
282 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
285 TestStoppedBinaryUpgrade/MinikubeLogs 0.86
287 TestStartStop/group/no-preload/serial/FirstStart 81.44
289 TestStartStop/group/embed-certs/serial/FirstStart 67.99
290 TestStartStop/group/no-preload/serial/DeployApp 9.29
291 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.06
292 TestStartStop/group/embed-certs/serial/DeployApp 9.29
294 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.02
297 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 56.89
300 TestStartStop/group/no-preload/serial/SecondStart 692.11
303 TestStartStop/group/embed-certs/serial/SecondStart 565.55
304 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.28
305 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.95
307 TestStartStop/group/old-k8s-version/serial/Stop 1.38
308 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
311 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 435.2
321 TestStartStop/group/newest-cni/serial/FirstStart 60.6
322 TestNetworkPlugins/group/auto/Start 63.38
323 TestNetworkPlugins/group/kindnet/Start 67.11
324 TestStartStop/group/newest-cni/serial/DeployApp 0
325 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 3.77
326 TestStartStop/group/newest-cni/serial/Stop 7.33
327 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
328 TestStartStop/group/newest-cni/serial/SecondStart 50.38
329 TestNetworkPlugins/group/auto/KubeletFlags 0.24
330 TestNetworkPlugins/group/auto/NetCatPod 11.3
331 TestNetworkPlugins/group/auto/DNS 0.22
332 TestNetworkPlugins/group/auto/Localhost 0.2
333 TestNetworkPlugins/group/auto/HairPin 0.14
334 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
335 TestNetworkPlugins/group/calico/Start 84.48
336 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
339 TestStartStop/group/newest-cni/serial/Pause 2.54
340 TestNetworkPlugins/group/custom-flannel/Start 98.24
341 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
342 TestNetworkPlugins/group/kindnet/NetCatPod 10.22
343 TestNetworkPlugins/group/kindnet/DNS 0.15
344 TestNetworkPlugins/group/kindnet/Localhost 0.14
345 TestNetworkPlugins/group/kindnet/HairPin 0.13
346 TestNetworkPlugins/group/bridge/Start 86.46
347 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.44
348 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.58
349 TestNetworkPlugins/group/flannel/Start 97.34
350 TestNetworkPlugins/group/calico/ControllerPod 6.01
351 TestNetworkPlugins/group/calico/KubeletFlags 0.22
352 TestNetworkPlugins/group/calico/NetCatPod 10.27
353 TestNetworkPlugins/group/calico/DNS 0.21
354 TestNetworkPlugins/group/calico/Localhost 0.17
355 TestNetworkPlugins/group/calico/HairPin 0.14
356 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
357 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.28
358 TestNetworkPlugins/group/custom-flannel/DNS 0.2
359 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
360 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
361 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
362 TestNetworkPlugins/group/enable-default-cni/Start 69.97
363 TestNetworkPlugins/group/bridge/NetCatPod 11.31
364 TestNetworkPlugins/group/bridge/DNS 0.2
365 TestNetworkPlugins/group/bridge/Localhost 0.19
366 TestNetworkPlugins/group/bridge/HairPin 0.19
367 TestNetworkPlugins/group/flannel/ControllerPod 6.01
368 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
369 TestNetworkPlugins/group/flannel/NetCatPod 11.22
370 TestNetworkPlugins/group/flannel/DNS 0.15
371 TestNetworkPlugins/group/flannel/Localhost 0.12
372 TestNetworkPlugins/group/flannel/HairPin 0.12
373 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
374 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.25
375 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
376 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
377 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
x
+
TestDownloadOnly/v1.20.0/json-events (7.57s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-033984 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-033984 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (7.573842172s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.57s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-033984
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-033984: exit status 85 (60.798115ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-033984 | jenkins | v1.33.1 | 17 Jun 24 10:44 UTC |          |
	|         | -p download-only-033984        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/17 10:44:14
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0617 10:44:14.232724  120186 out.go:291] Setting OutFile to fd 1 ...
	I0617 10:44:14.233017  120186 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 10:44:14.233027  120186 out.go:304] Setting ErrFile to fd 2...
	I0617 10:44:14.233034  120186 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 10:44:14.233230  120186 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	W0617 10:44:14.233369  120186 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19084-112967/.minikube/config/config.json: open /home/jenkins/minikube-integration/19084-112967/.minikube/config/config.json: no such file or directory
	I0617 10:44:14.233945  120186 out.go:298] Setting JSON to true
	I0617 10:44:14.234857  120186 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":1601,"bootTime":1718619453,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0617 10:44:14.234917  120186 start.go:139] virtualization: kvm guest
	I0617 10:44:14.237325  120186 out.go:97] [download-only-033984] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0617 10:44:14.238543  120186 out.go:169] MINIKUBE_LOCATION=19084
	W0617 10:44:14.237425  120186 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball: no such file or directory
	I0617 10:44:14.237459  120186 notify.go:220] Checking for updates...
	I0617 10:44:14.240863  120186 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 10:44:14.242091  120186 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 10:44:14.243266  120186 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 10:44:14.244382  120186 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0617 10:44:14.246449  120186 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0617 10:44:14.246663  120186 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 10:44:14.343260  120186 out.go:97] Using the kvm2 driver based on user configuration
	I0617 10:44:14.343285  120186 start.go:297] selected driver: kvm2
	I0617 10:44:14.343290  120186 start.go:901] validating driver "kvm2" against <nil>
	I0617 10:44:14.343644  120186 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 10:44:14.343760  120186 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19084-112967/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0617 10:44:14.358987  120186 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0617 10:44:14.359054  120186 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 10:44:14.359564  120186 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0617 10:44:14.359732  120186 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0617 10:44:14.359795  120186 cni.go:84] Creating CNI manager for ""
	I0617 10:44:14.359812  120186 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0617 10:44:14.359825  120186 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0617 10:44:14.359889  120186 start.go:340] cluster config:
	{Name:download-only-033984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-033984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 10:44:14.360078  120186 iso.go:125] acquiring lock: {Name:mk4a199ad46ed9ee04de7b54caf7cc64218fe80c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 10:44:14.361923  120186 out.go:97] Downloading VM boot image ...
	I0617 10:44:14.361967  120186 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19084-112967/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso
	I0617 10:44:17.125987  120186 out.go:97] Starting "download-only-033984" primary control-plane node in "download-only-033984" cluster
	I0617 10:44:17.126019  120186 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0617 10:44:17.145801  120186 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0617 10:44:17.145823  120186 cache.go:56] Caching tarball of preloaded images
	I0617 10:44:17.145931  120186 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0617 10:44:17.147382  120186 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0617 10:44:17.147394  120186 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0617 10:44:17.178141  120186 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19084-112967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-033984 host does not exist
	  To start a cluster, run: "minikube start -p download-only-033984"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
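
A minimal sketch, not part of the test run, of verifying the preload tarball that the download log above fetches; the URL and md5 are the ones shown in the download line:

    # download the v1.20.0 cri-o preload and check its md5 against the value from the log
    curl -fLO "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"
    echo "f93b07cde9c3289306cbaeb7a1803c19  preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4" | md5sum -c -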

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-033984
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/json-events (4.66s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-999061 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-999061 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.657086453s)
--- PASS: TestDownloadOnly/v1.30.1/json-events (4.66s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/preload-exists
--- PASS: TestDownloadOnly/v1.30.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-999061
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-999061: exit status 85 (58.337563ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-033984 | jenkins | v1.33.1 | 17 Jun 24 10:44 UTC |                     |
	|         | -p download-only-033984        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 17 Jun 24 10:44 UTC | 17 Jun 24 10:44 UTC |
	| delete  | -p download-only-033984        | download-only-033984 | jenkins | v1.33.1 | 17 Jun 24 10:44 UTC | 17 Jun 24 10:44 UTC |
	| start   | -o=json --download-only        | download-only-999061 | jenkins | v1.33.1 | 17 Jun 24 10:44 UTC |                     |
	|         | -p download-only-999061        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/17 10:44:22
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0617 10:44:22.130404  120377 out.go:291] Setting OutFile to fd 1 ...
	I0617 10:44:22.130638  120377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 10:44:22.130646  120377 out.go:304] Setting ErrFile to fd 2...
	I0617 10:44:22.130650  120377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 10:44:22.130823  120377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 10:44:22.131401  120377 out.go:298] Setting JSON to true
	I0617 10:44:22.132311  120377 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":1609,"bootTime":1718619453,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0617 10:44:22.132384  120377 start.go:139] virtualization: kvm guest
	I0617 10:44:22.134693  120377 out.go:97] [download-only-999061] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0617 10:44:22.136160  120377 out.go:169] MINIKUBE_LOCATION=19084
	I0617 10:44:22.134890  120377 notify.go:220] Checking for updates...
	I0617 10:44:22.138525  120377 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 10:44:22.139819  120377 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 10:44:22.141028  120377 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 10:44:22.142234  120377 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-999061 host does not exist
	  To start a cluster, run: "minikube start -p download-only-999061"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.1/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.1/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-999061
--- PASS: TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.56s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-716953 --alsologtostderr --binary-mirror http://127.0.0.1:44727 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-716953" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-716953
--- PASS: TestBinaryMirror (0.56s)

                                                
                                    
x
+
TestOffline (62.57s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-825945 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-825945 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m1.705617977s)
helpers_test.go:175: Cleaning up "offline-crio-825945" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-825945
--- PASS: TestOffline (62.57s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-465706
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-465706: exit status 85 (50.759817ms)

                                                
                                                
-- stdout --
	* Profile "addons-465706" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-465706"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
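
A quick way to reproduce the behaviour exercised here, assuming the same binary and a profile name that has not been created yet; the exit status is the value the test asserts on:

    # addon commands against a missing profile are expected to fail with exit status 85
    out/minikube-linux-amd64 addons enable dashboard -p addons-465706
    echo "exit status: $?"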

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-465706
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-465706: exit status 85 (51.841168ms)

                                                
                                                
-- stdout --
	* Profile "addons-465706" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-465706"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (143.25s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-465706 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-465706 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m23.247383564s)
--- PASS: TestAddons/Setup (143.25s)

                                                
                                    
x
+
TestAddons/parallel/Registry (16.02s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 20.785581ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-zmgvf" [779a673e-bb16-4cb8-ba45-1f77abb09f84] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005112155s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-8jk6d" [8e3ec5f6-818e-4deb-a7b8-8c6c898c12a7] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005663127s
addons_test.go:342: (dbg) Run:  kubectl --context addons-465706 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-465706 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-465706 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.005078759s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-465706 ip
2024/06/17 10:47:06 [DEBUG] GET http://192.168.39.165:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-465706 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.02s)
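
The test resolves the node IP with "minikube ip" and then fetches the registry on port 5000. A sketch of checking the same endpoint from the host; the /v2/_catalog path is an addition here (the standard registry HTTP API listing), the test itself only probes the root URL:

    # query the registry exposed on the minikube node, as the GET in the log above does
    curl -s "http://$(out/minikube-linux-amd64 -p addons-465706 ip):5000/v2/_catalog"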

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.78s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-8nv99" [07d8a447-8293-40a3-8ebd-04dfea0ee6ac] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.007376546s
addons_test.go:843: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-465706
addons_test.go:843: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-465706: (5.770123245s)
--- PASS: TestAddons/parallel/InspektorGadget (10.78s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (10.85s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 18.278439ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-c55qr" [b7ac1365-80b4-4f6b-956f-9c3579810596] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.00691098s
addons_test.go:475: (dbg) Run:  kubectl --context addons-465706 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-465706 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.223840799s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-465706 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.85s)

                                                
                                    
x
+
TestAddons/parallel/CSI (78.24s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 5.978224ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-465706 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-465706 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [b05ef183-0868-4436-9425-29b18b3865a3] Pending
helpers_test.go:344: "task-pv-pod" [b05ef183-0868-4436-9425-29b18b3865a3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [b05ef183-0868-4436-9425-29b18b3865a3] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.009746238s
addons_test.go:586: (dbg) Run:  kubectl --context addons-465706 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-465706 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-465706 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-465706 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-465706 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-465706 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-465706 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5e1f2089-c956-4e7e-bd12-e0ffda4c75c2] Pending
helpers_test.go:344: "task-pv-pod-restore" [5e1f2089-c956-4e7e-bd12-e0ffda4c75c2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5e1f2089-c956-4e7e-bd12-e0ffda4c75c2] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00474348s
addons_test.go:628: (dbg) Run:  kubectl --context addons-465706 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-465706 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-465706 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-amd64 -p addons-465706 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-linux-amd64 -p addons-465706 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.685616988s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-465706 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (78.24s)
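
The long runs of identical kubectl invocations above are the helper polling the PVC phase until the claim is usable. A rough shell equivalent, assuming Bound is the phase being waited for:

    # poll the hpvc claim's phase the same way helpers_test.go:394 does
    until [ "$(kubectl --context addons-465706 get pvc hpvc -n default -o jsonpath='{.status.phase}')" = "Bound" ]; do
      sleep 2
    done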

                                                
                                    
x
+
TestAddons/parallel/Headlamp (15.32s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-465706 --alsologtostderr -v=1
addons_test.go:826: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-465706 --alsologtostderr -v=1: (1.311593393s)
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7fc69f7444-b25bd" [426684ff-406b-40d7-a06f-5aab3179e257] Pending
helpers_test.go:344: "headlamp-7fc69f7444-b25bd" [426684ff-406b-40d7-a06f-5aab3179e257] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7fc69f7444-b25bd" [426684ff-406b-40d7-a06f-5aab3179e257] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.004032651s
--- PASS: TestAddons/parallel/Headlamp (15.32s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.91s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-jjxll" [d3e639cc-5127-40c2-a2bb-aacb3d0f3619] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004523723s
addons_test.go:862: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-465706
--- PASS: TestAddons/parallel/CloudSpanner (6.91s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (61.38s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-465706 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-465706 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-465706 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [30fd0e1b-fb22-4631-b82d-38b1825bbd61] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [30fd0e1b-fb22-4631-b82d-38b1825bbd61] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [30fd0e1b-fb22-4631-b82d-38b1825bbd61] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 11.004075919s
addons_test.go:992: (dbg) Run:  kubectl --context addons-465706 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-amd64 -p addons-465706 ssh "cat /opt/local-path-provisioner/pvc-f296beee-9e3b-4086-a049-00efb1334af0_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-465706 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-465706 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-amd64 -p addons-465706 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-linux-amd64 -p addons-465706 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.563660757s)
--- PASS: TestAddons/parallel/LocalPath (61.38s)
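
A sketch of the manual check this test performs: read the file the busybox pod wrote back out of the provisioner's host path over ssh. The <pvc-uid> below is a placeholder; the real volume name comes from "kubectl get pvc test-pvc -o json", as in the log:

    # list the provisioned volumes on the node, then cat the file written through the PVC
    out/minikube-linux-amd64 -p addons-465706 ssh "ls /opt/local-path-provisioner/"
    out/minikube-linux-amd64 -p addons-465706 ssh "cat /opt/local-path-provisioner/<pvc-uid>_default_test-pvc/file1"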

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.74s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-qmfbl" [6fa18993-49a4-4224-9ae5-23eebbfb150c] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00558076s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-465706
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.74s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (6s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-phsmj" [744b82c4-03d4-4e46-b250-37034c66f93a] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003459819s
--- PASS: TestAddons/parallel/Yakd (6.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-465706 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-465706 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)
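
The namespace check boils down to two commands; rerunning them by hand (same context and namespace name) shows the gcp-auth secret being replicated into a namespace created after the addon was enabled:

    kubectl --context addons-465706 create ns new-namespace
    kubectl --context addons-465706 get secret gcp-auth -n new-namespace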

                                                
                                    
x
+
TestCertOptions (89.95s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-212761 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-212761 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m28.715298366s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-212761 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-212761 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-212761 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-212761" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-212761
--- PASS: TestCertOptions (89.95s)
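
A sketch of inspecting what the extra --apiserver-ips/--apiserver-names flags produced; the certificate path is the one the test reads, while the grep is only for readability and is not part of the test:

    # the requested IPs and hostnames should appear in the apiserver certificate's SANs
    out/minikube-linux-amd64 -p cert-options-212761 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"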

                                                
                                    
x
+
TestCertExpiration (328.02s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-514753 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-514753 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m24.885613587s)
E0617 11:48:57.398121  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-514753 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
E0617 11:51:51.169984  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-514753 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m2.15417549s)
helpers_test.go:175: Cleaning up "cert-expiration-514753" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-514753
--- PASS: TestCertExpiration (328.02s)
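
A rough outline of what this test exercises, using the same flags as the log: start with deliberately short-lived certificates, let them lapse, then restart with a long lifetime so they are regenerated. The sleep is illustrative; the test itself pauses until the 3m certificates have expired:

    out/minikube-linux-amd64 start -p cert-expiration-514753 --memory=2048 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
    sleep 180   # wait out the 3-minute certificate lifetime
    out/minikube-linux-amd64 start -p cert-expiration-514753 --memory=2048 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio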

                                                
                                    
x
+
TestForceSystemdFlag (44.11s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-855883 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-855883 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (42.927300416s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-855883 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-855883" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-855883
--- PASS: TestForceSystemdFlag (44.11s)
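
The check above reads CRI-O's drop-in config over ssh. A sketch, assuming --force-systemd should surface as the systemd cgroup manager in that file; the grep pattern is an assumption, the test only cats the file:

    out/minikube-linux-amd64 -p force-systemd-flag-855883 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep -i cgroup_manager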

                                                
                                    
x
+
TestForceSystemdEnv (71.34s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-866173 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-866173 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m10.367411541s)
helpers_test.go:175: Cleaning up "force-systemd-env-866173" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-866173
--- PASS: TestForceSystemdEnv (71.34s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (1.1s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.10s)

                                                
                                    
x
+
TestErrorSpam/setup (43.24s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-696382 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-696382 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-696382 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-696382 --driver=kvm2  --container-runtime=crio: (43.24415991s)
--- PASS: TestErrorSpam/setup (43.24s)

                                                
                                    
x
+
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-696382 --log_dir /tmp/nospam-696382 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-696382 --log_dir /tmp/nospam-696382 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-696382 --log_dir /tmp/nospam-696382 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
x
+
TestErrorSpam/status (0.75s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-696382 --log_dir /tmp/nospam-696382 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-696382 --log_dir /tmp/nospam-696382 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-696382 --log_dir /tmp/nospam-696382 status
--- PASS: TestErrorSpam/status (0.75s)

                                                
                                    
x
+
TestErrorSpam/pause (1.58s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-696382 --log_dir /tmp/nospam-696382 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-696382 --log_dir /tmp/nospam-696382 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-696382 --log_dir /tmp/nospam-696382 pause
--- PASS: TestErrorSpam/pause (1.58s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.59s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-696382 --log_dir /tmp/nospam-696382 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-696382 --log_dir /tmp/nospam-696382 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-696382 --log_dir /tmp/nospam-696382 unpause
--- PASS: TestErrorSpam/unpause (1.59s)

                                                
                                    
x
+
TestErrorSpam/stop (5.79s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-696382 --log_dir /tmp/nospam-696382 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-696382 --log_dir /tmp/nospam-696382 stop: (2.302454481s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-696382 --log_dir /tmp/nospam-696382 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-696382 --log_dir /tmp/nospam-696382 stop: (2.005260938s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-696382 --log_dir /tmp/nospam-696382 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-696382 --log_dir /tmp/nospam-696382 stop: (1.484425272s)
--- PASS: TestErrorSpam/stop (5.79s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19084-112967/.minikube/files/etc/test/nested/copy/120174/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (54.17s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-303428 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0617 10:56:51.170089  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
E0617 10:56:51.176253  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
E0617 10:56:51.186517  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
E0617 10:56:51.206847  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
E0617 10:56:51.247192  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
E0617 10:56:51.327552  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
E0617 10:56:51.488024  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
E0617 10:56:51.808729  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
E0617 10:56:52.449727  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
E0617 10:56:53.730205  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
E0617 10:56:56.292107  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
E0617 10:57:01.412322  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
E0617 10:57:11.653410  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-303428 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (54.168487329s)
--- PASS: TestFunctional/serial/StartWithProxy (54.17s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (39.79s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-303428 --alsologtostderr -v=8
E0617 10:57:32.133689  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-303428 --alsologtostderr -v=8: (39.792956843s)
functional_test.go:659: soft start took 39.793788036s for "functional-303428" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.79s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-303428 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-303428 cache add registry.k8s.io/pause:3.1: (1.098098413s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-303428 cache add registry.k8s.io/pause:3.3: (1.134317584s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 cache add registry.k8s.io/pause:latest
E0617 10:58:13.094796  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-303428 cache add registry.k8s.io/pause:latest: (1.016223731s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.25s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (0.99s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-303428 /tmp/TestFunctionalserialCacheCmdcacheadd_local547180979/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 cache add minikube-local-cache-test:functional-303428
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 cache delete minikube-local-cache-test:functional-303428
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-303428
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.99s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-303428 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (203.620633ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)
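For reference, the cache_reload sequence above (remove the image inside the node, confirm crictl no longer sees it, run "cache reload", confirm it is back) can be reproduced outside the test harness. A minimal Go sketch of that round-trip, assuming the functional-303428 profile from this run still exists and the minikube binary is at out/minikube-linux-amd64; the run helper is illustrative, not part of minikube:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run executes a command and returns its combined output plus any error.
func run(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return string(out), err
}

func main() {
	profile := "functional-303428" // assumed profile name from this report

	// Remove the cached image inside the node, mirroring the test.
	if out, err := run("out/minikube-linux-amd64", "-p", profile, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest"); err != nil {
		log.Fatalf("rmi failed: %v\n%s", err, out)
	}

	// inspecti is now expected to fail (exit status 1 in the log above).
	if _, err := run("out/minikube-linux-amd64", "-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
		log.Fatal("expected inspecti to fail after rmi")
	}

	// cache reload pushes the locally cached image back into the node.
	if out, err := run("out/minikube-linux-amd64", "-p", profile, "cache", "reload"); err != nil {
		log.Fatalf("cache reload failed: %v\n%s", err, out)
	}

	// inspecti should now succeed again.
	if out, err := run("out/minikube-linux-amd64", "-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		log.Fatalf("image still missing after reload: %v\n%s", err, out)
	}
	fmt.Println("cache reload round-trip OK")
}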

TestFunctional/serial/CacheCmd/cache/delete (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 kubectl -- --context functional-303428 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-303428 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (32.05s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-303428 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-303428 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.054402472s)
functional_test.go:757: restart took 32.054512281s for "functional-303428" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.05s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-303428 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.44s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-303428 logs: (1.443415873s)
--- PASS: TestFunctional/serial/LogsCmd (1.44s)

TestFunctional/serial/LogsFileCmd (1.42s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 logs --file /tmp/TestFunctionalserialLogsFileCmd652081695/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-303428 logs --file /tmp/TestFunctionalserialLogsFileCmd652081695/001/logs.txt: (1.418894273s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.42s)

TestFunctional/serial/InvalidService (4.39s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-303428 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-303428
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-303428: exit status 115 (279.374776ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.25:30422 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-303428 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.39s)
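The failure mode exercised here is deliberate: the service exists but has no running pod, so "out/minikube-linux-amd64 service" prints the URL table and still exits with status 115 (SVC_UNREACHABLE). A minimal Go sketch that distinguishes that exit code from a successful lookup, assuming the same profile and service name as above:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the InvalidService check: a Service whose pods never run
	// makes "minikube service" exit non-zero (status 115 in the log above).
	cmd := exec.Command("out/minikube-linux-amd64", "service", "invalid-svc", "-p", "functional-303428")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Printf("service command exited with code %d\n%s", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("command did not start:", err)
		return
	}
	fmt.Printf("service unexpectedly resolved:\n%s", out)
}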

TestFunctional/parallel/ConfigCmd (0.36s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-303428 config get cpus: exit status 14 (56.028143ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-303428 config get cpus: exit status 14 (59.660322ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)
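As the log shows, "config get" exits with status 14 when the key is unset and succeeds once "config set cpus 2" has run; "config unset" returns the profile to its original state. A small Go sketch of that set/get/unset cycle, assuming the functional-303428 profile; the configGet helper is illustrative:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configGet runs "minikube config get <key>" for the given profile and
// returns its combined output plus the command's exit code.
func configGet(profile, key string) (string, int) {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", profile, "config", "get", key)
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return string(out), exitErr.ExitCode()
	}
	return string(out), 0
}

func main() {
	profile := "functional-303428" // assumed profile name from this report

	if _, code := configGet(profile, "cpus"); code == 14 {
		fmt.Println("cpus is unset (exit status 14, as in the log above)")
	}

	// Setting the key makes a later "config get" succeed.
	exec.Command("out/minikube-linux-amd64", "-p", profile, "config", "set", "cpus", "2").Run()
	if val, code := configGet(profile, "cpus"); code == 0 {
		fmt.Printf("cpus = %s", val)
	}

	// Clean up so the profile is left as the test found it.
	exec.Command("out/minikube-linux-amd64", "-p", profile, "config", "unset", "cpus").Run()
}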

TestFunctional/parallel/DashboardCmd (28.79s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-303428 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-303428 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 128749: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (28.79s)

TestFunctional/parallel/DryRun (0.34s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-303428 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-303428 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (180.138899ms)

-- stdout --
	* [functional-303428] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19084
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0617 10:58:58.266632  128317 out.go:291] Setting OutFile to fd 1 ...
	I0617 10:58:58.267837  128317 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 10:58:58.267908  128317 out.go:304] Setting ErrFile to fd 2...
	I0617 10:58:58.267920  128317 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 10:58:58.268386  128317 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 10:58:58.269277  128317 out.go:298] Setting JSON to false
	I0617 10:58:58.270994  128317 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2485,"bootTime":1718619453,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0617 10:58:58.271505  128317 start.go:139] virtualization: kvm guest
	I0617 10:58:58.273688  128317 out.go:177] * [functional-303428] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0617 10:58:58.275136  128317 out.go:177]   - MINIKUBE_LOCATION=19084
	I0617 10:58:58.275219  128317 notify.go:220] Checking for updates...
	I0617 10:58:58.276933  128317 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 10:58:58.278456  128317 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 10:58:58.280648  128317 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 10:58:58.282126  128317 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0617 10:58:58.283597  128317 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 10:58:58.285522  128317 config.go:182] Loaded profile config "functional-303428": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 10:58:58.285920  128317 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:58:58.285972  128317 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:58:58.305942  128317 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35249
	I0617 10:58:58.306383  128317 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:58:58.306968  128317 main.go:141] libmachine: Using API Version  1
	I0617 10:58:58.306990  128317 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:58:58.307340  128317 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:58:58.307685  128317 main.go:141] libmachine: (functional-303428) Calling .DriverName
	I0617 10:58:58.307947  128317 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 10:58:58.308306  128317 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:58:58.308341  128317 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:58:58.329836  128317 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39271
	I0617 10:58:58.330681  128317 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:58:58.331308  128317 main.go:141] libmachine: Using API Version  1
	I0617 10:58:58.331329  128317 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:58:58.331740  128317 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:58:58.331949  128317 main.go:141] libmachine: (functional-303428) Calling .DriverName
	I0617 10:58:58.379277  128317 out.go:177] * Using the kvm2 driver based on existing profile
	I0617 10:58:58.381118  128317 start.go:297] selected driver: kvm2
	I0617 10:58:58.381136  128317 start.go:901] validating driver "kvm2" against &{Name:functional-303428 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.1 ClusterName:functional-303428 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.25 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 10:58:58.381235  128317 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 10:58:58.383386  128317 out.go:177] 
	W0617 10:58:58.385278  128317 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0617 10:58:58.386764  128317 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-303428 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.34s)
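The dry run fails by design: --memory 250MB is below the 1800MB minimum that minikube reports, so it exits with RSRC_INSUFFICIENT_REQ_MEMORY (status 23) before touching the cluster. A stand-alone Go sketch that restates that pre-flight check; the constant and function are modeled on the message above, not taken from minikube's source:

package main

import (
	"fmt"
)

// minUsableMemoryMB mirrors the minimum quoted in the log above
// ("less than the usable minimum of 1800MB"); the constant and the
// function below are an illustrative re-statement, not minikube code.
const minUsableMemoryMB = 1800

func validateRequestedMemory(requestMB int) error {
	if requestMB < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	// 250 is the value passed via --memory 250MB in the dry run above.
	if err := validateRequestedMemory(250); err != nil {
		fmt.Println("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
	}
}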

TestFunctional/parallel/InternationalLanguage (0.16s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-303428 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-303428 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (162.372451ms)

-- stdout --
	* [functional-303428] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19084
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0617 10:58:59.804732  128590 out.go:291] Setting OutFile to fd 1 ...
	I0617 10:58:59.805027  128590 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 10:58:59.805037  128590 out.go:304] Setting ErrFile to fd 2...
	I0617 10:58:59.805044  128590 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 10:58:59.805458  128590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 10:58:59.806176  128590 out.go:298] Setting JSON to false
	I0617 10:58:59.807493  128590 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2487,"bootTime":1718619453,"procs":255,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0617 10:58:59.807584  128590 start.go:139] virtualization: kvm guest
	I0617 10:58:59.810203  128590 out.go:177] * [functional-303428] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0617 10:58:59.811636  128590 out.go:177]   - MINIKUBE_LOCATION=19084
	I0617 10:58:59.811712  128590 notify.go:220] Checking for updates...
	I0617 10:58:59.812898  128590 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 10:58:59.814270  128590 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 10:58:59.815629  128590 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 10:58:59.816967  128590 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0617 10:58:59.818223  128590 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 10:58:59.820134  128590 config.go:182] Loaded profile config "functional-303428": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 10:58:59.820793  128590 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:58:59.820942  128590 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:58:59.843525  128590 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37249
	I0617 10:58:59.844016  128590 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:58:59.844703  128590 main.go:141] libmachine: Using API Version  1
	I0617 10:58:59.844729  128590 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:58:59.845137  128590 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:58:59.845359  128590 main.go:141] libmachine: (functional-303428) Calling .DriverName
	I0617 10:58:59.845676  128590 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 10:58:59.846136  128590 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 10:58:59.846184  128590 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 10:58:59.867916  128590 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32857
	I0617 10:58:59.868473  128590 main.go:141] libmachine: () Calling .GetVersion
	I0617 10:58:59.869028  128590 main.go:141] libmachine: Using API Version  1
	I0617 10:58:59.869057  128590 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 10:58:59.869488  128590 main.go:141] libmachine: () Calling .GetMachineName
	I0617 10:58:59.869698  128590 main.go:141] libmachine: (functional-303428) Calling .DriverName
	I0617 10:58:59.907722  128590 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0617 10:58:59.909068  128590 start.go:297] selected driver: kvm2
	I0617 10:58:59.909085  128590 start.go:901] validating driver "kvm2" against &{Name:functional-303428 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.1 ClusterName:functional-303428 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.25 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 10:58:59.909221  128590 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 10:58:59.911698  128590 out.go:177] 
	W0617 10:58:59.912893  128590 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0617 10:58:59.914000  128590 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (1.21s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.21s)

TestFunctional/parallel/ServiceCmdConnect (7.57s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-303428 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-303428 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-tzkrn" [9768b43f-494d-4490-9965-2fd3cca08cac] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-tzkrn" [9768b43f-494d-4490-9965-2fd3cca08cac] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004769365s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.25:31904
functional_test.go:1671: http://192.168.39.25:31904: success! body:

Hostname: hello-node-connect-57b4589c47-tzkrn

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.25:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.25:31904
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.57s)
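The flow here is: create a deployment, expose it as a NodePort service, ask minikube for the node URL, then issue a plain HTTP GET against it. A condensed Go sketch of the same steps, assuming a clean functional-303428 profile; "kubectl wait" stands in for the test's own polling for a Ready pod:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	profile := "functional-303428" // assumed profile name from this report

	// Same steps the test runs: deployment + NodePort service.
	steps := [][]string{
		{"kubectl", "--context", profile, "create", "deployment", "hello-node-connect", "--image=registry.k8s.io/echoserver:1.8"},
		{"kubectl", "--context", profile, "expose", "deployment", "hello-node-connect", "--type=NodePort", "--port=8080"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			log.Fatalf("%v: %v\n%s", s, err, out)
		}
	}

	// Wait until the deployment is serving (the test polls pods instead).
	exec.Command("kubectl", "--context", profile, "wait", "--for=condition=Available",
		"deployment/hello-node-connect", "--timeout=10m").Run()

	// Ask minikube for the NodePort URL (http://<node-ip>:<port> in the log above).
	urlOut, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		log.Fatal(err)
	}
	url := strings.TrimSpace(string(urlOut))

	// The echoserver replies with request details once the pod is running.
	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned:\n%s\n", url, body)
}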

TestFunctional/parallel/AddonsCmd (0.11s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (29.44s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [641a6da4-eb22-46d6-913f-64e76fc641c5] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.476815827s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-303428 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-303428 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-303428 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-303428 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-303428 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c4b620bb-b3cb-4f08-aec5-1dac3822924f] Pending
helpers_test.go:344: "sp-pod" [c4b620bb-b3cb-4f08-aec5-1dac3822924f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c4b620bb-b3cb-4f08-aec5-1dac3822924f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004366322s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-303428 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-303428 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-303428 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3987185c-4dc1-49d4-b75b-f8965b026042] Pending
helpers_test.go:344: "sp-pod" [3987185c-4dc1-49d4-b75b-f8965b026042] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.00439329s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-303428 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (29.44s)
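The persistence check in this test is the interesting part: a file written to the PVC-backed mount in the first sp-pod is still there after the pod is deleted and recreated from the same manifest. A condensed Go sketch of just that check, assuming the PVC and pod manifests from the test's testdata are on disk and the functional-303428 context is active; "kubectl wait" stands in for the test's own polling:

package main

import (
	"log"
	"os/exec"
)

// kubectl runs kubectl against the functional-303428 context (assumed from this report).
func kubectl(args ...string) ([]byte, error) {
	full := append([]string{"--context", "functional-303428"}, args...)
	return exec.Command("kubectl", full...).CombinedOutput()
}

func main() {
	// 1. Write a marker file onto the PVC-backed mount inside sp-pod.
	if out, err := kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo"); err != nil {
		log.Fatalf("touch failed: %v\n%s", err, out)
	}

	// 2. Delete the pod; the PersistentVolumeClaim (and its data) stays behind.
	if out, err := kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml"); err != nil {
		log.Fatalf("delete failed: %v\n%s", err, out)
	}

	// 3. Recreate the pod from the same manifest and wait for it.
	if out, err := kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml"); err != nil {
		log.Fatalf("apply failed: %v\n%s", err, out)
	}
	if out, err := kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=3m"); err != nil {
		log.Fatalf("pod never became Ready: %v\n%s", err, out)
	}

	// 4. The marker written before the delete must still be there.
	out, err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	if err != nil {
		log.Fatalf("ls failed: %v\n%s", err, out)
	}
	log.Printf("contents after recreate:\n%s", out)
}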

TestFunctional/parallel/SSHCmd (0.47s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.47s)

TestFunctional/parallel/CpCmd (1.42s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 ssh -n functional-303428 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 cp functional-303428:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2932342629/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 ssh -n functional-303428 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 ssh -n functional-303428 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.42s)
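"minikube cp" copies files between the host and the node, and the test verifies each copy with "ssh sudo cat". A short Go sketch of the host-to-node copy plus verification, and the node-to-host direction; the /tmp/cp-test-roundtrip.txt destination is a hypothetical example path:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// mk runs the minikube binary used in this report against the functional-303428 profile.
func mk(args ...string) ([]byte, error) {
	full := append([]string{"-p", "functional-303428"}, args...)
	return exec.Command("out/minikube-linux-amd64", full...).CombinedOutput()
}

func main() {
	// Host -> node: same source and destination the test uses.
	if out, err := mk("cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt"); err != nil {
		log.Fatalf("cp into node failed: %v\n%s", err, out)
	}

	// Verify inside the node, exactly as the test does.
	out, err := mk("ssh", "-n", "functional-303428", "sudo cat /home/docker/cp-test.txt")
	if err != nil {
		log.Fatalf("cat inside node failed: %v\n%s", err, out)
	}
	fmt.Printf("file content inside the node:\n%s", out)

	// Node -> host works the same way with a <node>:<path> source.
	if out, err := mk("cp", "functional-303428:/home/docker/cp-test.txt", "/tmp/cp-test-roundtrip.txt"); err != nil {
		log.Fatalf("cp out of node failed: %v\n%s", err, out)
	}
}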

TestFunctional/parallel/MySQL (26s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-303428 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-xfjrj" [07473694-2738-498d-ae75-fe865f9b6e6b] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-xfjrj" [07473694-2738-498d-ae75-fe865f9b6e6b] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.014668063s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-303428 exec mysql-64454c8b5c-xfjrj -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-303428 exec mysql-64454c8b5c-xfjrj -- mysql -ppassword -e "show databases;": exit status 1 (221.576554ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-303428 exec mysql-64454c8b5c-xfjrj -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-303428 exec mysql-64454c8b5c-xfjrj -- mysql -ppassword -e "show databases;": exit status 1 (227.204112ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-303428 exec mysql-64454c8b5c-xfjrj -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.00s)
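The two ERROR 2002 failures above are expected: the pod reports Running before mysqld has finished initializing, so the test simply retries the query until it succeeds. A Go sketch of that retry loop; the pod name is the one from this run (a real reproduction would look it up by the app=mysql label), and the 30-second budget is illustrative:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	pod := "mysql-64454c8b5c-xfjrj" // pod name as it appears in this run

	deadline := time.Now().Add(30 * time.Second) // illustrative retry budget
	for {
		out, err := exec.Command("kubectl", "--context", "functional-303428",
			"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("databases:\n%s", out)
			return
		}
		// ERROR 2002 just means mysqld is not accepting connections yet.
		if !strings.Contains(string(out), "ERROR 2002") || time.Now().After(deadline) {
			log.Fatalf("mysql query failed: %v\n%s", err, out)
		}
		time.Sleep(2 * time.Second)
	}
}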

TestFunctional/parallel/FileSync (0.22s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/120174/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 ssh "sudo cat /etc/test/nested/copy/120174/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

TestFunctional/parallel/CertSync (1.4s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/120174.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 ssh "sudo cat /etc/ssl/certs/120174.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/120174.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 ssh "sudo cat /usr/share/ca-certificates/120174.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/1201742.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 ssh "sudo cat /etc/ssl/certs/1201742.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/1201742.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 ssh "sudo cat /usr/share/ca-certificates/1201742.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.40s)

TestFunctional/parallel/NodeLabels (0.07s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-303428 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-303428 ssh "sudo systemctl is-active docker": exit status 1 (227.360798ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-303428 ssh "sudo systemctl is-active containerd": exit status 1 (314.927332ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)
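With crio as the container runtime, docker and containerd are expected to be inactive, and "systemctl is-active" signals that with exit status 3; that is why the ssh commands above are non-zero even though stdout ("inactive") is the answer the test wants. A Go sketch that treats exit 3 as a state rather than a failure, assuming the same profile:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// runtimeActive asks the node whether a systemd unit is active.
// "systemctl is-active" prints the state and exits 0 only when active;
// an inactive unit exits 3 (seen as "Process exited with status 3" above).
func runtimeActive(unit string) (string, bool) {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-303428",
		"ssh", "sudo systemctl is-active "+unit)
	out, err := cmd.Output()
	state := strings.TrimSpace(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return state, false // non-zero exit: not active
	}
	return state, err == nil
}

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		state, active := runtimeActive(unit)
		fmt.Printf("%-11s active=%v (%s)\n", unit, active, state)
	}
}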

TestFunctional/parallel/License (0.22s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ProfileCmd/profile_list (0.31s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "262.921729ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "49.923292ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "320.642122ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "59.604018ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)
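"profile list -o json" is the machine-readable form of the command checked above. Since this log does not show the JSON itself, the Go sketch below decodes it into a generic value rather than assuming field names:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Machine-readable profile listing, same binary and flags as the test.
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}

	// Decode generically instead of guessing at minikube's JSON schema.
	var parsed any
	if err := json.Unmarshal(out, &parsed); err != nil {
		log.Fatalf("could not parse profile list output: %v\n%s", err, out)
	}
	fmt.Printf("decoded %T:\n%v\n", parsed, parsed)
}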

TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-303428 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.1
registry.k8s.io/kube-proxy:v1.30.1
registry.k8s.io/kube-controller-manager:v1.30.1
registry.k8s.io/kube-apiserver:v1.30.1
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-303428
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-303428
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-303428 image ls --format short --alsologtostderr:
I0617 10:59:30.848367  129923 out.go:291] Setting OutFile to fd 1 ...
I0617 10:59:30.848517  129923 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 10:59:30.848528  129923 out.go:304] Setting ErrFile to fd 2...
I0617 10:59:30.848534  129923 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 10:59:30.849005  129923 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
I0617 10:59:30.850067  129923 config.go:182] Loaded profile config "functional-303428": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0617 10:59:30.850222  129923 config.go:182] Loaded profile config "functional-303428": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0617 10:59:30.850571  129923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0617 10:59:30.850635  129923 main.go:141] libmachine: Launching plugin server for driver kvm2
I0617 10:59:30.865552  129923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41837
I0617 10:59:30.866035  129923 main.go:141] libmachine: () Calling .GetVersion
I0617 10:59:30.866543  129923 main.go:141] libmachine: Using API Version  1
I0617 10:59:30.866573  129923 main.go:141] libmachine: () Calling .SetConfigRaw
I0617 10:59:30.866933  129923 main.go:141] libmachine: () Calling .GetMachineName
I0617 10:59:30.867152  129923 main.go:141] libmachine: (functional-303428) Calling .GetState
I0617 10:59:30.868916  129923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0617 10:59:30.868953  129923 main.go:141] libmachine: Launching plugin server for driver kvm2
I0617 10:59:30.883006  129923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41919
I0617 10:59:30.883441  129923 main.go:141] libmachine: () Calling .GetVersion
I0617 10:59:30.883958  129923 main.go:141] libmachine: Using API Version  1
I0617 10:59:30.883987  129923 main.go:141] libmachine: () Calling .SetConfigRaw
I0617 10:59:30.884273  129923 main.go:141] libmachine: () Calling .GetMachineName
I0617 10:59:30.884466  129923 main.go:141] libmachine: (functional-303428) Calling .DriverName
I0617 10:59:30.884660  129923 ssh_runner.go:195] Run: systemctl --version
I0617 10:59:30.884683  129923 main.go:141] libmachine: (functional-303428) Calling .GetSSHHostname
I0617 10:59:30.887282  129923 main.go:141] libmachine: (functional-303428) DBG | domain functional-303428 has defined MAC address 52:54:00:b7:2d:33 in network mk-functional-303428
I0617 10:59:30.887880  129923 main.go:141] libmachine: (functional-303428) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:2d:33", ip: ""} in network mk-functional-303428: {Iface:virbr1 ExpiryTime:2024-06-17 11:56:50 +0000 UTC Type:0 Mac:52:54:00:b7:2d:33 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:functional-303428 Clientid:01:52:54:00:b7:2d:33}
I0617 10:59:30.887909  129923 main.go:141] libmachine: (functional-303428) DBG | domain functional-303428 has defined IP address 192.168.39.25 and MAC address 52:54:00:b7:2d:33 in network mk-functional-303428
I0617 10:59:30.888119  129923 main.go:141] libmachine: (functional-303428) Calling .GetSSHPort
I0617 10:59:30.888252  129923 main.go:141] libmachine: (functional-303428) Calling .GetSSHKeyPath
I0617 10:59:30.888433  129923 main.go:141] libmachine: (functional-303428) Calling .GetSSHUsername
I0617 10:59:30.888558  129923 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/functional-303428/id_rsa Username:docker}
I0617 10:59:30.965913  129923 ssh_runner.go:195] Run: sudo crictl images --output json
I0617 10:59:31.014357  129923 main.go:141] libmachine: Making call to close driver server
I0617 10:59:31.014378  129923 main.go:141] libmachine: (functional-303428) Calling .Close
I0617 10:59:31.014681  129923 main.go:141] libmachine: Successfully made call to close driver server
I0617 10:59:31.014692  129923 main.go:141] libmachine: (functional-303428) DBG | Closing plugin on server side
I0617 10:59:31.014702  129923 main.go:141] libmachine: Making call to close connection to plugin binary
I0617 10:59:31.014715  129923 main.go:141] libmachine: Making call to close driver server
I0617 10:59:31.014724  129923 main.go:141] libmachine: (functional-303428) Calling .Close
I0617 10:59:31.014959  129923 main.go:141] libmachine: Successfully made call to close driver server
I0617 10:59:31.014977  129923 main.go:141] libmachine: Making call to close connection to plugin binary
I0617 10:59:31.015009  129923 main.go:141] libmachine: (functional-303428) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-303428 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-controller-manager | v1.30.1            | 25a1387cdab82 | 112MB  |
| registry.k8s.io/kube-scheduler          | v1.30.1            | a52dc94f0a912 | 63MB   |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| gcr.io/google-containers/addon-resizer  | functional-303428  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-303428  | e4e79157bdbc6 | 3.33kB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| localhost/my-image                      | functional-303428  | a78bb1f237fe4 | 1.47MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.30.1            | 91be940803172 | 118MB  |
| registry.k8s.io/kube-proxy              | v1.30.1            | 747097150317f | 85.9MB |
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4950bb10b3f87 | 65.3MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-303428 image ls --format table --alsologtostderr:
I0617 10:59:36.319691  130173 out.go:291] Setting OutFile to fd 1 ...
I0617 10:59:36.319802  130173 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 10:59:36.319813  130173 out.go:304] Setting ErrFile to fd 2...
I0617 10:59:36.319826  130173 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 10:59:36.319998  130173 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
I0617 10:59:36.320544  130173 config.go:182] Loaded profile config "functional-303428": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0617 10:59:36.320665  130173 config.go:182] Loaded profile config "functional-303428": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0617 10:59:36.321049  130173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0617 10:59:36.321109  130173 main.go:141] libmachine: Launching plugin server for driver kvm2
I0617 10:59:36.336304  130173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46861
I0617 10:59:36.336773  130173 main.go:141] libmachine: () Calling .GetVersion
I0617 10:59:36.337366  130173 main.go:141] libmachine: Using API Version  1
I0617 10:59:36.337393  130173 main.go:141] libmachine: () Calling .SetConfigRaw
I0617 10:59:36.337762  130173 main.go:141] libmachine: () Calling .GetMachineName
I0617 10:59:36.338003  130173 main.go:141] libmachine: (functional-303428) Calling .GetState
I0617 10:59:36.339960  130173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0617 10:59:36.340013  130173 main.go:141] libmachine: Launching plugin server for driver kvm2
I0617 10:59:36.355121  130173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35707
I0617 10:59:36.355558  130173 main.go:141] libmachine: () Calling .GetVersion
I0617 10:59:36.356020  130173 main.go:141] libmachine: Using API Version  1
I0617 10:59:36.356043  130173 main.go:141] libmachine: () Calling .SetConfigRaw
I0617 10:59:36.356371  130173 main.go:141] libmachine: () Calling .GetMachineName
I0617 10:59:36.356607  130173 main.go:141] libmachine: (functional-303428) Calling .DriverName
I0617 10:59:36.356830  130173 ssh_runner.go:195] Run: systemctl --version
I0617 10:59:36.356884  130173 main.go:141] libmachine: (functional-303428) Calling .GetSSHHostname
I0617 10:59:36.359648  130173 main.go:141] libmachine: (functional-303428) DBG | domain functional-303428 has defined MAC address 52:54:00:b7:2d:33 in network mk-functional-303428
I0617 10:59:36.360034  130173 main.go:141] libmachine: (functional-303428) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:2d:33", ip: ""} in network mk-functional-303428: {Iface:virbr1 ExpiryTime:2024-06-17 11:56:50 +0000 UTC Type:0 Mac:52:54:00:b7:2d:33 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:functional-303428 Clientid:01:52:54:00:b7:2d:33}
I0617 10:59:36.360062  130173 main.go:141] libmachine: (functional-303428) DBG | domain functional-303428 has defined IP address 192.168.39.25 and MAC address 52:54:00:b7:2d:33 in network mk-functional-303428
I0617 10:59:36.360199  130173 main.go:141] libmachine: (functional-303428) Calling .GetSSHPort
I0617 10:59:36.360375  130173 main.go:141] libmachine: (functional-303428) Calling .GetSSHKeyPath
I0617 10:59:36.360529  130173 main.go:141] libmachine: (functional-303428) Calling .GetSSHUsername
I0617 10:59:36.360677  130173 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/functional-303428/id_rsa Username:docker}
I0617 10:59:36.474917  130173 ssh_runner.go:195] Run: sudo crictl images --output json
I0617 10:59:36.946602  130173 main.go:141] libmachine: Making call to close driver server
I0617 10:59:36.946621  130173 main.go:141] libmachine: (functional-303428) Calling .Close
I0617 10:59:36.946890  130173 main.go:141] libmachine: Successfully made call to close driver server
I0617 10:59:36.946910  130173 main.go:141] libmachine: Making call to close connection to plugin binary
I0617 10:59:36.946925  130173 main.go:141] libmachine: (functional-303428) DBG | Closing plugin on server side
I0617 10:59:36.946931  130173 main.go:141] libmachine: Making call to close driver server
I0617 10:59:36.946942  130173 main.go:141] libmachine: (functional-303428) Calling .Close
I0617 10:59:36.947161  130173 main.go:141] libmachine: Successfully made call to close driver server
I0617 10:59:36.947183  130173 main.go:141] libmachine: (functional-303428) DBG | Closing plugin on server side
I0617 10:59:36.947191  130173 main.go:141] libmachine: Making call to close connection to plugin binary
W0617 10:59:36.950519  130173 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 495b25ba-265c-4a5b-bb61-ca9ad5d44534
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.67s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-303428 image ls --format json --alsologtostderr:
[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"65fed7e97f9c2934519ef749a02e4581187a582c56f97fe78196e82ff1f33963","repoDigests":["docker.io/library/27c76ed0cb29c8fd6012c4ed51980d49de6259671d04413eabc5556d798d32dd-tmp@sha256:82cd4d0214b0bc956b2da081dbea5d8d787987fbea409e240929156e42b81a1b"],"repoTags":[],"size":"1466017"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-303428"],"size":"34114467"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a
944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"e4e79157bdbc6da6f0175fcc7425ea0e2460f18b43b40e630cb05e5d8d8cf48e","repoDigests":["localhost/minikube-local-cache-test@sha256:5a2ec0f37fb56b2aa7525a9e28d270ca3a40a8b51da3f99b7122abdcbaa93720"],"repoTags":["localhost/minikube-local-cache-test:functional-303428"],"size":"3330"},{"id":"a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","repoDigests":["registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036","registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.1"],"size":"63026504"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1be
a633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","repoDigests":["registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa","registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.1"],"size":"85933465"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"24707
7"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"a78bb1f237fe47d614edf0f1957dd00496d74322c8fe2fa1f49b373a8cb037c2"
,"repoDigests":["localhost/my-image@sha256:6d9b9fc7b1a7b613fd2a99e245b6ede084710014cabfa9a4684691da4d6ef593"],"repoTags":["localhost/my-image:functional-303428"],"size":"1468599"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"65291810"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io
/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["
gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea","registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.1"],"size":"117601759"},{"id":"25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52","registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.
30.1"],"size":"112170310"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-303428 image ls --format json --alsologtostderr:
I0617 10:59:36.178347  130138 out.go:291] Setting OutFile to fd 1 ...
I0617 10:59:36.178639  130138 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 10:59:36.178656  130138 out.go:304] Setting ErrFile to fd 2...
I0617 10:59:36.178663  130138 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 10:59:36.178991  130138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
I0617 10:59:36.179818  130138 config.go:182] Loaded profile config "functional-303428": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0617 10:59:36.180011  130138 config.go:182] Loaded profile config "functional-303428": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0617 10:59:36.180596  130138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0617 10:59:36.180664  130138 main.go:141] libmachine: Launching plugin server for driver kvm2
I0617 10:59:36.196885  130138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42927
I0617 10:59:36.197304  130138 main.go:141] libmachine: () Calling .GetVersion
I0617 10:59:36.197821  130138 main.go:141] libmachine: Using API Version  1
I0617 10:59:36.197842  130138 main.go:141] libmachine: () Calling .SetConfigRaw
I0617 10:59:36.198181  130138 main.go:141] libmachine: () Calling .GetMachineName
I0617 10:59:36.198380  130138 main.go:141] libmachine: (functional-303428) Calling .GetState
I0617 10:59:36.200515  130138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0617 10:59:36.200560  130138 main.go:141] libmachine: Launching plugin server for driver kvm2
I0617 10:59:36.218929  130138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45431
I0617 10:59:36.219424  130138 main.go:141] libmachine: () Calling .GetVersion
I0617 10:59:36.221475  130138 main.go:141] libmachine: Using API Version  1
I0617 10:59:36.221499  130138 main.go:141] libmachine: () Calling .SetConfigRaw
I0617 10:59:36.221841  130138 main.go:141] libmachine: () Calling .GetMachineName
I0617 10:59:36.222100  130138 main.go:141] libmachine: (functional-303428) Calling .DriverName
I0617 10:59:36.222268  130138 ssh_runner.go:195] Run: systemctl --version
I0617 10:59:36.222292  130138 main.go:141] libmachine: (functional-303428) Calling .GetSSHHostname
I0617 10:59:36.225303  130138 main.go:141] libmachine: (functional-303428) DBG | domain functional-303428 has defined MAC address 52:54:00:b7:2d:33 in network mk-functional-303428
I0617 10:59:36.225708  130138 main.go:141] libmachine: (functional-303428) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:2d:33", ip: ""} in network mk-functional-303428: {Iface:virbr1 ExpiryTime:2024-06-17 11:56:50 +0000 UTC Type:0 Mac:52:54:00:b7:2d:33 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:functional-303428 Clientid:01:52:54:00:b7:2d:33}
I0617 10:59:36.225721  130138 main.go:141] libmachine: (functional-303428) DBG | domain functional-303428 has defined IP address 192.168.39.25 and MAC address 52:54:00:b7:2d:33 in network mk-functional-303428
I0617 10:59:36.225883  130138 main.go:141] libmachine: (functional-303428) Calling .GetSSHPort
I0617 10:59:36.226060  130138 main.go:141] libmachine: (functional-303428) Calling .GetSSHKeyPath
I0617 10:59:36.226209  130138 main.go:141] libmachine: (functional-303428) Calling .GetSSHUsername
I0617 10:59:36.226334  130138 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/functional-303428/id_rsa Username:docker}
I0617 10:59:36.358683  130138 ssh_runner.go:195] Run: sudo crictl images --output json
I0617 10:59:36.943410  130138 main.go:141] libmachine: Making call to close driver server
I0617 10:59:36.943428  130138 main.go:141] libmachine: (functional-303428) Calling .Close
I0617 10:59:36.943759  130138 main.go:141] libmachine: Successfully made call to close driver server
I0617 10:59:36.943789  130138 main.go:141] libmachine: Making call to close connection to plugin binary
I0617 10:59:36.943798  130138 main.go:141] libmachine: Making call to close driver server
I0617 10:59:36.943813  130138 main.go:141] libmachine: (functional-303428) Calling .Close
I0617 10:59:36.944125  130138 main.go:141] libmachine: Successfully made call to close driver server
I0617 10:59:36.944162  130138 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.82s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-303428 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd
repoDigests:
- registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa
- registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c
repoTags:
- registry.k8s.io/kube-proxy:v1.30.1
size: "85933465"
- id: a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036
- registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.1
size: "63026504"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "65291810"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea
- registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.1
size: "117601759"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52
- registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.1
size: "112170310"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-303428
size: "34114467"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: e4e79157bdbc6da6f0175fcc7425ea0e2460f18b43b40e630cb05e5d8d8cf48e
repoDigests:
- localhost/minikube-local-cache-test@sha256:5a2ec0f37fb56b2aa7525a9e28d270ca3a40a8b51da3f99b7122abdcbaa93720
repoTags:
- localhost/minikube-local-cache-test:functional-303428
size: "3330"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-303428 image ls --format yaml --alsologtostderr:
I0617 10:59:31.066569  129957 out.go:291] Setting OutFile to fd 1 ...
I0617 10:59:31.066865  129957 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 10:59:31.066878  129957 out.go:304] Setting ErrFile to fd 2...
I0617 10:59:31.066885  129957 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 10:59:31.067227  129957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
I0617 10:59:31.068071  129957 config.go:182] Loaded profile config "functional-303428": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0617 10:59:31.068222  129957 config.go:182] Loaded profile config "functional-303428": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0617 10:59:31.068725  129957 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0617 10:59:31.068782  129957 main.go:141] libmachine: Launching plugin server for driver kvm2
I0617 10:59:31.084108  129957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34689
I0617 10:59:31.084590  129957 main.go:141] libmachine: () Calling .GetVersion
I0617 10:59:31.085213  129957 main.go:141] libmachine: Using API Version  1
I0617 10:59:31.085238  129957 main.go:141] libmachine: () Calling .SetConfigRaw
I0617 10:59:31.085631  129957 main.go:141] libmachine: () Calling .GetMachineName
I0617 10:59:31.085841  129957 main.go:141] libmachine: (functional-303428) Calling .GetState
I0617 10:59:31.087755  129957 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0617 10:59:31.087803  129957 main.go:141] libmachine: Launching plugin server for driver kvm2
I0617 10:59:31.105356  129957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44783
I0617 10:59:31.105791  129957 main.go:141] libmachine: () Calling .GetVersion
I0617 10:59:31.106264  129957 main.go:141] libmachine: Using API Version  1
I0617 10:59:31.106288  129957 main.go:141] libmachine: () Calling .SetConfigRaw
I0617 10:59:31.106592  129957 main.go:141] libmachine: () Calling .GetMachineName
I0617 10:59:31.106784  129957 main.go:141] libmachine: (functional-303428) Calling .DriverName
I0617 10:59:31.107031  129957 ssh_runner.go:195] Run: systemctl --version
I0617 10:59:31.107061  129957 main.go:141] libmachine: (functional-303428) Calling .GetSSHHostname
I0617 10:59:31.109635  129957 main.go:141] libmachine: (functional-303428) DBG | domain functional-303428 has defined MAC address 52:54:00:b7:2d:33 in network mk-functional-303428
I0617 10:59:31.110184  129957 main.go:141] libmachine: (functional-303428) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:2d:33", ip: ""} in network mk-functional-303428: {Iface:virbr1 ExpiryTime:2024-06-17 11:56:50 +0000 UTC Type:0 Mac:52:54:00:b7:2d:33 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:functional-303428 Clientid:01:52:54:00:b7:2d:33}
I0617 10:59:31.110218  129957 main.go:141] libmachine: (functional-303428) DBG | domain functional-303428 has defined IP address 192.168.39.25 and MAC address 52:54:00:b7:2d:33 in network mk-functional-303428
I0617 10:59:31.110354  129957 main.go:141] libmachine: (functional-303428) Calling .GetSSHPort
I0617 10:59:31.110520  129957 main.go:141] libmachine: (functional-303428) Calling .GetSSHKeyPath
I0617 10:59:31.110648  129957 main.go:141] libmachine: (functional-303428) Calling .GetSSHUsername
I0617 10:59:31.110798  129957 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/functional-303428/id_rsa Username:docker}
I0617 10:59:31.195056  129957 ssh_runner.go:195] Run: sudo crictl images --output json
I0617 10:59:31.235558  129957 main.go:141] libmachine: Making call to close driver server
I0617 10:59:31.235575  129957 main.go:141] libmachine: (functional-303428) Calling .Close
I0617 10:59:31.235859  129957 main.go:141] libmachine: Successfully made call to close driver server
I0617 10:59:31.235886  129957 main.go:141] libmachine: Making call to close connection to plugin binary
I0617 10:59:31.235902  129957 main.go:141] libmachine: Making call to close driver server
I0617 10:59:31.235910  129957 main.go:141] libmachine: (functional-303428) Calling .Close
I0617 10:59:31.235927  129957 main.go:141] libmachine: (functional-303428) DBG | Closing plugin on server side
I0617 10:59:31.236270  129957 main.go:141] libmachine: Successfully made call to close driver server
I0617 10:59:31.236286  129957 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.068990416s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-303428
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.09s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.49s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (19.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-303428 /tmp/TestFunctionalparallelMountCmdany-port1579580006/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1718621938550066153" to /tmp/TestFunctionalparallelMountCmdany-port1579580006/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1718621938550066153" to /tmp/TestFunctionalparallelMountCmdany-port1579580006/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1718621938550066153" to /tmp/TestFunctionalparallelMountCmdany-port1579580006/001/test-1718621938550066153
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-303428 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (332.536694ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jun 17 10:58 created-by-test
-rw-r--r-- 1 docker docker 24 Jun 17 10:58 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jun 17 10:58 test-1718621938550066153
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 ssh cat /mount-9p/test-1718621938550066153
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-303428 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [0a9f6f9b-9fda-4b60-9596-47a9e807b407] Pending
helpers_test.go:344: "busybox-mount" [0a9f6f9b-9fda-4b60-9596-47a9e807b407] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [0a9f6f9b-9fda-4b60-9596-47a9e807b407] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [0a9f6f9b-9fda-4b60-9596-47a9e807b407] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 16.008466458s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-303428 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-303428 /tmp/TestFunctionalparallelMountCmdany-port1579580006/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (19.14s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 image load --daemon gcr.io/google-containers/addon-resizer:functional-303428 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-303428 image load --daemon gcr.io/google-containers/addon-resizer:functional-303428 --alsologtostderr: (2.976144065s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.147731265s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-303428
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 image load --daemon gcr.io/google-containers/addon-resizer:functional-303428 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-303428 image load --daemon gcr.io/google-containers/addon-resizer:functional-303428 --alsologtostderr: (5.62956602s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.09s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-303428 /tmp/TestFunctionalparallelMountCmdspecific-port218652528/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-303428 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (291.48866ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-303428 /tmp/TestFunctionalparallelMountCmdspecific-port218652528/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-303428 ssh "sudo umount -f /mount-9p": exit status 1 (246.064184ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-303428 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-303428 /tmp/TestFunctionalparallelMountCmdspecific-port218652528/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 image save gcr.io/google-containers/addon-resizer:functional-303428 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-303428 image save gcr.io/google-containers/addon-resizer:functional-303428 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (4.087274038s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.09s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-303428 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1862375459/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-303428 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1862375459/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-303428 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1862375459/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-303428 ssh "findmnt -T" /mount1: exit status 1 (273.668972ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-303428 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-303428 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1862375459/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-303428 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1862375459/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-303428 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1862375459/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-303428 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-303428 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-9k58w" [b2f02071-f3af-48c2-9a19-a41d54271f2a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-9k58w" [b2f02071-f3af-48c2-9a19-a41d54271f2a] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004477631s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.39s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 image rm gcr.io/google-containers/addon-resizer:functional-303428 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (4.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-303428 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (3.812566175s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 image ls
2024/06/17 10:59:28 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (4.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-303428
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 image save --daemon gcr.io/google-containers/addon-resizer:functional-303428 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-303428 image save --daemon gcr.io/google-containers/addon-resizer:functional-303428 --alsologtostderr: (1.25955977s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-303428
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 service list
functional_test.go:1455: (dbg) Done: out/minikube-linux-amd64 -p functional-303428 service list: (1.297644547s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-linux-amd64 -p functional-303428 service list -o json: (1.275876999s)
functional_test.go:1490: Took "1.275992641s" to run "out/minikube-linux-amd64 -p functional-303428 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.25:32701
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-303428 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.25:32701
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.27s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.06s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-303428
--- PASS: TestFunctional/delete_addon-resizer_images (0.06s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-303428
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-303428
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (203.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-064080 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0617 11:01:51.170143  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
E0617 11:02:18.856710  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-064080 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m22.609288182s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (203.24s)
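
A minimal sketch of the HA bring-up this test performs, using the same flags as the logged command:

    # start a multi-control-plane (HA) cluster on the KVM driver with CRI-O
    out/minikube-linux-amd64 start -p ha-064080 --wait=true --memory=2200 --ha \
      -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
    # all control-plane nodes should report Running
    out/minikube-linux-amd64 -p ha-064080 status -v=7 --alsologtostderr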

                                                
                                    
TestMultiControlPlane/serial/DeployApp (4.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064080 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064080 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-064080 -- rollout status deployment/busybox: (2.64390121s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064080 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064080 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064080 -- exec busybox-fc5497c4f-89r9v -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064080 -- exec busybox-fc5497c4f-gf9j7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064080 -- exec busybox-fc5497c4f-wbcxx -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064080 -- exec busybox-fc5497c4f-89r9v -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064080 -- exec busybox-fc5497c4f-gf9j7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064080 -- exec busybox-fc5497c4f-wbcxx -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064080 -- exec busybox-fc5497c4f-89r9v -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064080 -- exec busybox-fc5497c4f-gf9j7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064080 -- exec busybox-fc5497c4f-wbcxx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.83s)
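
The per-pod checks above repeat one pattern; a minimal sketch for a single pod, with names taken from the log (the manifest lives in testdata/ha/ha-pod-dns-test.yaml):

    # deploy the busybox test workload and wait for the rollout
    out/minikube-linux-amd64 kubectl -p ha-064080 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    out/minikube-linux-amd64 kubectl -p ha-064080 -- rollout status deployment/busybox
    # resolve an external name, the in-cluster service name, and its FQDN from inside the pod
    out/minikube-linux-amd64 kubectl -p ha-064080 -- exec busybox-fc5497c4f-89r9v -- nslookup kubernetes.io
    out/minikube-linux-amd64 kubectl -p ha-064080 -- exec busybox-fc5497c4f-89r9v -- nslookup kubernetes.default
    out/minikube-linux-amd64 kubectl -p ha-064080 -- exec busybox-fc5497c4f-89r9v -- nslookup kubernetes.default.svc.cluster.local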

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064080 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064080 -- exec busybox-fc5497c4f-89r9v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064080 -- exec busybox-fc5497c4f-89r9v -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064080 -- exec busybox-fc5497c4f-gf9j7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064080 -- exec busybox-fc5497c4f-gf9j7 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064080 -- exec busybox-fc5497c4f-wbcxx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064080 -- exec busybox-fc5497c4f-wbcxx -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.23s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (43.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-064080 -v=7 --alsologtostderr
E0617 11:03:57.397833  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
E0617 11:03:57.403186  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
E0617 11:03:57.413466  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
E0617 11:03:57.433771  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
E0617 11:03:57.474101  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
E0617 11:03:57.554434  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
E0617 11:03:57.715393  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
E0617 11:03:58.035987  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
E0617 11:03:58.677126  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
E0617 11:03:59.958154  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
E0617 11:04:02.519172  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-064080 -v=7 --alsologtostderr: (43.134718078s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (43.96s)
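
Adding a worker to the running HA profile is a single command; a minimal sketch mirroring the logged invocation:

    # add a worker node to the existing ha-064080 profile
    out/minikube-linux-amd64 node add -p ha-064080 -v=7 --alsologtostderr
    # re-check status so the new node shows up
    out/minikube-linux-amd64 -p ha-064080 status -v=7 --alsologtostderr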

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-064080 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 cp testdata/cp-test.txt ha-064080:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 ssh -n ha-064080 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 cp ha-064080:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4010822866/001/cp-test_ha-064080.txt
E0617 11:04:07.640331  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 ssh -n ha-064080 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 cp ha-064080:/home/docker/cp-test.txt ha-064080-m02:/home/docker/cp-test_ha-064080_ha-064080-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 ssh -n ha-064080 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 ssh -n ha-064080-m02 "sudo cat /home/docker/cp-test_ha-064080_ha-064080-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 cp ha-064080:/home/docker/cp-test.txt ha-064080-m03:/home/docker/cp-test_ha-064080_ha-064080-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 ssh -n ha-064080 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 ssh -n ha-064080-m03 "sudo cat /home/docker/cp-test_ha-064080_ha-064080-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 cp ha-064080:/home/docker/cp-test.txt ha-064080-m04:/home/docker/cp-test_ha-064080_ha-064080-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 ssh -n ha-064080 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 ssh -n ha-064080-m04 "sudo cat /home/docker/cp-test_ha-064080_ha-064080-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 cp testdata/cp-test.txt ha-064080-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 ssh -n ha-064080-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 cp ha-064080-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4010822866/001/cp-test_ha-064080-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 ssh -n ha-064080-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 cp ha-064080-m02:/home/docker/cp-test.txt ha-064080:/home/docker/cp-test_ha-064080-m02_ha-064080.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 ssh -n ha-064080-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 ssh -n ha-064080 "sudo cat /home/docker/cp-test_ha-064080-m02_ha-064080.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 cp ha-064080-m02:/home/docker/cp-test.txt ha-064080-m03:/home/docker/cp-test_ha-064080-m02_ha-064080-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 ssh -n ha-064080-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 ssh -n ha-064080-m03 "sudo cat /home/docker/cp-test_ha-064080-m02_ha-064080-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 cp ha-064080-m02:/home/docker/cp-test.txt ha-064080-m04:/home/docker/cp-test_ha-064080-m02_ha-064080-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 ssh -n ha-064080-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 ssh -n ha-064080-m04 "sudo cat /home/docker/cp-test_ha-064080-m02_ha-064080-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 cp testdata/cp-test.txt ha-064080-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 ssh -n ha-064080-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 cp ha-064080-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4010822866/001/cp-test_ha-064080-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 ssh -n ha-064080-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 cp ha-064080-m03:/home/docker/cp-test.txt ha-064080:/home/docker/cp-test_ha-064080-m03_ha-064080.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 ssh -n ha-064080-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 ssh -n ha-064080 "sudo cat /home/docker/cp-test_ha-064080-m03_ha-064080.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 cp ha-064080-m03:/home/docker/cp-test.txt ha-064080-m02:/home/docker/cp-test_ha-064080-m03_ha-064080-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 ssh -n ha-064080-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 ssh -n ha-064080-m02 "sudo cat /home/docker/cp-test_ha-064080-m03_ha-064080-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 cp ha-064080-m03:/home/docker/cp-test.txt ha-064080-m04:/home/docker/cp-test_ha-064080-m03_ha-064080-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 ssh -n ha-064080-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 ssh -n ha-064080-m04 "sudo cat /home/docker/cp-test_ha-064080-m03_ha-064080-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 cp testdata/cp-test.txt ha-064080-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 ssh -n ha-064080-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 cp ha-064080-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4010822866/001/cp-test_ha-064080-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 ssh -n ha-064080-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 cp ha-064080-m04:/home/docker/cp-test.txt ha-064080:/home/docker/cp-test_ha-064080-m04_ha-064080.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 ssh -n ha-064080-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 ssh -n ha-064080 "sudo cat /home/docker/cp-test_ha-064080-m04_ha-064080.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 cp ha-064080-m04:/home/docker/cp-test.txt ha-064080-m02:/home/docker/cp-test_ha-064080-m04_ha-064080-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 ssh -n ha-064080-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 ssh -n ha-064080-m02 "sudo cat /home/docker/cp-test_ha-064080-m04_ha-064080-m02.txt"
E0617 11:04:17.881116  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 cp ha-064080-m04:/home/docker/cp-test.txt ha-064080-m03:/home/docker/cp-test_ha-064080-m04_ha-064080-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 ssh -n ha-064080-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 ssh -n ha-064080-m03 "sudo cat /home/docker/cp-test_ha-064080-m04_ha-064080-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.43s)
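
Every copy above is verified the same way; a minimal sketch of one host-to-node copy and its check, with paths taken from the log:

    # copy a file from the host into the primary control-plane node
    out/minikube-linux-amd64 -p ha-064080 cp testdata/cp-test.txt ha-064080:/home/docker/cp-test.txt
    # read it back over ssh on that node to confirm the contents arrived
    out/minikube-linux-amd64 -p ha-064080 ssh -n ha-064080 "sudo cat /home/docker/cp-test.txt"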

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0617 11:06:41.243085  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.475972116s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.40s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 node delete m03 -v=7 --alsologtostderr
E0617 11:13:57.397582  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-064080 node delete m03 -v=7 --alsologtostderr: (16.588854138s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.31s)
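
A minimal sketch of the delete-and-verify sequence run above:

    # drop the m03 control-plane node from the profile
    out/minikube-linux-amd64 -p ha-064080 node delete m03 -v=7 --alsologtostderr
    # the remaining nodes should still report Running, and kubectl should list one node fewer
    out/minikube-linux-amd64 -p ha-064080 status -v=7 --alsologtostderr
    kubectl get nodes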

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (347.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-064080 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0617 11:16:51.169570  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
E0617 11:18:57.397473  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
E0617 11:20:20.444325  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
E0617 11:21:51.169270  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-064080 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m47.153791099s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (347.89s)
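
A minimal sketch of the in-place restart exercised here, reusing the logged flags:

    # restart the whole HA profile and wait for it to settle
    out/minikube-linux-amd64 start -p ha-064080 --wait=true -v=7 --alsologtostderr \
      --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p ha-064080 status -v=7 --alsologtostderr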

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (72.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-064080 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-064080 --control-plane -v=7 --alsologtostderr: (1m11.75052328s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-064080 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (72.60s)
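
The only difference from the earlier worker add is the --control-plane flag; a minimal sketch:

    # add another control-plane node to the existing HA profile
    out/minikube-linux-amd64 node add -p ha-064080 --control-plane -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p ha-064080 status -v=7 --alsologtostderr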

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                    
TestJSONOutput/start/Command (63.37s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-342014 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0617 11:23:57.397261  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-342014 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m3.367896187s)
--- PASS: TestJSONOutput/start/Command (63.37s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-342014 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-342014 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.38s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-342014 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-342014 --output=json --user=testUser: (7.380710949s)
--- PASS: TestJSONOutput/stop/Command (7.38s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.18s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-120306 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-120306 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (59.393638ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6d643c67-8904-4f71-880e-207d5641b861","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-120306] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"01b35b7f-b388-49d1-bb63-40911c60ee7a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19084"}}
	{"specversion":"1.0","id":"95a9cf24-8482-45a6-8560-4b0ef4fa59a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"61c26c65-8157-4284-a6d4-f2c1d61d8145","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig"}}
	{"specversion":"1.0","id":"6ba028ba-168f-4418-8c84-225593806fb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube"}}
	{"specversion":"1.0","id":"5386d98b-7974-435b-8c4f-d78492ec378c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"5d540c4f-0e46-4498-895d-64e9c00a8596","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3248bea6-5a63-4cbb-8c53-ac5f4def624f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-120306" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-120306
--- PASS: TestErrorJSONOutput (0.18s)
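
Each stdout line above is a CloudEvents-style JSON object, so the error can be picked out mechanically; a minimal sketch assuming jq is available on the host (the field names match the events shown above):

    # run with an unsupported driver and keep only the error event's name, message and exit code
    out/minikube-linux-amd64 start -p json-output-error-120306 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data | "\(.name): \(.message) (exit \(.exitcode))"'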

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (84.41s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-248017 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-248017 --driver=kvm2  --container-runtime=crio: (38.818178982s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-250942 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-250942 --driver=kvm2  --container-runtime=crio: (43.021134153s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-248017
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-250942
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-250942" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-250942
helpers_test.go:175: Cleaning up "first-248017" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-248017
--- PASS: TestMinikubeProfile (84.41s)
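
A minimal sketch of the two-profile switch exercised here, with profile names from the log:

    # create two independent profiles, then flip the active profile back and forth
    out/minikube-linux-amd64 start -p first-248017 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 start -p second-250942 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 profile first-248017
    out/minikube-linux-amd64 profile list -ojson
    out/minikube-linux-amd64 profile second-250942
    out/minikube-linux-amd64 profile list -ojson
    # clean up both profiles
    out/minikube-linux-amd64 delete -p second-250942
    out/minikube-linux-amd64 delete -p first-248017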

                                                
                                    
TestMountStart/serial/StartWithMountFirst (24.23s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-367455 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-367455 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.23448734s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.23s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-367455 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-367455 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)
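
The 9p host mount checked by the two tests above can be verified by hand; a minimal sketch using the same flags as the logged start:

    # start a kubernetes-less profile with a 9p host mount on port 46464
    out/minikube-linux-amd64 start -p mount-start-1-367455 --memory=2048 --mount \
      --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 \
      --no-kubernetes --driver=kvm2 --container-runtime=crio
    # the host directory should be visible in the VM and listed as a 9p mount
    out/minikube-linux-amd64 -p mount-start-1-367455 ssh -- ls /minikube-host
    out/minikube-linux-amd64 -p mount-start-1-367455 ssh -- mount | grep 9p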

                                                
                                    
TestMountStart/serial/StartWithMountSecond (26.73s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-385422 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0617 11:26:51.169588  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-385422 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.732845077s)
--- PASS: TestMountStart/serial/StartWithMountSecond (26.73s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-385422 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-385422 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-367455 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-385422 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-385422 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-385422
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-385422: (1.276174445s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (20.73s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-385422
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-385422: (19.727151439s)
--- PASS: TestMountStart/serial/RestartStopped (20.73s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-385422 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-385422 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (99.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-353869 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0617 11:28:57.398162  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-353869 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m38.879940402s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (99.29s)
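
A minimal sketch of the two-node bring-up, reusing the logged flags:

    # start a control plane plus one worker on the KVM driver with CRI-O
    out/minikube-linux-amd64 start -p multinode-353869 --wait=true --memory=2200 --nodes=2 \
      -v=8 --alsologtostderr --driver=kvm2 --container-runtime=crio
    # both nodes should report Running
    out/minikube-linux-amd64 -p multinode-353869 status --alsologtostderr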

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-353869 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-353869 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-353869 -- rollout status deployment/busybox: (2.150010535s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-353869 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-353869 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-353869 -- exec busybox-fc5497c4f-9q9xp -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-353869 -- exec busybox-fc5497c4f-vx9cr -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-353869 -- exec busybox-fc5497c4f-9q9xp -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-353869 -- exec busybox-fc5497c4f-vx9cr -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-353869 -- exec busybox-fc5497c4f-9q9xp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-353869 -- exec busybox-fc5497c4f-vx9cr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.55s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-353869 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-353869 -- exec busybox-fc5497c4f-9q9xp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-353869 -- exec busybox-fc5497c4f-9q9xp -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-353869 -- exec busybox-fc5497c4f-vx9cr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-353869 -- exec busybox-fc5497c4f-vx9cr -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.81s)

                                                
                                    
TestMultiNode/serial/AddNode (36.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-353869 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-353869 -v 3 --alsologtostderr: (35.472100325s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (36.03s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-353869 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 cp testdata/cp-test.txt multinode-353869:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 ssh -n multinode-353869 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 cp multinode-353869:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2681374672/001/cp-test_multinode-353869.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 ssh -n multinode-353869 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 cp multinode-353869:/home/docker/cp-test.txt multinode-353869-m02:/home/docker/cp-test_multinode-353869_multinode-353869-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 ssh -n multinode-353869 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 ssh -n multinode-353869-m02 "sudo cat /home/docker/cp-test_multinode-353869_multinode-353869-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 cp multinode-353869:/home/docker/cp-test.txt multinode-353869-m03:/home/docker/cp-test_multinode-353869_multinode-353869-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 ssh -n multinode-353869 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 ssh -n multinode-353869-m03 "sudo cat /home/docker/cp-test_multinode-353869_multinode-353869-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 cp testdata/cp-test.txt multinode-353869-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 ssh -n multinode-353869-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 cp multinode-353869-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2681374672/001/cp-test_multinode-353869-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 ssh -n multinode-353869-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 cp multinode-353869-m02:/home/docker/cp-test.txt multinode-353869:/home/docker/cp-test_multinode-353869-m02_multinode-353869.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 ssh -n multinode-353869-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 ssh -n multinode-353869 "sudo cat /home/docker/cp-test_multinode-353869-m02_multinode-353869.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 cp multinode-353869-m02:/home/docker/cp-test.txt multinode-353869-m03:/home/docker/cp-test_multinode-353869-m02_multinode-353869-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 ssh -n multinode-353869-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 ssh -n multinode-353869-m03 "sudo cat /home/docker/cp-test_multinode-353869-m02_multinode-353869-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 cp testdata/cp-test.txt multinode-353869-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 ssh -n multinode-353869-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 cp multinode-353869-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2681374672/001/cp-test_multinode-353869-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 ssh -n multinode-353869-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 cp multinode-353869-m03:/home/docker/cp-test.txt multinode-353869:/home/docker/cp-test_multinode-353869-m03_multinode-353869.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 ssh -n multinode-353869-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 ssh -n multinode-353869 "sudo cat /home/docker/cp-test_multinode-353869-m03_multinode-353869.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 cp multinode-353869-m03:/home/docker/cp-test.txt multinode-353869-m02:/home/docker/cp-test_multinode-353869-m03_multinode-353869-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 ssh -n multinode-353869-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 ssh -n multinode-353869-m02 "sudo cat /home/docker/cp-test_multinode-353869-m03_multinode-353869-m02.txt"
E0617 11:29:54.218314  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
--- PASS: TestMultiNode/serial/CopyFile (7.24s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-353869 node stop m03: (1.522629522s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-353869 status: exit status 7 (415.739844ms)

                                                
                                                
-- stdout --
	multinode-353869
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-353869-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-353869-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-353869 status --alsologtostderr: exit status 7 (418.91059ms)

                                                
                                                
-- stdout --
	multinode-353869
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-353869-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-353869-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 11:29:56.383986  147917 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:29:56.384223  147917 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:29:56.384235  147917 out.go:304] Setting ErrFile to fd 2...
	I0617 11:29:56.384239  147917 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:29:56.384415  147917 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:29:56.384573  147917 out.go:298] Setting JSON to false
	I0617 11:29:56.384595  147917 mustload.go:65] Loading cluster: multinode-353869
	I0617 11:29:56.384725  147917 notify.go:220] Checking for updates...
	I0617 11:29:56.384953  147917 config.go:182] Loaded profile config "multinode-353869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:29:56.384967  147917 status.go:255] checking status of multinode-353869 ...
	I0617 11:29:56.385327  147917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:29:56.385367  147917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:29:56.400863  147917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33353
	I0617 11:29:56.401222  147917 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:29:56.401749  147917 main.go:141] libmachine: Using API Version  1
	I0617 11:29:56.401771  147917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:29:56.402130  147917 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:29:56.402362  147917 main.go:141] libmachine: (multinode-353869) Calling .GetState
	I0617 11:29:56.404005  147917 status.go:330] multinode-353869 host status = "Running" (err=<nil>)
	I0617 11:29:56.404021  147917 host.go:66] Checking if "multinode-353869" exists ...
	I0617 11:29:56.404283  147917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:29:56.404317  147917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:29:56.419027  147917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44319
	I0617 11:29:56.419421  147917 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:29:56.419868  147917 main.go:141] libmachine: Using API Version  1
	I0617 11:29:56.419894  147917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:29:56.420170  147917 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:29:56.420354  147917 main.go:141] libmachine: (multinode-353869) Calling .GetIP
	I0617 11:29:56.423002  147917 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:29:56.423360  147917 main.go:141] libmachine: (multinode-353869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:ed:f7", ip: ""} in network mk-multinode-353869: {Iface:virbr1 ExpiryTime:2024-06-17 12:27:41 +0000 UTC Type:0 Mac:52:54:00:ef:ed:f7 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-353869 Clientid:01:52:54:00:ef:ed:f7}
	I0617 11:29:56.423396  147917 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined IP address 192.168.39.17 and MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:29:56.423596  147917 host.go:66] Checking if "multinode-353869" exists ...
	I0617 11:29:56.423893  147917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:29:56.423935  147917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:29:56.438011  147917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44077
	I0617 11:29:56.438354  147917 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:29:56.438728  147917 main.go:141] libmachine: Using API Version  1
	I0617 11:29:56.438765  147917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:29:56.439078  147917 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:29:56.439252  147917 main.go:141] libmachine: (multinode-353869) Calling .DriverName
	I0617 11:29:56.439422  147917 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:29:56.439440  147917 main.go:141] libmachine: (multinode-353869) Calling .GetSSHHostname
	I0617 11:29:56.441918  147917 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:29:56.442326  147917 main.go:141] libmachine: (multinode-353869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:ed:f7", ip: ""} in network mk-multinode-353869: {Iface:virbr1 ExpiryTime:2024-06-17 12:27:41 +0000 UTC Type:0 Mac:52:54:00:ef:ed:f7 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-353869 Clientid:01:52:54:00:ef:ed:f7}
	I0617 11:29:56.442345  147917 main.go:141] libmachine: (multinode-353869) DBG | domain multinode-353869 has defined IP address 192.168.39.17 and MAC address 52:54:00:ef:ed:f7 in network mk-multinode-353869
	I0617 11:29:56.442478  147917 main.go:141] libmachine: (multinode-353869) Calling .GetSSHPort
	I0617 11:29:56.442629  147917 main.go:141] libmachine: (multinode-353869) Calling .GetSSHKeyPath
	I0617 11:29:56.442826  147917 main.go:141] libmachine: (multinode-353869) Calling .GetSSHUsername
	I0617 11:29:56.442995  147917 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/multinode-353869/id_rsa Username:docker}
	I0617 11:29:56.527137  147917 ssh_runner.go:195] Run: systemctl --version
	I0617 11:29:56.532937  147917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:29:56.546539  147917 kubeconfig.go:125] found "multinode-353869" server: "https://192.168.39.17:8443"
	I0617 11:29:56.546564  147917 api_server.go:166] Checking apiserver status ...
	I0617 11:29:56.546596  147917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 11:29:56.559272  147917 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1112/cgroup
	W0617 11:29:56.569115  147917 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1112/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0617 11:29:56.569168  147917 ssh_runner.go:195] Run: ls
	I0617 11:29:56.573990  147917 api_server.go:253] Checking apiserver healthz at https://192.168.39.17:8443/healthz ...
	I0617 11:29:56.578836  147917 api_server.go:279] https://192.168.39.17:8443/healthz returned 200:
	ok
	I0617 11:29:56.578862  147917 status.go:422] multinode-353869 apiserver status = Running (err=<nil>)
	I0617 11:29:56.578876  147917 status.go:257] multinode-353869 status: &{Name:multinode-353869 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0617 11:29:56.578912  147917 status.go:255] checking status of multinode-353869-m02 ...
	I0617 11:29:56.579334  147917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:29:56.579377  147917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:29:56.594801  147917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44669
	I0617 11:29:56.595267  147917 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:29:56.595810  147917 main.go:141] libmachine: Using API Version  1
	I0617 11:29:56.595836  147917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:29:56.596164  147917 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:29:56.596385  147917 main.go:141] libmachine: (multinode-353869-m02) Calling .GetState
	I0617 11:29:56.597938  147917 status.go:330] multinode-353869-m02 host status = "Running" (err=<nil>)
	I0617 11:29:56.597957  147917 host.go:66] Checking if "multinode-353869-m02" exists ...
	I0617 11:29:56.598362  147917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:29:56.598403  147917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:29:56.613851  147917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34727
	I0617 11:29:56.614317  147917 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:29:56.614837  147917 main.go:141] libmachine: Using API Version  1
	I0617 11:29:56.614859  147917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:29:56.615157  147917 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:29:56.615362  147917 main.go:141] libmachine: (multinode-353869-m02) Calling .GetIP
	I0617 11:29:56.617974  147917 main.go:141] libmachine: (multinode-353869-m02) DBG | domain multinode-353869-m02 has defined MAC address 52:54:00:3a:6c:48 in network mk-multinode-353869
	I0617 11:29:56.618358  147917 main.go:141] libmachine: (multinode-353869-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:6c:48", ip: ""} in network mk-multinode-353869: {Iface:virbr1 ExpiryTime:2024-06-17 12:28:41 +0000 UTC Type:0 Mac:52:54:00:3a:6c:48 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-353869-m02 Clientid:01:52:54:00:3a:6c:48}
	I0617 11:29:56.618383  147917 main.go:141] libmachine: (multinode-353869-m02) DBG | domain multinode-353869-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:3a:6c:48 in network mk-multinode-353869
	I0617 11:29:56.618507  147917 host.go:66] Checking if "multinode-353869-m02" exists ...
	I0617 11:29:56.618784  147917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:29:56.618822  147917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:29:56.634380  147917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38137
	I0617 11:29:56.634912  147917 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:29:56.635403  147917 main.go:141] libmachine: Using API Version  1
	I0617 11:29:56.635428  147917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:29:56.635857  147917 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:29:56.636103  147917 main.go:141] libmachine: (multinode-353869-m02) Calling .DriverName
	I0617 11:29:56.636306  147917 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:29:56.636332  147917 main.go:141] libmachine: (multinode-353869-m02) Calling .GetSSHHostname
	I0617 11:29:56.638853  147917 main.go:141] libmachine: (multinode-353869-m02) DBG | domain multinode-353869-m02 has defined MAC address 52:54:00:3a:6c:48 in network mk-multinode-353869
	I0617 11:29:56.639238  147917 main.go:141] libmachine: (multinode-353869-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:6c:48", ip: ""} in network mk-multinode-353869: {Iface:virbr1 ExpiryTime:2024-06-17 12:28:41 +0000 UTC Type:0 Mac:52:54:00:3a:6c:48 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-353869-m02 Clientid:01:52:54:00:3a:6c:48}
	I0617 11:29:56.639267  147917 main.go:141] libmachine: (multinode-353869-m02) DBG | domain multinode-353869-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:3a:6c:48 in network mk-multinode-353869
	I0617 11:29:56.639393  147917 main.go:141] libmachine: (multinode-353869-m02) Calling .GetSSHPort
	I0617 11:29:56.639568  147917 main.go:141] libmachine: (multinode-353869-m02) Calling .GetSSHKeyPath
	I0617 11:29:56.639719  147917 main.go:141] libmachine: (multinode-353869-m02) Calling .GetSSHUsername
	I0617 11:29:56.639872  147917 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19084-112967/.minikube/machines/multinode-353869-m02/id_rsa Username:docker}
	I0617 11:29:56.723345  147917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:29:56.737765  147917 status.go:257] multinode-353869-m02 status: &{Name:multinode-353869-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0617 11:29:56.737806  147917 status.go:255] checking status of multinode-353869-m03 ...
	I0617 11:29:56.738105  147917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0617 11:29:56.738148  147917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0617 11:29:56.756010  147917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44425
	I0617 11:29:56.756432  147917 main.go:141] libmachine: () Calling .GetVersion
	I0617 11:29:56.756899  147917 main.go:141] libmachine: Using API Version  1
	I0617 11:29:56.756923  147917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0617 11:29:56.757290  147917 main.go:141] libmachine: () Calling .GetMachineName
	I0617 11:29:56.757488  147917 main.go:141] libmachine: (multinode-353869-m03) Calling .GetState
	I0617 11:29:56.759032  147917 status.go:330] multinode-353869-m03 host status = "Stopped" (err=<nil>)
	I0617 11:29:56.759044  147917 status.go:343] host is not running, skipping remaining checks
	I0617 11:29:56.759049  147917 status.go:257] multinode-353869-m03 status: &{Name:multinode-353869-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.36s)
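
The StopNode block above shows minikube status exiting with code 7 once a node in the profile is stopped, while the per-node report still lands on stdout. Below is a minimal Go sketch of how a caller might separate that case from a real failure, reusing the binary path and profile name from this run; the snippet is illustrative and not part of the test suite.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Run "status" for the multinode profile from the log above. Exit code 7
	// means the command itself ran, but at least one node reported a
	// non-Running state (m03 is Stopped in this run).
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-353869", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	if err != nil {
		if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 7 {
			fmt.Println("status ran, but at least one node is stopped (exit 7)")
			return
		}
		fmt.Println("status failed:", err)
	}
}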

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (27.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-353869 node start m03 -v=7 --alsologtostderr: (26.546862007s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (27.17s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-353869 node delete m03: (1.630645359s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.14s)
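
DeleteNode (and RestartMultiNode below) verify the remaining nodes with a kubectl go-template that prints the status of each node's Ready condition, one per line. The following Go sketch runs the same template and counts Ready=True nodes; the counting logic is only an illustration of how that output can be consumed, not the assertion multinode_test.go actually makes.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same go-template as in the log: for every node, print the status of
	// its "Ready" condition ("True" / "False" / "Unknown"), one per line.
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	ready := 0
	for _, line := range strings.Split(string(out), "\n") {
		if strings.TrimSpace(line) == "True" {
			ready++
		}
	}
	fmt.Printf("%d node(s) report Ready=True\n", ready)
}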

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (160.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-353869 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0617 11:38:57.397217  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-353869 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m39.64160187s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-353869 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (160.17s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (45.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-353869
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-353869-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-353869-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (60.216414ms)

                                                
                                                
-- stdout --
	* [multinode-353869-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19084
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-353869-m02' is duplicated with machine name 'multinode-353869-m02' in profile 'multinode-353869'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-353869-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-353869-m03 --driver=kvm2  --container-runtime=crio: (44.619978611s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-353869
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-353869: exit status 80 (217.236159ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-353869 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-353869-m03 already exists in multinode-353869-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-353869-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.71s)

                                                
                                    
x
+
TestScheduledStopUnix (114.36s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-236838 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-236838 --memory=2048 --driver=kvm2  --container-runtime=crio: (42.81204925s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-236838 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-236838 -n scheduled-stop-236838
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-236838 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-236838 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-236838 -n scheduled-stop-236838
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-236838
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-236838 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-236838
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-236838: exit status 7 (74.605588ms)

                                                
                                                
-- stdout --
	scheduled-stop-236838
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-236838 -n scheduled-stop-236838
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-236838 -n scheduled-stop-236838: exit status 7 (63.335897ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-236838" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-236838
--- PASS: TestScheduledStopUnix (114.36s)
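
TestScheduledStopUnix arms a stop with --schedule, optionally cancels it with --cancel-scheduled, and then checks status --format={{.Host}} until the host reports Stopped (status itself exits 7 at that point, as seen above). Here is a sketch of that polling loop, reusing the profile name from this run; the two-minute deadline and five-second interval are arbitrary choices for the example.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	profile := "scheduled-stop-236838" // profile name from the run above

	// Arm a stop 15 seconds from now; the command returns immediately.
	if err := exec.Command("out/minikube-linux-amd64", "stop", "-p", profile, "--schedule", "15s").Run(); err != nil {
		fmt.Println("scheduling stop failed:", err)
		return
	}

	// Poll the host state; status exits non-zero once the VM is down, so
	// only the printed value is inspected here.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("out/minikube-linux-amd64", "status", "-p", profile, "--format={{.Host}}").Output()
		if strings.TrimSpace(string(out)) == "Stopped" {
			fmt.Println("host stopped as scheduled")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for scheduled stop")
}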

                                                
                                    
x
+
TestRunningBinaryUpgrade (215.14s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2233384213 start -p running-upgrade-869541 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0617 11:46:34.219198  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
E0617 11:46:51.169956  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2233384213 start -p running-upgrade-869541 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m3.615741219s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-869541 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-869541 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m29.939092894s)
helpers_test.go:175: Cleaning up "running-upgrade-869541" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-869541
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-869541: (1.148016145s)
--- PASS: TestRunningBinaryUpgrade (215.14s)
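
TestRunningBinaryUpgrade is two start invocations against one profile: the pinned v1.26.0 release (unpacked to a temp path for this run) creates the cluster, then the binary under test restarts the same profile in place. A sketch of that sequence follows; the temp binary path is specific to this run and the runStart helper is hypothetical.

package main

import (
	"fmt"
	"os/exec"
)

// runStart invokes one minikube binary against the shared upgrade profile
// and reports success or failure together with the combined output.
func runStart(binary string, extra ...string) error {
	args := append([]string{"start", "-p", "running-upgrade-869541", "--memory=2200"}, extra...)
	out, err := exec.Command(binary, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %v\n%s", binary, err, out)
	}
	return nil
}

func main() {
	// Step 1: the old release creates the cluster (temp path from this CI run).
	if err := runStart("/tmp/minikube-v1.26.0.2233384213", "--vm-driver=kvm2", "--container-runtime=crio"); err != nil {
		fmt.Println(err)
		return
	}
	// Step 2: the binary under test restarts the same profile, upgrading it.
	if err := runStart("out/minikube-linux-amd64", "--driver=kvm2", "--container-runtime=crio", "--alsologtostderr", "-v=1"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("running-binary upgrade completed")
}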

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-846787 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-846787 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (79.044724ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-846787] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19084
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (93.77s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-846787 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-846787 --driver=kvm2  --container-runtime=crio: (1m33.518006904s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-846787 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (93.77s)

                                                
                                    
x
+
TestPause/serial/Start (96.12s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-475894 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-475894 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m36.11673624s)
--- PASS: TestPause/serial/Start (96.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (2.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-253383 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-253383 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (100.272415ms)

                                                
                                                
-- stdout --
	* [false-253383] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19084
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 11:47:18.602527  156187 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:47:18.602773  156187 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:47:18.602784  156187 out.go:304] Setting ErrFile to fd 2...
	I0617 11:47:18.602788  156187 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:47:18.602951  156187 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-112967/.minikube/bin
	I0617 11:47:18.603498  156187 out.go:298] Setting JSON to false
	I0617 11:47:18.604331  156187 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":5386,"bootTime":1718619453,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0617 11:47:18.604390  156187 start.go:139] virtualization: kvm guest
	I0617 11:47:18.606843  156187 out.go:177] * [false-253383] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0617 11:47:18.608174  156187 out.go:177]   - MINIKUBE_LOCATION=19084
	I0617 11:47:18.608179  156187 notify.go:220] Checking for updates...
	I0617 11:47:18.609519  156187 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 11:47:18.610823  156187 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19084-112967/kubeconfig
	I0617 11:47:18.612017  156187 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-112967/.minikube
	I0617 11:47:18.613084  156187 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0617 11:47:18.614259  156187 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 11:47:18.615830  156187 config.go:182] Loaded profile config "NoKubernetes-846787": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:47:18.615996  156187 config.go:182] Loaded profile config "pause-475894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0617 11:47:18.616117  156187 config.go:182] Loaded profile config "running-upgrade-869541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0617 11:47:18.616222  156187 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 11:47:18.655036  156187 out.go:177] * Using the kvm2 driver based on user configuration
	I0617 11:47:18.656643  156187 start.go:297] selected driver: kvm2
	I0617 11:47:18.656655  156187 start.go:901] validating driver "kvm2" against <nil>
	I0617 11:47:18.656665  156187 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 11:47:18.658443  156187 out.go:177] 
	W0617 11:47:18.659595  156187 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0617 11:47:18.660658  156187 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-253383 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-253383

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-253383

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-253383

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-253383

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-253383

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-253383

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-253383

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-253383

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-253383

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-253383

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253383"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253383"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253383"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-253383

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253383"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253383"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-253383" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-253383" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-253383" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-253383" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-253383" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-253383" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-253383" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-253383" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253383"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253383"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253383"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253383"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253383"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-253383" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-253383" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-253383" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253383"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253383"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253383"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253383"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253383"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-253383

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253383"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253383"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253383"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253383"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253383"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253383"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253383"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253383"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253383"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253383"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253383"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253383"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253383"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253383"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253383"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253383"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253383"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253383"

                                                
                                                
----------------------- debugLogs end: false-253383 [took: 2.733610706s] --------------------------------
helpers_test.go:175: Cleaning up "false-253383" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-253383
--- PASS: TestNetworkPlugins/group/false (2.99s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (64.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-846787 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-846787 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m3.358480388s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-846787 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-846787 status -o json: exit status 2 (252.661308ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-846787","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-846787
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-846787: (1.050911749s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (64.66s)
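
status -o json, used twice in this block, prints a single JSON object for a one-node profile; the exit code is 2 here because kubelet and apiserver are deliberately stopped, but the JSON on stdout is still complete. The sketch below decodes that shape, with the struct fields taken from the stdout above; the profileStatus type is illustrative, not a minikube API.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileStatus mirrors the JSON printed above for a single-node profile.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// A non-zero exit still comes with usable JSON on stdout, so the output
	// is decoded before the run error is considered.
	out, runErr := exec.Command("out/minikube-linux-amd64", "-p", "NoKubernetes-846787", "status", "-o", "json").Output()

	var st profileStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("could not decode status:", err, runErr)
		return
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}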

                                                
                                    
x
+
TestNoKubernetes/serial/Start (27.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-846787 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-846787 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.661021671s)
--- PASS: TestNoKubernetes/serial/Start (27.66s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-846787 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-846787 "sudo systemctl is-active --quiet service kubelet": exit status 1 (201.257042ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
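
Note: the kubelet check above depends only on the exit code of `systemctl is-active --quiet`, which is non-zero when the unit is not active (status 3 in the stderr block). A rough Go illustration of interpreting that exit status (binary path, profile name, and command copied from the log; the wrapper itself is hypothetical):

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same probe the test runs over `minikube ssh`: the exit code of
	// `systemctl is-active --quiet ...` says whether kubelet is running.
	cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-846787",
		"sudo systemctl is-active --quiet service kubelet")
	err := cmd.Run()
	if err == nil {
		log.Fatal("kubelet is active, but this profile was started without Kubernetes")
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Non-zero exit means the unit is not active; that is the expected case here.
		fmt.Printf("kubelet not running (exit code %d)\n", exitErr.ExitCode())
		return
	}
	log.Fatalf("could not run the check at all: %v", err)
}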

                                                
                                    
TestNoKubernetes/serial/ProfileList (32.89s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (14.924448829s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (17.970030012s)
--- PASS: TestNoKubernetes/serial/ProfileList (32.89s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-846787
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-846787: (1.374221776s)
--- PASS: TestNoKubernetes/serial/Stop (1.37s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (41.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-846787 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-846787 --driver=kvm2  --container-runtime=crio: (41.395878328s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (41.40s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.51s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.51s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (124.02s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.979526056 start -p stopped-upgrade-066761 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.979526056 start -p stopped-upgrade-066761 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m15.358772447s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.979526056 -p stopped-upgrade-066761 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.979526056 -p stopped-upgrade-066761 stop: (2.148804048s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-066761 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-066761 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.507950702s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (124.02s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-846787 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-846787 "sudo systemctl is-active --quiet service kubelet": exit status 1 (203.710834ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.86s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-066761
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.86s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (81.44s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-152830 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-152830 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (1m21.444314417s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (81.44s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (67.99s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-136195 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
E0617 11:53:40.448404  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-136195 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (1m7.993561425s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (67.99s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-152830 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a6bd0294-a0c4-411a-88c9-3c6d256ceb39] Pending
helpers_test.go:344: "busybox" [a6bd0294-a0c4-411a-88c9-3c6d256ceb39] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a6bd0294-a0c4-411a-88c9-3c6d256ceb39] Running
E0617 11:53:57.397590  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.005768404s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-152830 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-152830 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-152830 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-136195 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [05a900e3-7714-4af1-ace9-eb03535da64a] Pending
helpers_test.go:344: "busybox" [05a900e3-7714-4af1-ace9-eb03535da64a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [05a900e3-7714-4af1-ace9-eb03535da64a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003678022s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-136195 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-136195 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-136195 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.89s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-991309 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-991309 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (56.887097651s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.89s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (692.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-152830 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-152830 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (11m31.851208948s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-152830 -n no-preload-152830
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (692.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (565.55s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-136195 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
E0617 11:56:51.170196  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-136195 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (9m25.286063917s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-136195 -n embed-certs-136195
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (565.55s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-991309 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [30d10d01-c1de-435f-902e-5e90c86ab3f2] Pending
helpers_test.go:344: "busybox" [30d10d01-c1de-435f-902e-5e90c86ab3f2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [30d10d01-c1de-435f-902e-5e90c86ab3f2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.005049435s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-991309 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-991309 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-991309 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (1.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-003661 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-003661 --alsologtostderr -v=3: (1.383612433s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-003661 -n old-k8s-version-003661
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-003661 -n old-k8s-version-003661: exit status 7 (62.878592ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-003661 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (435.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-991309 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
E0617 12:01:51.170072  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
E0617 12:03:14.219840  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
E0617 12:03:57.398114  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/functional-303428/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-991309 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (7m14.927900469s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-991309 -n default-k8s-diff-port-991309
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (435.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (60.6s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-335949 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-335949 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (1m0.603322002s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (60.60s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (63.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-253383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-253383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m3.375384728s)
--- PASS: TestNetworkPlugins/group/auto/Start (63.38s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (67.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-253383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-253383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m7.114165106s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (67.11s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.77s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-335949 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-335949 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.772166875s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.77s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (7.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-335949 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-335949 --alsologtostderr -v=3: (7.331984153s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.33s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-335949 -n newest-cni-335949
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-335949 -n newest-cni-335949: exit status 7 (74.578853ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-335949 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (50.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-335949 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-335949 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (50.125459398s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-335949 -n newest-cni-335949
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (50.38s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-253383 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-253383 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-gpn5t" [37795c3a-8cc2-4f2f-b11f-239617e1e788] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-gpn5t" [37795c3a-8cc2-4f2f-b11f-239617e1e788] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.005213862s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.30s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-253383 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-253383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-253383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
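
Note: the Localhost and HairPin checks above reduce to whether a TCP connection to port 8080 succeeds from inside the netcat pod (`nc -w 5 -z ...`). A small Go sketch of the same probe, assuming it runs inside such a pod (host names follow the commands in the log; this is illustrative, not the test's code):

package main

import (
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	// Equivalent of `nc -w 5 -z <host> 8080`: try to open a TCP connection
	// within a 5 second timeout and report the result.
	for _, host := range []string{"localhost", "netcat"} { // "netcat" is the pod's own service (hairpin case)
		addr := net.JoinHostPort(host, "8080")
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			log.Fatalf("connection to %s failed: %v", addr, err)
		}
		conn.Close()
		fmt.Printf("connection to %s succeeded\n", addr)
	}
}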

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-tcrvr" [2d85970b-a6c9-4b76-b0cf-8592ed4b3c1a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005282978s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (84.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-253383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-253383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m24.483215054s)
--- PASS: TestNetworkPlugins/group/calico/Start (84.48s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-335949 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.54s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-335949 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-335949 -n newest-cni-335949
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-335949 -n newest-cni-335949: exit status 2 (263.152087ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-335949 -n newest-cni-335949
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-335949 -n newest-cni-335949: exit status 2 (246.930632ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-335949 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-335949 -n newest-cni-335949
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-335949 -n newest-cni-335949
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.54s)
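
Note: the --format={{.APIServer}} and --format={{.Kubelet}} flags used above take a Go text/template expression that is evaluated against the status structure, selecting a single field ("Paused", "Stopped", ...). A minimal illustration of that template mechanism (the struct below is a stand-in with field names taken from the status output in this report, not minikube's actual type):

package main

import (
	"log"
	"os"
	"text/template"
)

// status is an illustrative stand-in for the structure the template selects from.
type status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := status{Host: "Running", Kubelet: "Stopped", APIServer: "Paused"}

	// Same template syntax as `minikube status --format={{.APIServer}}`.
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		log.Fatal(err)
	}
	// Prints: Paused
}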

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (98.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-253383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-253383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m38.237945102s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (98.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-253383 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-253383 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-4ncwk" [55e8c044-7a08-4168-922c-e68deda2f7c0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0617 12:24:12.181068  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-4ncwk" [55e8c044-7a08-4168-922c-e68deda2f7c0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003681846s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-253383 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-253383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-253383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (86.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-253383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-253383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m26.457646112s)
--- PASS: TestNetworkPlugins/group/bridge/Start (86.46s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.44s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-991309 image list --format=json
E0617 12:24:59.841463  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/client.crt: no such file or directory
E0617 12:24:59.846821  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/client.crt: no such file or directory
E0617 12:24:59.857186  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/client.crt: no such file or directory
E0617 12:24:59.877486  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/client.crt: no such file or directory
E0617 12:24:59.917798  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/client.crt: no such file or directory
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.44s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.58s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-991309 --alsologtostderr -v=1
E0617 12:24:59.998364  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/client.crt: no such file or directory
E0617 12:25:00.159275  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/client.crt: no such file or directory
E0617 12:25:00.480027  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/client.crt: no such file or directory
E0617 12:25:01.121123  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-991309 --alsologtostderr -v=1: (1.322019395s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-991309 -n default-k8s-diff-port-991309
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-991309 -n default-k8s-diff-port-991309: exit status 2 (265.915624ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-991309 -n default-k8s-diff-port-991309
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-991309 -n default-k8s-diff-port-991309: exit status 2 (280.675513ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-991309 --alsologtostderr -v=1
E0617 12:25:02.402248  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-991309 -n default-k8s-diff-port-991309
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-991309 -n default-k8s-diff-port-991309
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.58s)
E0617 12:26:21.766282  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/client.crt: no such file or directory

                                                
                                    
TestNetworkPlugins/group/flannel/Start (97.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-253383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0617 12:25:10.084621  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/client.crt: no such file or directory
E0617 12:25:13.622793  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/no-preload-152830/client.crt: no such file or directory
E0617 12:25:20.325184  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-253383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m37.336728697s)
--- PASS: TestNetworkPlugins/group/flannel/Start (97.34s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-ql9cn" [b82794f0-ca08-4b94-9f36-959f92241a45] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005765939s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-253383 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-253383 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-nr4zb" [a5cb6532-fdcb-4ecd-963d-76aa4b9f5c91] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0617 12:25:40.805438  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/old-k8s-version-003661/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-nr4zb" [a5cb6532-fdcb-4ecd-963d-76aa4b9f5c91] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005013824s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.27s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-253383 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-253383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-253383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-253383 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-253383 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-mb6kl" [38e1018e-230e-46d0-b4ab-dc306a4f2972] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-mb6kl" [38e1018e-230e-46d0-b4ab-dc306a4f2972] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.004060071s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-253383 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-253383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-253383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-253383 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (69.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-253383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-253383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m9.971376653s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (69.97s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-253383 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-hkm6l" [87147425-70dc-44aa-a0cb-6e71a362305a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-hkm6l" [87147425-70dc-44aa-a0cb-6e71a362305a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004557797s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-253383 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-253383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-253383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-h8qx9" [d8195a31-67eb-4fc6-8f96-47122baff485] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00437391s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
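The ControllerPod step waits up to 10m for the flannel DaemonSet pod in the kube-flannel namespace to become healthy. A rough hand-run equivalent using kubectl wait instead of the harness's own poller (label selector and namespace taken from the log above):

kubectl --context flannel-253383 -n kube-flannel wait pod \
  -l app=flannel --for=condition=Ready --timeout=10m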

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-253383 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-253383 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-kvdtk" [38b4e831-7d71-4197-b838-9f07015e1edf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0617 12:26:51.169549  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-kvdtk" [38b4e831-7d71-4197-b838-9f07015e1edf] Running
E0617 12:26:56.805705  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/client.crt: no such file or directory
E0617 12:26:56.810989  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/client.crt: no such file or directory
E0617 12:26:56.821316  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/client.crt: no such file or directory
E0617 12:26:56.841694  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/client.crt: no such file or directory
E0617 12:26:56.882285  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/client.crt: no such file or directory
E0617 12:26:56.962656  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/client.crt: no such file or directory
E0617 12:26:57.123071  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/client.crt: no such file or directory
E0617 12:26:57.443859  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/client.crt: no such file or directory
E0617 12:26:58.084959  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/client.crt: no such file or directory
E0617 12:26:59.365234  120174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.00406674s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.22s)
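The E0617 cert_rotation.go lines interleaved above are not emitted by the flannel test itself; they look like the shared Kubernetes client's certificate-reload watcher trying to reopen client.crt files for profiles used earlier in the run (addons-465706, default-k8s-diff-port-991309) that are no longer on disk. The paths come verbatim from those error lines, so a simple listing is enough to confirm the files are gone:

# Expected to fail with "No such file or directory" on this runner:
ls -l /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/addons-465706/client.crt
ls -l /home/jenkins/minikube-integration/19084-112967/.minikube/profiles/default-k8s-diff-port-991309/client.crt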

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-253383 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-253383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-253383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-253383 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-253383 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-gg46k" [57016ebd-e97a-4ec8-8648-5275157e1d1d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-gg46k" [57016ebd-e97a-4ec8-8648-5275157e1d1d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004059448s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.25s)
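Every NetCatPod step follows the same pattern: (re)create the netcat deployment from testdata, then wait for its pod to report Ready. By hand that is roughly the following, where the kubectl wait form is an illustrative stand-in for the harness's 15m poll loop:

kubectl --context enable-default-cni-253383 replace --force -f testdata/netcat-deployment.yaml
kubectl --context enable-default-cni-253383 wait pod -l app=netcat \
  --for=condition=Ready --timeout=15m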

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-253383 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-253383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-253383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    

Test skip (37/314)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.1/cached-images 0
15 TestDownloadOnly/v1.30.1/binaries 0
16 TestDownloadOnly/v1.30.1/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Olm 0
41 TestAddons/parallel/Volcano 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
138 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
139 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
140 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
258 TestStartStop/group/disable-driver-mounts 0.13
264 TestNetworkPlugins/group/kubenet 2.66
272 TestNetworkPlugins/group/cilium 3.85
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Volcano
addons_test.go:871: skipping: crio not supported
--- SKIP: TestAddons/parallel/Volcano (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
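All seven TunnelCmd subtests above skip for the same reason: minikube tunnel has to modify the host routing table, and on this runner 'route' cannot be executed without a password prompt. An illustrative check (not part of the suite) for whether a machine would hit the same skip:

# Succeeds silently only if sudo works without prompting for a password:
sudo -n true && echo "passwordless sudo: tunnel tests can run" || echo "password required: tunnel tests will skip"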

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.13s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-960277" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-960277
--- SKIP: TestStartStop/group/disable-driver-mounts (0.13s)

                                                
                                    
TestNetworkPlugins/group/kubenet (2.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-253383 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-253383

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-253383

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-253383

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-253383

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-253383

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-253383

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-253383

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-253383

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-253383

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-253383

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253383"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253383"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253383"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-253383

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253383"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253383"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-253383" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-253383" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-253383" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-253383" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-253383" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-253383" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-253383" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-253383" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253383"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253383"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253383"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253383"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253383"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-253383" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-253383" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-253383" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253383"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253383"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253383"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253383"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253383"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-253383

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253383"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253383"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253383"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253383"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253383"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253383"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253383"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253383"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253383"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253383"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253383"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253383"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253383"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253383"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253383"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253383"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253383"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253383"

                                                
                                                
----------------------- debugLogs end: kubenet-253383 [took: 2.535258995s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-253383" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-253383
--- SKIP: TestNetworkPlugins/group/kubenet (2.66s)
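Every probe in the kubenet debugLogs block above fails with a missing context or profile, which is expected: the test skips (crio requires a real CNI, so kubenet is not exercised) before a kubenet-253383 cluster is ever created, so there is nothing to inspect. The command the output itself suggests is enough to see which profiles actually exist on the runner:

out/minikube-linux-amd64 profile list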

                                                
                                    
TestNetworkPlugins/group/cilium (3.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-253383 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-253383

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-253383

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-253383

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-253383

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-253383

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-253383

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-253383

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-253383

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-253383

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-253383

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253383"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253383"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253383"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-253383

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253383"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253383"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-253383" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-253383" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-253383" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-253383" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-253383" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-253383" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-253383" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-253383" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253383"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253383"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253383"

>>> host: iptables-save:
* Profile "cilium-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253383"

>>> host: iptables table nat:
* Profile "cilium-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253383"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-253383

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-253383

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-253383" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-253383" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-253383

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-253383

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-253383" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-253383" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-253383" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-253383" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-253383" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253383"

>>> host: kubelet daemon config:
* Profile "cilium-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253383"

>>> k8s: kubelet logs:
* Profile "cilium-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253383"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253383"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253383"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-253383

>>> host: docker daemon status:
* Profile "cilium-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253383"

>>> host: docker daemon config:
* Profile "cilium-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253383"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253383"

>>> host: docker system info:
* Profile "cilium-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253383"

>>> host: cri-docker daemon status:
* Profile "cilium-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253383"

>>> host: cri-docker daemon config:
* Profile "cilium-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253383"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253383"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253383"

>>> host: cri-dockerd version:
* Profile "cilium-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253383"

>>> host: containerd daemon status:
* Profile "cilium-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253383"

>>> host: containerd daemon config:
* Profile "cilium-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253383"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253383"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253383"

>>> host: containerd config dump:
* Profile "cilium-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253383"

>>> host: crio daemon status:
* Profile "cilium-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253383"

>>> host: crio daemon config:
* Profile "cilium-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253383"

>>> host: /etc/crio:
* Profile "cilium-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253383"

>>> host: crio config:
* Profile "cilium-253383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253383"

----------------------- debugLogs end: cilium-253383 [took: 3.695216601s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-253383" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-253383
--- SKIP: TestNetworkPlugins/group/cilium (3.85s)
